| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
29531562 | pes2o/s2orc | v3-fos-license | Efficacy of Pharmacopuncture for Treating Children with Physical Disabilities in Uzbekistan
Objective: This research was performed to investigate the efficacy of complex rehabilitation combined with pharmacopuncture treatment for children with neuromotor system diseases. Methods: Patients aged from 5 to 15 yr were compared: twenty (20) patients received conventional treatments and complex rehabilitation as a control group, and fifty (50) patients received complex rehabilitation with pharmacopuncture. At their first visits, the patients underwent checkups and neurological scale assessments, and the neurological scales were administered again after 10 days of pharmacopuncture treatments and 55 days of rehabilitation. We studied the pre- and post-treatment effects in the treatment group. Results: The number of patients with ankle joint disorder and contracture, knee joint contracture, steppage, horsey hoof, shoulder weakness and contracture, radio-carpal joint disorder and contracture, arm hypotrophia, arm atrophia, leg hypotrophia and total atrophia decreased after treatment. Conclusion: This study showed the efficacy of pharmacopuncture combined with complex rehabilitation for the treatment of neuromotor system diseases.
Introduction
The physical disability of children is considered to be a medico-social problem for the whole of society [1][2][3][4][5][6]. Physical disabilities (PDs) are a leading cause of impaired quality of life and functioning [7]. In Canada, 6.3% of children aged 0-9 yr have some sort of disability, either physical, cognitive or both [8]. Physical disabilities may include musculoskeletal problems, neuromuscular problems or inherited problems. Such disorders can be caused by inflammation of the nervous system, poliomyelitis, defects of the spinal cord, syringomyelia, neurofibromatous outgrowth in the spinal cord, injury to the brachial plexus during birth (obstetric paralysis), encephalitis, etc. Children with PDs require early detection of the diseases, early treatment, and rehabilitation. The treatment and the rehabilitation should maximize their functioning both physically and mentally. The family members of children with PDs look for methods to treat their problems, including complementary and alternative therapies, as well as conventional treatment and rehabilitation [9].
The economic burden for families with children who have disabilities is growing. A study in South India showed that the mean expenditure of families with a severely disabled child was $254 per year, which is significantly higher than the corresponding expenditure of $181 per year for families with a normal child [10]. Also, in China, compared with normal children, the burden of raising children with disabilities was higher by 19582.4 RMB (RenMinBi, Chinese currency) per year for autism, 16410.1 RMB per year for physical disability, and 6391.0 RMB per year for mental disability [11]. In the medical literature, children with disorders of the neuromotor system in whom the mechanical changes in the muscles have not been properly studied tend to have progressive pathological processes. Initially, these children need orthopedic aids, then wheelchairs, and gradually they become permanently bedridden. This means that new methods of medical treatment need to be found and that these new methods must have widespread application in practice. Children with PDs are in need of special medical, social and educational aid. The treatment of patients with neuromotor disorders should be conducted over a long period of rehabilitation, along with the application of physico-mechanical therapy and complex orthopedic treatment, thus leading to a significant recovery. Children aged from 5 to 15 yr with neuromotor impairments are thought to be treated quite successfully during the process of rehabilitation by applying pharmacopuncture [12][13][14][15][16]. Pharmacopuncture in Uzbekistan is regarded as a new method, but it has much potential for treating many difficult and chronic neurological diseases. Thus, we show the effects of pharmacopuncture with Cerebrolysin on neuromotor diseases.
1. Patients
Fifty child patients aged from 5 to 15 yr with neuromotor system diseases visited the 'Republic Children's Rehabilitation Center with diseases of bearing movable systems' and were under its supervision. Twenty patients received conventional treatments as a control group, and fifty patients received Cerebrolysin® (Ever Neuro Pharma GmbH, Unterach am Attersee, Austria) pharmacopuncture as a treatment group. A 55-day complex rehabilitation was given to both groups. We evaluated the status of the patients twice: at the first visit and on the 55th day of complex rehabilitation. In this article, we show the results for the treatment group (n = 50).
1. Improvement of treatment group after a 55-day treatment
For scoliosis, there were 10 patients with 2nd degree scoliosis and 12 patients with 1st degree scoliosis. After treatment, there were 2 patients with 2nd degree scoliosis and 10 patients with 1st degree scoliosis; ten patients had been cured (Fig. 3). Ankle joint disorders were found in 20 patients before the treatment. After treatment, 12 patients were totally cured, and partial disorder was observed in 8 patients. Ankle joint contractures were found in 30 patients before the treatment. In 15 patients, fixation was completely successful, in 10 patients the ankle joint was partially fixed, and in 5 patients no change was seen (Fig. 4). Knee joint contracture was found in 28 patients before the treatment and was cured in 16 patients. Eight patients were treated with partial success, and in 4 patients, stable contracture was seen with no change (Fig. 4).
Figure 3. Improvement of scoliosis.
Figure 4. Symptom change between pre- and post-treatment.
Steppage was found in 22 patients before the treatment. In 6 patients, horsey hoof was cured, in 11 patients, partial recovery was observed, and in 9 patients, the disease remained stable (Fig. 4). Shoulder joint disorder or weakness was found in 10 patients before the treatment. Three patients were cured, 5 patients were partially cured, and 2 patients showed no change. Shoulder joint contracture was found in 28 patients before the treatment. In 15 patients, it was fully cured, in 10, it was partially cured, and 3 showed no change (Fig. 4). Radiocarpal joint disorder was found in 22 patients before the treatment, and it was cured in 8 patients. Eleven patients recovered partially, and 3 showed no change. Radiocarpal joint contracture was found in 12 patients before the treatment. In 8 patients, it was cured, in 3, it was partially cured, and in 1, it showed no change (Fig. 4). Hypotrophia of the arms was found in 32 patients before the treatment, atrophia in 6, leg/foot hypotrophia in 22, and total atrophia in 6 (Fig. 4).
Discussion
Pharmacopuncture is a new method for doctors in other countries. Pharmacopuncture is regarded as a unique method that uses acupuncture points and herbal medicines, and it can be used to treat all kinds of diseases, such as internal medicine problems, gynecological problems, otolaryngological problems, neuropsychiatric problems, neuromotor problems and musculoskeletal problems. In Uzbekistan, many doctors have been using Cerebrolysin via intravenous and intramuscular injection. Also, some doctors have been using acupuncture with their patients. In our rehabilitation center, we developed a new method, pharmacopuncture with Cerebrolysin. In the past in Uzbekistan, we used herbal medicine to treat patients, but nowadays few doctors use herbal medicine. We have heard that a few doctors have used pharmacopuncture with herbal medicine, but there are no articles or research on this topic. We use injection materials made by pharmaceutical companies. We applied pharmacopuncture with Cerebrolysin to children with neuromotor impairments [16][17][18]. The effect of pharmacopuncture with Cerebrolysin was regarded as large compared to the effect of Cerebrolysin injected intravenously. The problem with intravenous Cerebrolysin was the expense: when we used Cerebrolysin injected intravenously, the cost of treatment was large. However, in this study, although we used just a small amount of Cerebrolysin, a 'nano-dose', its effect was very good; thus, we can say that this new method is very economical. Pharmacopuncture with Cerebrolysin has several advantages. It is much less painful. If we had used another type of injection material, it could have caused pain in the children. Also, Cerebrolysin has many microelements, neuropeptides, and proteins that are very similar to human body fluids and that can promote the proliferation of neuron cells in parts of dendrites, axons and neuroglia. We hypothesize that Cerebrolysin has a fast transdermal conduction effect, can get into the skin and can arrive at the target organ quickly. We can make a few recommendations on using pharmacopuncture with Cerebrolysin. This method is a new method using acupuncture points and a Western-style drug, so it is a combined method using Oriental medicine and Western medicine. Thus, we recommend that the pharmaceutical company include the administration method for pharmacopuncture in the instructions for products like Cerebrolysin. For neurological doctors, we suggest that this new method can produce very good effects for patients with neuromotor impairments, so doctors should use this method more actively. However, this study has several limitations. We did not survey the morbidity time, and we did not classify the patients according to their lesions. Until now, this new method had not been applied to patients with neuromotor impairments. Thus, as a basic step, we hope this study will stimulate more research in this neurological area.
Conclusion
Application of complex conventional medical rehabilitation methods in the treatment of children with diseases of the neuromotor system has not been efficient, but the application of medical rehabilitation in combination with pharmacopuncture has shown significantly better clinical and economic results in the treatment of children with neuromotor system diseases. | 2016-05-12T22:15:10.714Z | 2013-06-01T00:00:00.000 | {
"year": 2013,
"sha1": "f833c1811f616a29d499cb11a5cea4811d8a9ab4",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3831/kpi.2013.16.009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f181db7bad8e29b550924281568b4c51400a1ebd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56382562 | pes2o/s2orc | v3-fos-license | NUTRITIVE VALUE OF FLAKES PRODUCTS WITH SUNFLOWER
This paper investigates the influence of sunflower addition (3, 6 or 9 g/100 g of sample) on the essential amino acid pattern of flakes products. The nutritive value of proteins in flakes products is expressed by the amino acid score (AAS) and the protein digestibility-corrected amino acid score (PDCAAS). The results obtained indicate that the AAS and PDCAAS values in flakes products increased with an increase in the sunflower share. Sunflower in flakes products positively contributed to the nutritive value of proteins. Flakes products with 9 g/100 g of sunflower are particularly suitable for adults, with an AAS of 0.60±0.06 and a PDCAAS of 51.3±0.3.
INTRODUCTION
Cereals constitute the staple food of humans across the globe. In many countries, they are the mainstay of life and form the single largest component of people's daily diet. Ready-to-eat (RTE) cereals are processed grain formulations suitable for human consumption without further processing or cooking. Extrusion technology is of enormous importance to food processing and the production of breakfast cereals, and it greatly affects the properties of corn flakes (Sumithra and Bhattacharya 2008; Kannadhason et al. 2009; Košutić et al. 2016a). Extrusion technology facilitates using different ingredients for the enrichment of cereal-based flakes or snack products (Filipović et al. 2010; Nor et al. 2013). The dietary requirements for protein and amino acids in food are set according to the age of consumers. Food consumption is primarily determined by energy expenditure, i.e. as a function of the basal metabolic rate and physical activity level. Energy food requirements change not only with age, sex and body mass, but also with the physical activity associated with lifestyle. Protein requirements are independent of the body mass, sex (in adult life) and age (WHO 2002). The essential amino acids are as follows: leucine, isoleucine, valine, lysine, threonine, tryptophan, methionine, phenylalanine and histidine. The following amino acids are semi-essential: cysteine, tyrosine, proline, glycine, arginine, glutamine and taurine (under certain physiological and pathological conditions of the organism they can become essential amino acids). The non-essential amino acids provide a source of nitrogen for establishing the nitrogen balance and include the following amino acids: alanine, serine and asparagine (WHO 2002). The amino acid score is the ratio of the amino acid content in the sample protein to the content of the same amino acid in the requirement pattern. It determines the effectiveness with which the absorbed nitrogen can meet the indispensable amino acid requirements at the safe level of protein intake. This is achieved by a comparison of the content of the limiting amino acid in the protein or diet with its content in the requirement pattern appropriate for age (WHO 2002). The protein quality evaluation based on the protein digestibility-corrected amino acid score (PDCAAS) is relatively new, and it is calculated on the basis of the protein digestibility and the amino acid score (WHO 2002). Corn flakes are possibly the most common form of breakfast cereals. Flakes formulation with sunflower can improve the nutritive properties of products (Košutić et al. 2016b; Gawlik-Dziki et al. 2012; Jozinović et al. 2016). The purpose of this paper is to examine the influence of sunflower addition (3, 6 or 9 g/100 g of sample) to flakes products on the limiting amino acid pattern and the protein requirements of different consumer age groups.
MATERIAL AND METHOD
Corn flour and the sunflower cultivar 'Cepko' were obtained in 2015 from local producers (the mill Žitoprodukt d.o.o. Bačka Palanka and Vitastil Erdevik, respectively). Sunflower was dehulled and milled using a hammer mill at 2300 rev/min with a 2.5 mm sieve. Flakes were processed under industrial conditions by extrusion in a twin-screw extruder (Yuninan Daily Extrusion, Yunnan, Republic of China). The process of manufacturing flakes products is shown in Figure 1. A total of four formulations of corn flakes with different quantities of sunflower flour were tested, including the following samples: CF 1 (the control sample), CF 2 (97 g/100 g corn flour and 3 g/100 g sunflower flour), CF 3 (94 g/100 g corn flour and 6 g/100 g sunflower flour) and CF 4 (91 g/100 g corn flour and 9 g/100 g sunflower flour).
Amino acids
The samples were prepared for analysis using 24-h hydrolysis with 6 N HCl and then analysed on an Agilent 7890A GC system with a flame ionization detector (FID), equipped with an automatic sampler (autosampler) and a silica capillary column (SP-2560, 100 m × 0.25 mm ID, 0.20 μm). Amino acid peaks were identified by comparing the retention time of the individual amino acids in the sample with the retention times of the Amino Acid Standard (Sigma-Aldrich, EC), as well as the internal library data (Košutić 2016). The results are expressed in percentages as a proportion of the individual amino acid in the total amino acids (Košutić 2016).
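Expressing each amino acid as a proportion of the total, as described above, is a simple normalization of the detected amounts. The sketch below illustrates that step only; the amino acid names included and the peak-area values are hypothetical and are not taken from the study.

```python
# Hypothetical peak areas (arbitrary units) from a chromatogram; values are illustrative only.
peak_areas = {"lysine": 1.8, "threonine": 2.1, "leucine": 7.4, "methionine": 0.9}

total = sum(peak_areas.values())

# Express each amino acid as a percentage of the total amino acids detected.
composition = {aa: 100.0 * area / total for aa, area in peak_areas.items()}

for aa, pct in sorted(composition.items()):
    print(f"{aa}: {pct:.1f}% of total amino acids")
```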
Protein quality Evaluation
The nutritive value of proteins is expressed by the following indicators: the amino acid score and the PDCAAS. According to WHO (2002), threonine, the sulphur-containing amino acids and lysine were taken into consideration as limiting amino acids.
Amino acid score = mg of amino acid in 1 g test protein / mg of amino acid in the requirement pattern.
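Both indicators can be computed directly from that definition, with the PDCAAS obtained by multiplying the amino acid score by the true protein digestibility. The following sketch assumes illustrative values for the lysine content, the requirement pattern and the digestibility; none of these numbers come from the paper, and the requirement figure should be replaced with the WHO pattern appropriate for the target age group.

```python
def amino_acid_score(mg_in_test_protein: float, mg_in_requirement_pattern: float) -> float:
    """AAS = mg of the limiting amino acid in 1 g of test protein /
             mg of the same amino acid in 1 g of the requirement pattern."""
    return mg_in_test_protein / mg_in_requirement_pattern

def pdcaas(aas: float, digestibility: float) -> float:
    """PDCAAS = amino acid score x true protein digestibility (fraction between 0 and 1)."""
    return aas * digestibility

# Illustrative numbers only (mg amino acid per g protein; digestibility as a fraction).
lysine_in_sample = 27.0      # hypothetical lysine content of the flakes protein
lysine_requirement = 45.0    # hypothetical requirement-pattern value for the chosen age group
digestibility = 0.85         # hypothetical true digestibility of the product

aas = amino_acid_score(lysine_in_sample, lysine_requirement)
print(f"AAS (lysine): {aas:.2f}")
print(f"PDCAAS: {pdcaas(aas, digestibility) * 100:.1f}")  # reported on a 0-100 scale
```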
Statistical analyses
Descriptive statistics for all the amino acid scores obtained are expressed as the mean ± standard deviation (SD). One-way ANOVA analyses of the results obtained were performed using the StatSoft Statistica 10.0® software. The collected data were subjected to a one-way analysis of variance (ANOVA) for the purpose of comparing the means obtained, and significant differences were calculated according to the post-hoc Tukey's HSD (honestly significant differences) test at the p<0.05 significance level (95% confidence).
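The same one-way ANOVA followed by Tukey's HSD comparison at p<0.05 can also be reproduced with standard Python statistics libraries rather than Statistica. The sketch below uses made-up replicate values for the four formulations purely to illustrate the workflow; it is not the study's data or software.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical lysine-score replicates for the four formulations (illustrative values only).
scores = {
    "CF1": [0.50, 0.51, 0.49],
    "CF2": [0.52, 0.53, 0.52],
    "CF3": [0.55, 0.56, 0.54],
    "CF4": [0.59, 0.60, 0.61],
}

# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD test at the 0.05 significance level (95% confidence).
values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```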
RESULTS AND DISCUSSION
Protein quality evaluation aims to determine the capacity of food protein sources and diets to satisfy the metabolic demand for amino acids and nitrogen (WHO 2002). Therefore, any measure of the overall quality of dietary protein, if correctly determined, should predict the overall efficiency of protein utilization. Safe or recommended intakes can then be adjusted according to the quality measure for meeting the demands. The essential, semi-essential and non-essential amino acids of flakes products (CF1-CF4) with sunflower are presented in Figure 2 (the composition of individual amino acids in flakes products was reported by Košutić 2016). The addition of sunflower (3, 6 or 9 g/100 g) to corn flakes (CF2, CF3, CF4) contributes to an increase in the content of semi-essential and non-essential amino acids in comparison with the sample without the sunflower addition (CF1) (Figure 2). The amino acids of sunflower contributed to an insignificant change in the essential amino acids of corn flakes (CF2, CF3, CF4). The nutritive value of flakes protein, expressed as the amino acid score, is presented in Table 1. According to WHO (2002), the following amino acids are defined as deficient in protein: lysine, sulfur amino acids and threonine.
Fig. 2. Amino acids of flakes products
The amino acid score is a useful tool for determining the effectiveness by which the absorbed dietary nitrogen meets the essential amino acid requirements at the safe level of protein intake.
WHO/FAO/UNU (2002) marked lysine, sulfur amino acids and threonine as the most deficient amino acids in food proteins. The research results show that lysine is the limiting amino acid in all flakes products (CF1-CF4). All the values are less than 1, and the addition of sunflower increased the score. The ANOVA test showed statistically significant differences (p<0.05, 95% confidence interval) in the score values for lysine in the samples with 9 g/100 g of sunflower (CF 4) compared to the samples CF1, CF2 and CF3. The results obtained indicate the highest scores for lysine (0.60±0.01), sulfur amino acids (1.36±0.05) and threonine (2.26±0.13) in the corn flakes with 9 g/100 g of sunflower. According to the FAO/WHO/UNU (2002) data on the metabolic needs for protein and amino acids, the flakes AAS and PDCAAS values for four age groups of consumers are shown in Table 2. The changes in the AAS and PDCAAS within the same age group depend on the share of sunflower. The need for the limiting amino acid lysine is larger at an earlier age of life, and the PDCAAS values are statistically significantly lower than those required for adults (over 18 years). The ANOVA test showed statistically significant differences (p<0.05, 95% confidence interval) in the PDCAAS between the sample values with 6 and 9 g/100 g of sunflower (CF3, CF4) compared to 0 and 3 g/100 g of sunflower (CF1, CF2) for the age groups 4-18 years and >18 years. Sunflower addition positively affected the PDCAAS, which resulted in higher values, particularly for people over 18 years. Adding sunflower resulted in a statistically insignificant increase in the amino acid score in all the corn flakes analyzed. The results obtained show that the proteins of flakes products with added sunflower are better suited to adults' needs (Table 3).
CONCLUSION
The results obtained indicate that the AAS and PDCAAS values in flakes products increased with an increase in the share of sunflower. The best AAS (lysine 0.58±0.01, sulfur amino acids 1.36±0.05, threonine 2.26±0.13) and PDCAAS for the nutritive attributes of corn flakes proteins were recorded with 9 g/100 g of sunflower.
Sunflower in flakes products positively contributed to the protein nutritive value. Flakes products are new products with an enhanced essential amino acid pattern, exhibiting functional properties due to the added sunflower. Flakes products with sunflower are particularly suitable for the diet of adults.
Fig. 1. Process of manufacturing flakes products
Table 1.
Nutritive value of protein in flakes products
Table 2.
AAS and PDCAAS for different age groups of consumers. The results are presented as the mean±SD; different letters within the same column indicate significant differences in the mean values (p<0.05) according to Tukey's HSD test. | 2018-12-17T10:56:29.265Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "b2a6ce157b0acf470ff23cf8531bde9d67941039",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/1821-4487/2017/1821-44871704204F.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b2a6ce157b0acf470ff23cf8531bde9d67941039",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
12196326 | pes2o/s2orc | v3-fos-license | Fep1, an iron sensor regulating iron transporter gene expression in Schizosaccharomyces pombe.
Schizosaccharomyces pombe cells acquire iron under high affinity conditions through the action of a cell surface ferric reductase encoded by the frp1(+) gene and a two-component iron-transporting complex encoded by the fip1(+) and fio1(+) genes. When cells are grown in the presence of iron, transcription of all three genes is blocked. A conserved regulatory element, 5'-(A/T)GATAA-3', located upstream of the frp1(+), fip1(+), and fio1(+) genes, is necessary for iron repression. We have cloned a novel gene, termed fep1(+), which encodes an iron-sensing transcription factor. Binding studies reveal that the putative DNA binding domain of Fep1 expressed as a fusion protein in Escherichia coli specifically interacts with the 5'-(A/T)GATAA-3' sequence in an iron-dependent manner. In a fep1 Delta mutant strain, the fio1(+) gene is highly expressed and is unregulated by iron. Furthermore, the fep1 Delta mutation increases activity of the cell surface iron reductase and renders cells hypersensitive to the iron-dependent free radical generator phleomycin. Mutations in the transcriptional co-repressors tup11(+) and tup12(+) are phenocopies to fep1(+). Indeed, strains with both tup11 Delta and tup12 Delta deletions fail to sense iron. This suggests that in the presence of iron and Fep1, the Tup11 and Tup12 proteins may act as co-repressors for down-regulation of genes encoding components of the reductive iron transport machinery.
Iron is an essential trace element (1,2). Because of its ability to undergo electronic changes by adopting both the reduced (Fe2+) and oxidized (Fe3+) forms, iron serves as catalytic cofactor for a wide variety of indispensable enzymes (3,4). Paradoxically, when present in excess, iron ions can have detrimental effects by reacting with reactive oxygen species such as hydrogen peroxide or dioxygen to produce free radicals that damage DNA, proteins, and membrane lipids (5). Therefore, cells possess specialized biochemical pathways that maintain the delicate balance between essential and toxic iron levels by controlling uptake and distribution (6).
Although iron is abundant in nature, its bioavailability is limited (7). In the presence of atmospheric oxygen, iron is oxidized to insoluble ferric hydroxides (8). Many organisms have developed different iron-scavenging systems for solubilizing iron and transporting it into cells, including cell surface reduction to soluble ferrous species, utilization of heme, and synthesis of siderophores, which are low molecular weight ironspecific chelators (9,10). Production and secretion of siderophores is a commonly used mechanism in aerobic bacteria and fungi, except budding and fission yeasts (11). Although these two yeasts lack the ability to synthesize siderophores, they can utilize siderophores produced by other microbes (12). In some fungi including Ustilago maydis (13), Neurospora crassa (14), Penicillium chrysogenum (15), and Aspergillus nidulans (16), when iron is in excess, siderophore synthesis is negatively regulated at the transcriptional level by a repressor. The promoter element necessary for DNA binding of the repressor contains the nucleotides 5Ј-GATAA-3Ј (17). Indeed, it was determined that this short sequence bears a strong sequence similarity to the recognition site, named GATA element, which is recognized by a family of regulatory proteins termed GATA binding transcription factors (13,18).
The use of bakers' yeast Saccharomyces cerevisiae as a model organism has led to the identification of critical components of the iron transport pathway (4,19,35). For high affinity iron uptake into cells, Fe 3ϩ is reduced to Fe 2ϩ by the Fre1 and Fre2 cell surface reductases (20 -23). After reduction, Fe 2ϩ ions are specifically transported across the plasma membrane by the Ftr1-Fet3 permease-oxidase complex (24). Within this complex, Fet3 can re-oxidize Fe 2ϩ to Fe 3ϩ in a copper-dependent oxidation reaction (25), allowing the passage of Fe 3ϩ ions across the membrane in concert with Ftr1. In the absence of Fet3 activity, Fet4 can transport reduced iron across the plasma membrane with low affinity (26,27). Alternative pathways for iron uptake in S. cerevisiae have been identified (28). For example, iron bound to siderophores can be taken up by cells through either components of the reductive iron uptake system or cell surface transporters of the ARN family (29). Furthermore, the cell wall mannoproteins Fit1, Fit2, and Fit3 have been found to mediate transport of iron (30). When cells are grown under iron starvation conditions, the expression of all yeast genes except FET4 encoding the above-mentioned components of the reductive and non-reductive iron uptake systems is up-regulated via the transcription factor Aft1 (30). Aft1 binds to the promoters of these genes in the absence of iron by interacting with the consensus cis-acting element, 5Ј-(T/C)(G/A)CACCC(A/G)-3Ј (31,32,34). When cells are grown under elevated iron concentrations, the Aft1 protein localizes to the cytoplasm (33), suggesting that Aft1 activity is modulated by its localization. Recently, studies in S. cerevisiae revealed a second iron sensor, Aft2 (36,37), with extended homology (residues 38 -285; 39%) to Aft1. Although Aft2 appears to control expression of genes involved in iron metabolism, a distinct regulatory function for Aft2 from that mediated by Aft1 has not been identified.
In Schizosaccharomyces pombe, studies have shown that Fe 3ϩ is reduced to Fe 2ϩ by the Frp1 cell surface reductase (38). Once reduced, Fe 2ϩ is transported across the plasma membrane via a permease-oxidase complex called Fip1-Fio1, orthologs of the Ftr1-Fet3 complex in S. cerevisiae (39). Although Fio1 is similar to Fet3, by itself Fio1 cannot complement the iron starvation defects of an S. cerevisiae fet3⌬ mutant strain, indicating that some molecular differences exist for high affinity iron uptake between the two species of yeast (39). It has been shown that the frp1 ϩ , fip1 ϩ , and fio1 ϩ genes are transcriptionally repressed under iron-replete conditions (38 -40). Interestingly, no sequence identity has been observed between the fio1 ϩ and FET3 promoter sequences or with any other 5Ј regions of iron-responsive genes from S. cerevisiae. Furthermore, BLAST searches for Aft1 or Aft2 homologs in the S. pombe genome data base (41) have revealed no S. pombe proteins with significant identity. Based on this observation, we sought to determine a consensus DNA sequence requirement for the putative S. pombe iron-sensing protein and, subsequently, to identify the fission yeast iron metalloregulatory protein that regulates the iron transporter gene expression.
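Homology searches like the Aft1/Aft2 query described above are typically run with BLAST against the S. pombe protein set. The fragment below is a generic, hedged sketch using Biopython's NCBI web-BLAST interface; the query sequence is a truncated placeholder rather than the real Aft1 sequence, network access is required, and in practice one might instead run a local BLAST against the PomBase proteome.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder query; substitute the full S. cerevisiae Aft1 protein sequence.
query_protein = "MEGFNPAS"  # hypothetical fragment, not real data

# blastp against the non-redundant protein database, restricted to S. pombe records.
result_handle = NCBIWWW.qblast(
    "blastp", "nr", query_protein,
    entrez_query="Schizosaccharomyces pombe[Organism]",
)

record = NCBIXML.read(result_handle)
for alignment in record.alignments:
    for hsp in alignment.hsps:
        print(alignment.title, f"E-value: {hsp.expect:.2e}")
```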
In this study, we demonstrate that iron-mediated repression of the reductive iron transporter gene fio1 ϩ in S. pombe requires the promoter cis-acting element, 5Ј-(A/T)GATAA-3Ј. Furthermore, we find that the S. pombe Fep1 protein can sense and translate iron concentration changes to the iron transport machinery because of its ability to interact directly in an irondependent manner with the 5Ј-(A/T)GATAA-3Ј element found in the fio1 ϩ promoter region, which gives a marked repression of the fio1 ϩ gene expression. Moreover, we have also identified two proteins, Tup11 and Tup12, which act as putative corepressors for iron repression of the fio1 ϩ gene expression. Taken together, these results reveal the identity of cis-and trans-acting elements for molecular control of critical genes encoding components of the reductive iron uptake machinery in fission yeast.
Plasmids and Site-directed Mutagenesis-The plasmid pSP1fio1 ϩ -1155lacZ contains the fio1 ϩ promoter region up to Ϫ1155 from the start codon of the fio1 ϩ gene in addition to the E. coli lacZ gene. This latter plasmid was constructed via three-piece ligation by simultaneously introducing the EcoRI-StuI fragment of YEp357R (46) and the BamHI-EcoRI fragment from the fio1 ϩ promoter containing 1155 bp of the 5Ј-noncoding region and the first 13 codons of the fio1 ϩ gene into the BamHI-SmaI cut pSP1 vector (47). Four plasmids (pSP1fio1 ϩ -884lacZ, pSP1fio1 ϩ -793lacZ, pSP1fio1 ϩ -761lacZ, and pSP1fio1 ϩ -680lacZ) harboring sequential deletions from the 5Ј end of the fio1 ϩ promoter were created from plasmid pSP1fio1 ϩ -1155lacZ using the ExoIII/mung bean nuclease method as described previously (48). The plasmid pSK-fio1 ϩ 297 containing nucleotides from position Ϫ922 to position Ϫ625 with respect to the A of the ATG codon of the fio1 ϩ ORF was created to introduce mutations in either or both GATA elements (positions Ϫ800 to Ϫ795; positions Ϫ777 to Ϫ772) by site-directed mutagenesis. Precisely, the oligonucleotides 5Ј-Ϫ779 CCAATCTGGACAAAAGGGCGTC-GATGTAATCCAGATGCCTGGAAG Ϫ823 -3Ј, 5Ј-Ϫ756 CACTTTGATCG-GTTGCGACAGGACCAATCTGGACAAAAGTTATCAGATG Ϫ804 -3Ј, and 5Ј-Ϫ756 CACTTTGATCGGTTGCGACAGGACCAATCTGGACAA-AAGGGCGTCGATGTAATCCAGATG Ϫ815 -3Ј (letters that are underlined represent multiple point mutations in the GATA elements) were used in conjunction with pSKfio1 ϩ 297 and the Chameleon mutagenesis kit (Stratagene, La Jolla, CA). The DNA sequence for each construct created was verified by dideoxy sequencing, and the fio1 ϩ promoter fragment was inserted into the XhoI and SmaI sites of pCF83 (49) for analyzing heterologous reporter gene expression.
Disruption of the S. pombe fep1 ϩ Gene-A functional ura4 ϩ cassette was isolated from pUR18 (50) by PCR. The primers were designed to create ClaI and NdeI sites at the beginning and the end of the ura4 ϩ genetic marker, respectively. After digestion at these sites, the ura4 ϩ fragment was inserted to replace two-thirds of the fep1 ϩ ORF, leaving 477 and 200 bp each side of the fep1 ϩ locus for homologous recombination, creating pfep1⌬::ura4 ϩ . The gene disruption fragment (5Ј-fep1-ura4 ϩ -fep1-3Ј) was generated by restriction endonuclease digestion using unique flanking sites (BamHI and Asp718) and then transformed into the appropriate S. pombe strains by electroporation (51). The allele status of the locus in all strains generated was verified using Southern blotting and diagnostic PCR. Conveniently, this disruption rendered the mutant strain unable to grow aerobically on medium containing 10 g/ml phleomycin (Sigma), an antibiotic that confers iron-dependent toxicity. The phleomycin-sensitive growth phenotype, because of the inactivation of fep1 ϩ , was remediated by integration of the wild type fep1 ϩ gene to the leu1 locus in fep1⌬ strain cells. The plasmid for integration was constructed by insertion of a 3.2-kb SacII-XhoI genomic fragment encompassing the fep1 ϩ gene, which was cloned into plasmid pJK148 (52) before transformation into cells for homologous recombination.
RNA Analysis Methods-For RNase protection analyses (53), three plasmids for making antisense RNA probes were utilized. The plasmids pKSlacZ and pSKact1+ used were described previously (40,48,54). The plasmid pSKfio1+ was constructed by inserting a 218-bp BamHI-EcoRI fragment of the fio1+ gene into the same sites of pBluescript II SK. The antisense RNA hybridizes to the region between +91 and +309 downstream from the initiator codon of fio1+. For Northern blot analyses, the fep1+ gene was isolated by PCR using primers that corresponded to the start and stop codons of the ORF. This PCR product was purified using the GFX gel band purification kit (Amersham Biosciences). A 32P-labeled probe was made from the DNA fragment using the Random primed labeling kit (Roche Molecular Biochemicals) and purified using the Quick spin probe purification column system (Roche Molecular Biochemicals). Hybridization was carried out according to the Schleicher & Schuell protocol. The S. pombe act1+ probe (40) was used as an internal control for normalization during quantitation.
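Quantitation in these RNase protection and Northern experiments normalizes the fio1+ signal to the act1+ loading control before fold changes are computed. A minimal sketch of that arithmetic follows; the signal intensities are invented purely to illustrate the calculation and are not measurements from the study.

```python
# Hypothetical band/probe intensities (arbitrary units); not actual data from the study.
samples = {
    "wild_type":  {"fio1": 1200.0,  "act1": 6000.0},
    "fep1_delta": {"fio1": 21000.0, "act1": 5800.0},
}

# Normalize each fio1+ signal to its act1+ internal control.
normalized = {name: s["fio1"] / s["act1"] for name, s in samples.items()}

# Fold change of the mutant relative to the wild-type basal level.
fold_change = normalized["fep1_delta"] / normalized["wild_type"]
print(f"fio1+ derepression in the fep1-delta strain: {fold_change:.1f}-fold")
```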
Expression of the MBP-Fep1 Fusion Protein-The DNA containing the amino-terminal 241 codons of Fep1 was fused in-frame to the maltose-binding protein. To generate this fusion, the fep1 ϩ gene starting at ϩ4 after the start codon up to ϩ723 was amplified using Pfu Turbo polymerase (Stratagene). The polymerase chain reaction fragment was cloned into the BamHI-PstI sites of pBluescript II KS and sequenced to verify its integrity. The fragment was digested and cloned into the pMAL-c2X vector (New England BioLabs, Beverly, MA) using the same restriction sites. Plasmid pMAL-fep1 ϩ was transformed into E. coli TB1. Fresh transformants of TB1 cells containing the plasmid pMAL-c2X or pMAL-Fep1 were grown to A 600 of 0.5 in rich medium (1% Bacto-tryptone, 0.5% yeast extract, 1% NaCl, and 0.2% glucose) containing 100 g of ampicillin/ml. At this early growth phase, the cells were induced in the presence of FeCl 3 (0 and 1 mM) or BPS (1 mM) with 0.2 mM isopropyl--D-thiogalactopyranoside for 2 h at 25°C. Harvested cells were washed once in ice-cold water and resuspended in C buffer (20 mM Tris-HCl at pH 7.4, 200 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol, and 1 mM phenylmethylsulfonyl fluoride) with an equal volume of glass beads and protease inhibitors (8 g/ml aprotinin, 4 g/ml pepstatin, 2 g/ml leupeptin). The mixture was vortexed for 45 s at top speed at 4°C for 4 times. After centrifugation at 4°C, the whole cell extracts were purified by affinity chromatography using the amylose resin as described by the manufacturer.
Electrophoretic Mobility Shift Assays-To demonstrate specific DNA binding activity for Fep1, electrophoretic mobility shift assay binding reactions were carried out using 1ϫ binding buffer that contained 12.5 mM HEPES (pH 7.9), 75 mM NaCl, 4 mM MgCl 2 , 1 mM EDTA, 10% glycerol, 4 mM Tris-HCl (pH 7.9), 0.6 mM dithiothreitol, 1 g of poly(dI-dC) 2 , 5 M ZnSO 4 , and 5 M FeCl 3 unless otherwise stated. Typically, ϳ240 ng of affinity-purified MBP-Fep1 was incubated for 20 min at 25°C with ϳ1 ng of 32 P-end-labeled double-stranded oligomers harboring the two 5Ј-(A/T)GATAA-3Ј sites. When indicated, competitors to concentrations specified in Fig. 7A were added together with the probe. Once incubated, the reaction mixtures were loaded onto a 4% native polyacrylamide gel (30:0.8 acrylamide/bis ratio) that had been preelectrophoresed for 60 min in 0.25ϫ TB (44.5 mM Tris and 44.5 mM borate) at 4°C. The DNA-protein complex was separated from the free probe by electrophoresis at 4°C and 4 W constant power for 2 h. Subsequently, the gel was fixed, dried, and exposed to a Molecular Dynamics screen.
Identification of Cis-acting Elements Responsible for Iron Repression of the fio1+ Multicopper Oxidase Gene Expression-
Studies of iron uptake in S. pombe show that the frp1 ϩ gene encodes a ferric reductase, which reduces Fe 3ϩ to Fe 2ϩ at the cell surface (38,55). Once reduced, Fe 2ϩ is taken up by a permease-oxidase complex called Fip1/Fio1, which transports iron across the plasma membrane with high affinity (39). A hallmark of the genes encoding components of the high affinity iron uptake system including frp1 ϩ , fip1 ϩ , and fio1 ϩ is the fact that they are transcriptionally expressed according to iron need. They are activated during iron deprivation and repressed by iron repletion (38 -40). In S. pombe, the fip1 ϩ and fio1 ϩ genes share the same promoter, with the fip1 ϩ -fio1 ϩ genes divergently transcribed (39). Consistently, it is thought that both genes share the same iron regulatory elements in that intergenic promoter. A previous investigation has shown that a short promoter region of the frp1 ϩ gene (from position Ϫ332 to position Ϫ279 relative to the first nucleotide of the initiator codon) was involved in response to iron repletion (38). However, no regulatory element was defined in detail for the iron responsiveness of the frp1 ϩ gene. The program GeneStream (Baylor College, Houston, TX) was used to determine a common cisacting element between fip1 ϩ -fio1 ϩ and frp1 ϩ promoters. Comparison of the fip1 ϩ -fio1 ϩ intergenic promoter with frp1 ϩ revealed one short region exhibiting 68.5% identity in 54-bp overlap between the two promoters ( Fig. 1). Within this shared promoter segment, we noted the presence of two copies of a repeated sequence, 5Ј-(T/A)GATA(A/T)-3Ј, similar to the binding sites for the GATA family transcription factors (56). A third sequence, highly conserved but distinct from the two GATA sequences, was also observed at the 3Ј end of that promoter segment. To ascertain whether the two GATA-like elements play a role in fio1 ϩ regulation by iron, a series of nested 5Ј deletions of promoter sequences beginning at position Ϫ1155 were created in the plasmid pSP1fio1 ϩ -1155lacZ (Fig. 2). This fusion promoter was able to down-regulate (ϳ2-fold) and upregulate (ϳ8-fold) lacZ mRNA expression in the presence of iron or BPS, respectively (Fig. 2C). Removal of the fio1 ϩ upstream region between Ϫ1155 and Ϫ884 had little effect on the iron-dependent regulation of the fio1 ϩ -lacZ fusion, except for the magnitude of the response, which was more pronounced with ϳ3-fold repression in response to iron and ϳ13-fold activation under iron starvation conditions (Fig. 2C). Further deletion to position Ϫ793 gave high constitutive levels of fio1 ϩ -lacZ fusion gene expression with failure to repress gene expression in response to iron concentrations below 100 M. Under iron deprivation conditions, increased gene expression was detected. When the fio1 ϩ promoter was further deleted to position Ϫ761, the fio1 ϩ -lacZ gene was still remarkably highly expressed. Furthermore, this pSP1fio1 ϩ -761lacZ derivative was completely defective in iron-regulated gene expression. Deletion to position Ϫ680 abolished the highly expressed steady-state level of fio1 ϩ -lacZ mRNA, lowering its expression to a minimal threshold. Interestingly, this 81-bp DNA region between positions Ϫ761 and Ϫ680 contained the above-men- FIG. 1. Short conserved region between the fip1 ؉ -fio1 ؉ and frp1 ؉ promoters. Two putative GATA-like elements of the fip1 ϩ -fio1 ϩ promoter are boxed. 
These GATA-like sequences are found within a predicted iron-dependent regulatory region of the frp1 ϩ promoter (38), which has yet to be characterized. tioned third conserved region located at the 3Ј end of the two GATA-like elements. However, our data do not allow us to establish whether this third conserved DNA region was responsible by itself for high constitutive levels of fio1 ϩ -lacZ fusion gene expression. Because of the observation that the integrity of the region between positions Ϫ884 and Ϫ761 was essential for driving iron repression of the fio1 ϩ -lacZ fusion gene, we examined whether a fio1 ϩ promoter segment including this region could regulate a heterologous reporter as a function of iron availability (Fig. 3). A short 297-bp DNA segment derived from the fio1 ϩ promoter (positions Ϫ922 to Ϫ625) was inserted in its natural orientation upstream of the minimal promoter of the CYC1 gene fused to lacZ in pCF83 (49). This fusion was able to repress (ϳ3-fold) lacZ mRNA expression in the presence of iron. Conversely, under iron starvation conditions, lacZ mRNA expression was strongly derepressed (ϳ12.5-fold) as compared with the level of transcript detected from control (untreated) culture (Fig. 3). Within this 297-bp DNA segment, two copies of a repeated sequence, 5Ј-(T/A)GATAA-3Ј, which bears a striking similarity to the binding sites for the GATA transcription factors, was altered in either or both repeats. Cells carrying these fio1 ϩ -CYC1-lacZ fusion plasmids were assayed for iron-regulated expression of lacZ mRNA (Fig. 3). Although the overall magnitude of the response is clearly optimal with the presence of both elements, the presence of only one of the two elements is sufficient to confer regulation in a iron-dependent fashion. As compared with the wild type promoter segment, ϳ40% of the response was still observed when the first element was unaltered and the second one mutated (Fig. 3). When the first element was mutated and the second one was wild type, although the down-regulation of the lacZ gene was compromised, induction was still observed in response to iron limitation. Indeed, when both repeats were mutated, there was a complete lack of iron-responsive gene expression (Fig. 3). Taken together, these results show that a conserved element in the fip1 ϩ -fio1 ϩ intergenic promoter with the sequence 5Ј-(T/A)GATAA-3Ј, which is also found as direct repeats in the frp1 ϩ promoter, plays a critical role in ironregulated gene expression in fission yeast.
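The 5'-(A/T)GATAA-3' element described above can be located in any promoter sequence with a simple pattern scan on both strands. The sketch below is a generic illustration of such a scan; the example sequence is invented and is not the fip1+/fio1+ intergenic region, which would be substituted in practice.

```python
import re

# Invented example sequence; replace with a real promoter (e.g., the fip1+/fio1+ intergenic region).
promoter = "GGTTGATAACCATCGAGATAATTTATCCGC"

def find_gata_elements(seq: str):
    """Return (position, strand, match) for every 5'-(A/T)GATAA-3' element on either strand."""
    complement = str.maketrans("ACGT", "TGCA")
    hits = []
    for strand, s in (("+", seq), ("-", seq.translate(complement)[::-1])):
        for m in re.finditer(r"[AT]GATAA", s):
            # Minus-strand positions are reported relative to the reverse complement.
            hits.append((m.start(), strand, m.group()))
    return hits

for pos, strand, site in find_gata_elements(promoter):
    print(f"{site} found at position {pos} on the {strand} strand")
```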
The fio1 ϩ Gene Expression is Negatively Regulated by Iron through Fep1-Because of the presence and requirement of the cis-acting element 5Ј-(T/A)GATAA-3Ј for appropriate regulation of the fio1 ϩ gene expression, we sought to identify a transacting protein able to recognize such a DNA binding motif. Analysis of genomic DNA sequence from the S. pombe Genome Project revealed four complete ORFs that encode putative GATA-type transcription factors. Among them, SPAC23E2.01 2 (57) encodes a protein that exhibits an extended homology to four previously identified iron transcriptional repressors of siderophore biosynthesis in other fungi (16). This polypeptide of 564 amino acids, which we have termed Fep1 (Fe protein 1), has a predicted molecular mass of 60.6 kDa (Fig. 4A). The amino-terminal 220 amino acids of Fep1 bears strong identity (42%) to the amino-terminal region of Srea (residues 87-300) from A. nidulans (16), Srep (residues 77-287) (42%) from P. chrysogenum (15), Urbs1 (residues 304 -532) (39%) from U. maydis (13), and Sre (residues 86 -331) (37%) from N. crassa (14). Within this region of Fep1 reside two GATA-type zinc finger motifs (Fig. 4A). Analogous to the situation for urbs1 and sre, the steady-state levels of fep1 ϩ mRNA were found to be constitutive and unresponsive to cellular iron status (Fig. 4B). Interestingly, we observed two fep1 ϩ transcripts of ϳ3.1 and ϳ1.6 kb, with the lower one much weaker relative to the upper one (Fig. 4B). These two fep1 ϩ mRNA species may represent RNAs in which two distinct sites of poly(A) addition have been utilized. To investigate the role of Fep1 in fission yeast, we deleted the fep1 ϩ gene (fep1⌬). Inactivation of fep1 ϩ gave rise to a high level of fio1 ϩ gene expression without any change in response to iron repletion or iron starvation conditions (Fig. 5). In the fep1⌬ strain, the fio1 ϩ gene was highly derepressed by ϳ18-fold as compared with the basal level of fio1 ϩ transcript detected in the wild type strain (fep1 ϩ ) (Fig. 5B). Moreover, cells harboring an inactivated fep1 ϩ gene (fep1⌬) exhibited increased activity of the cell surface metalloreductase(s) (Fig. 6A), presumably as a consequence of lack of transcriptional repression of gene(s)-encoded reductase(s) (e.g. frp1 ϩ ). Using a plate assay for detection of cell surface reductase activity (45), we observed that fep1⌬ cells exhibited a strong and bright red coloration as a consequence of the tetrazolium salt reduction to formazan that occurs at the cell surface. Conversely, reductase activity is much lower in fep1 ϩ wild type cells or in mutant cells corrected by the restitution of a wild type copy of the fep1 ϩ gene. Consistently, fep1⌬ mutant cells displayed hypersensitivity to phleomycin, an antibiotic that cleaves nucleic acids in the presence of excess iron when cells are grown aerobically (58,59). As shown in Fig. 6B, mutant cells (fep1⌬) were unable to grow in the presence of the drug. In contrast, fep1⌬ cells in which the wild type fep1 ϩ gene was re-integrated regained phleomycin resistance, thereby indicating that the inability to grow was linked with the fep1⌬ mutation (Fig. 6B). Taken together, these data indicate that fep1⌬ cells accumulate iron in excess of the physiological requirement and suggest that Fep1 plays a critical role as a sensor to repress iron transporter gene expression as a function of iron availability.
Fep1 Interacts Directly with GATA Sequences in an Iron-dependent Manner-Based on the gene expression data we obtained, we predicted that the Fep1 factor directly interacts with 5Ј-(T/A)GATAA-3Ј sequences to mediate iron regulation. To test this hypothesis, we produced in E. coli cells a MBP-Fep1 fusion protein that comprises the amino-terminal region of Fep1 from residues 2 to 241. The polypeptide was purified to near homogeneity using two rounds of one-step affinity chromatography based on MBP affinity for maltose (60). 3 To examine whether the amino-terminal domain of Fep1 (residues 2-241) interacts with GATA-like elements, DNA binding experiments were carried out with the purified fusion protein. As 2 The fep1 ϩ gene corresponds to the GenBank TM accession number AJ457978. 3 6. Inactivation of the fep1 ؉ gene perturbs the homeostatic control of reductive iron acquisition. A, S. pombe strain bearing the disrupted fep1⌬ allele displays high levels of cell surface ferrireductase activity as indicated by the dark red formazan that precipitates around the colony within 10 -90 min. As a control, the wild type strain (fep1 ϩ ) or the disruption strain in which the wild type fep1 ϩ gene was re-integrated (fep1⌬; leu1::fep1 ϩ ) exhibits much lower reductase activity as shown by a lack of coloration to a level comparable with that of fep1⌬ cells. B, the indicated isogenic strains were spotted onto yeast extract plus supplements containing the iron-dependent free radicals generator phleomycin. The ability of a copy of the wild type fep1 ϩ gene to suppress phleomycin sensitivity when re-integrated was assayed.
shown in Fig. 7 by a representative electrophoretic mobility shift assay, the wild type 32 P-labeled 46-bp oligomer, which is identical to the fio1 ϩ upstream region between Ϫ808 and Ϫ762 relative to the first nucleotide of the initiator codon, forms a strong DNA-protein complex in the presence of Fep1. To investigate the specificity of this complex formation, we carried out competition experiments with unlabeled oligomers either wild type or containing multiple point mutations in either or both GATA motifs within the 46-bp fio1 ϩ DNA fragment (Fig. 7). Formation of the complex was inhibited by incubation with excess wild type oligomer but not by the double mutant M1,2 competitor (Fig. 7A), indicating that the complex is attributable to sequence-specific interactions. When the oligomer had the first GATA sequence (M1) mutated, the complex formation was only slightly diminished. In contrast, the oligomer harboring identical point mutations but within the second GATA sequence (M2) competed almost as well as the wild type oligomer, indicating that the first GATA motif is the strongest element for Fep1 binding, whereas the second motif is much weaker. Interestingly, there is a good correlation between the binding affinity of Fep1 toward the two GATA elements, as assayed in vitro by electrophoretic mobility shift assay analyses, and their relative strength for transcriptional iron repression in vivo (Figs. 2 and 3). To test whether the MBP-Fep1 fusion protein binds the 5Ј-(T/A)GATAA-3Ј elements in a iron-dependent manner, electrophoretic mobility shift assay experiments were performed using preparations derived from E. coli expressing the MBP-Fep1 fusion protein and from cells containing the expression plasmid alone as a control. As shown in Fig. 8A by a representative electrophoretic mobility shift gel, the wild type 46-bp double-stranded DNA fragment formed a complex with MBP-Fep1 when the fusion protein was purified from E. coli cells grown in the presence of 1 mM FeCl 3 before extract preparation and purification. Although this complex was also detected with purified MBP-Fep1 prepared from untreated E. coli cells, no such complex was observed when the purified fusion protein was prepared from cells grown in the presence of BPS (Fig. 8A). The proteins produced from cells grown under ironlimiting conditions appeared less stable as shown in Fig. 8B. Taken together, these results strongly suggest that Fep1 differentially binds the 5Ј-(T/A)GATAA-3Ј elements under conditions of iron adequacy to repress the expression of the fio1 ϩ and fip1 ϩ (and possibly frp1 ϩ ) iron transport genes.
A Role for S. pombe Tup11 and Tup12 for Appropriate Expression of the fio1 ϩ Gene-Because fio1 ϩ gene down-regulation by iron is controlled by Fep1, which acts as a transcriptional repressor, we sought to identify additional components that participate in the iron-mediated inactivation of the high affinity iron transport genes. The S. pombe Tup11 and Tup12 encode proteins required for repression of the fbp1 ϩ gene, which is down-regulated when cells are grown on glucose (43,61). Much like S. cerevisiae Tup1, S. pombe Tup11 binds specifically to histone H3 and H4, and it is thought that the protein forms a multimeric complex with Tup12 and a putative S. pombe Ssn6 ortholog to repress gene expression in fission yeast (43). Based on information from S. cerevisiae, it is highly likely that this complex does not bind DNA directly but is recruited by sequence-specific DNA binding transcription factors (62). Using isogenic strains harboring wild type tup11 ϩ and tup12 ϩ genes or insertionally inactivated tup11⌬, tup12⌬, or tup11⌬ tup12⌬ double mutant genes, we ascertained whether Tup11 and Tup12 play a role in fio1 ϩ gene regulation as a function of cellular iron status (Fig. 9). In the wild type strains, although low basal levels of expression were observed, fio1 ϩ mRNA were reduced (ϳ2-3-fold) in the presence of iron. Conversely, in the presence of BPS, fio1 ϩ mRNA levels were induced (ϳ3-7-fold) over basal levels. In the tup11⌬ and tup12⌬ single mutant strains, a similar profile of fio1 ϩ gene expression was found, except for the magnitude of the fio1 ϩ steady-state mRNA levels detected in the tup12⌬ mutant strain, which were more pronounced under each culture condition used (ϳ3-fold). In the tup11⌬ tup12⌬ double mutant strain, a high constitutive level of fio1 ϩ mRNA was observed, with a lack of significant down (ϳ1.1-fold) or up (ϳ1.4-fold) regulation of the fio1 ϩ gene expression. Because the strain with both tup11⌬ and tup12⌬ deletions fails to sense iron, this suggests that the Tup11 and Tup12 proteins may act downstream of Fep1 (Fig. 10). Taken together, these data reveal that transcriptional repression of the fio1 ϩ iron transport gene in fission yeast by the ironresponsive repressor Fep1 requires functional tup11 ϩ and tup12 ϩ genes. DISCUSSION In S. pombe, the frp1 ϩ , fip1 ϩ , and fio1 ϩ genes involved in reductive iron acquisition are transcriptionally regulated by iron availability (38 -40). In this report, we have defined a cis-acting element, 5Ј-(A/T)GATAA-3Ј, which is found in two copies in each of the frp1 ϩ , fip1 ϩ , and fio1 ϩ promoters, and is required for iron-dependent repression of fio1 ϩ . Our studies of fio1 ϩ gene regulation suggest that the distal 5Ј-(A/T)GATAA-3Ј element (from position Ϫ800 to position Ϫ795 relative to the first nucleotide of the initiator codon) is the strongest for mediating iron repression. This idea is supported by two experimental results, first, its relative strength for transcriptional iron down-regulation in vivo (Figs. 2 and 3) and, second, its ability to compete for Fep1 binding to the GATA element (Fig. 7). Similarly to the U. maydis sid1 promoter in which the two distal GATA sequences clearly show distinct strength as binding site of Urbs1 (63), Fep1 exhibits a higher affinity for interacting to the upstream 5Ј-(A/T)GATAA-3Ј element of the S. pombe fio1 ϩ promoter. 
Which nucleotides within and flanking this fio1 ϩ promoter element contribute to the magnitude of the regulatory response must await a comprehensive dissection of the cis-acting element. Furthermore, although the two 5Ј-(A/ T)GATAA-3Ј elements found in each of the fip1 ϩ , fio1 ϩ , and frp1 ϩ promoters are arranged as either inverted or direct repeats, it is currently unknown whether the geometry plays a role in the regulation of iron transporter gene expression via these elements.
A key role for S. pombe Fep1 in regulation of reductive iron transport was revealed by the following data. First, fep1⌬ cells exhibited a marked reductase activity at their surface. Second, in the fep1⌬ mutant strain, the fio1 ϩ -encoded cell surface multi-copper ferroxidase was constitutively highly expressed (ϳ18-fold) with respect to the basal level detected in wild type strain. Third, the disruption strain (fep1⌬) was hypersensitive to phleomycin, suggesting that intracellular iron levels were elevated in these cells since the toxicity to this drug is iron-dependent. Fourth, the DNA binding domain of Fep1, expressed as a fusion protein in E. coli as the sole protein with the ability to recognize GATA-like elements, exhibited specific binding to the 5Ј-(A/T)GATAA-3Ј sequence. Furthermore, this specific interaction was only observed when the purified fusion protein was prepared from cells grown in the presence of iron. Taken together, these data suggest that Fep1 plays a critical nuclear signaling function by directly repressing the expression of the reductive iron transport genes under conditions of high iron availability through the 5Ј-(A/T)GATAA-3Ј promoter elements.
Consistent with a role for Fep1 as an iron sensor that represses fio1 ϩ gene expression in the presence of iron, we have identified two genes, tup11 ϩ and tup12 ϩ , known to encode proteins that function as general transcriptional co-repressors (43) that are required for iron-regulated gene expression. Strains with both tup11⌬ and tup12⌬ deletions are insensitive to changes in iron levels. Indeed, in tup11⌬ tup12⌬ double mutant cells, fio1 ϩ expression was derepressed, reaching levels In the presence of iron, Fep1 binds DNA and forms a complex with Tup11 and Tup12, which act as co-repressors to inactivate gene expression. Conversely, in the absence of iron, the genes (e.g. fio1 ϩ ) encoding components of the iron transport machinery are synthesized since Fep1 fails to bind DNA. up to 8-fold of those observed in the wild type strain under comparable conditions. Interestingly, the steady-state levels of fio1 ϩ mRNA were also increased (to less extended) in the tup12⌬ single mutant strain (ϳ3-fold) but not in the tup11⌬ disruptant strain. Thus, tup12 ϩ may encode a nuclear component that limits fio1 ϩ expression even under conditions of iron scarcity. The Fep1 protein harbors several leucine-proline dipeptide repeats (from residues 286 -427) located in the middle part of the carboxyl-terminal half of the protein. One of them, 414 Leu-Pro-Pro-Ile-Leu-Pro 419 , is highly conserved in other repressors, including Srea from A. nidulans (16) and Srep from P. chrysogenum (15). Furthermore, in S. cerevisiae, this dipeptide repeat is also found in the Mig1 and Rox1 sequencespecific DNA binding repressors and has been proposed to play a role in protein-protein interactions with the general co-repressor complex that contains Tup1 and Ssn6 proteins (64). Perhaps this motif in Fep1 is required to recruit the co-repressor complex, which contains Tup11 and Tup12 in fission yeast (43). Recently, Knight et al. (65) demonstrate that iron-mediated regulation of genes involved in reductive iron uptake in Candida albicans requires a co-repressor protein orthologous to S. cerevisiae Tup1, supporting our observation that the S. pombe tup11 ϩ and tup12 ϩ genes plays a crucial role in downregulation of iron transport genes.
How does repression occur in response to iron? Interestingly, Fep1 harbors within its second zinc finger region an RXXE motif, which is composed of and flanked by amino acids such as arginine, aspartic acid, and glutamic acid (residues 184-187) able to coordinate iron. This motif lacks only one glutamic acid residue (second position) to be identical to the REXXE motifs identified in the S. cerevisiae Ftr1 and Fth1 proteins and also found and shown to coordinate the binding of iron within mammalian ferritin light chains (24,66). Iron may directly bind Fep1, making the factor competent to recognize the 5′-(A/T)GATAA-3′ element in an iron-dependent manner for inactivating target gene expression. Consistently, within the U. maydis Urbs1 protein, a single substitution of arginine 494, which corresponds to arginine 184 of Fep1, renders the metalloregulatory factor unable to respond to the presence of iron for repressing gene expression (67). Furthermore, this putative iron-regulatory RXXE motif is highly conserved in all five identified fungal GATA factors. Interestingly, the MBP-Fep1 fusion protein purified from E. coli with no added iron or from cells grown in the presence of BPS appears unstable (Fig. 8B). One would expect that iron binding stabilizes the protein against putative cellular proteolysis. Efforts are under way to investigate the functional association of iron with this putative iron regulatory RXXE motif of Fep1.
How does repression through Fep1 and the 5′-(A/T)GATAA-3′ elements identified in this study relate to the previous report of repression of the fio1+ promoter by the Cuf1 copper-sensing transcription factor (40)? At the core of this question, one point is critical to understand. Cuf1 is required for repression of iron uptake genes under low copper conditions (40). The rationale behind this proposed model is that in low copper conditions, the cells repress the copper-dependent iron uptake system, presumably to avoid a futile expenditure of energy in producing a system that lacks the necessary cofactor to function. In this study, the copper conditions were not limiting, therefore eliminating any interference with Fep1 function by Cuf1 at the fio1+ promoter.
Although fission yeast lacks the ability to produce siderophores, it is intriguing that the Fep1 iron-sensing transcription factor exhibits many similarities to iron sensors from fungal species that produce siderophores, including U. maydis Urbs1, A. nidulans Srea, P. chrysogenum Srep, and N. crassa Sre (16,67). Based on the results available, we propose a model predicting that the nutritional transcription factor Fep1 can sense and translate iron concentration changes to the reductive iron transport machinery because of its ability to interact directly, in an iron-dependent manner, with the 5′-(A/T)GATAA-3′ element and a general co-repressor complex that contains the Tup11 and Tup12 proteins (Fig. 10). Further studies will be needed to assess how Fep1 recruits and interacts with Tup11 and Tup12 to dictate the iron-dependent transcriptional response to maintain appropriate intracellular iron levels. | 2018-04-03T00:27:32.605Z | 2002-06-21T00:00:00.000 | {
"year": 2002,
"sha1": "483a5d3a9f2bf7349c9cf5dc28b0884e5cc08eac",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/277/25/22950.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "86be8d24cf76395211f31ae1b24f6c56d7fca996",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
53717790 | pes2o/s2orc | v3-fos-license | HSP70 Inhibitor Suppresses IGF-I-Stimulated Migration of Osteoblasts through p44/p42 MAP Kinase
Heat shock protein 70 (HSP70) is a ubiquitously expressed molecular chaperone in a variety of cells including osteoblasts. We previously showed that insulin-like growth factor-I (IGF-I) elicits migration of osteoblast-like MC3T3-E1 cells through the activation of phosphatidylinositol 3-kinase/Akt and p44/p42 mitogen-activated protein (MAP) kinase. In the present study, we investigated the effects of HSP70 inhibitors on the IGF-I-elicited migration of these cells and the mechanism involved. The IGF-I-stimulated osteoblast migration, evaluated by a wound-healing assay and by a transwell cell migration assay, was significantly reduced by VER-155008 and YM-08, which are both HSP70 inhibitors. VER-155008 markedly suppressed the IGF-I-induced phosphorylation of p44/p42 MAP kinase without affecting that of Akt. In conclusion, our results strongly suggest that the HSP70 inhibitor reduces the IGF-I-elicited migration of osteoblasts via p44/p42 MAP kinase.
Introduction
It is firmly established that bone metabolism is regulated cooperatively by bone-forming osteoblasts and bone-resorbing osteoclasts and that bone tissue is consistently regenerated through bone remodeling [1,2]. The process of bone remodeling is initiated with osteoclastic bone resorption and osteoblasts subsequently migrate to the resorbed sites, which leads to bone formation. Adequate bone mass is maintained by the orchestrated cooperation of osteoclasts and osteoblasts [2]. Thus, the impairment of bone remodeling causes metabolic bone diseases such as osteoporosis. Evidence is accumulating that osteoblast migration is essential not only for physiological bone metabolism but also for pathological bone processes including bone-fracture healing [1,[3][4][5]. However, the exact mechanism behind osteoblast migration has not yet been clarified.
Insulin-like growth factor-I (IGF-I), which is embedded abundantly in the bone matrix, plays a crucial role in the regulation of bone metabolism [6,7]. Regarding the effects of IGF-I on osteoblasts, we have previously shown that IGF-I upregulates the activity of alkaline phosphatase, which is a biomarker of bone formation, via p44/p42 mitogen-activated protein (MAP) kinase and phosphatidylinositol 3-kinase/Akt in osteoblast-like MC3T3-E1 cells [8,9]. As for the effect of IGF-I on osteoblast migration, IGF-I activates Akt and stimulates migration of osteoblast-like MC3T3-E1 cells as a chemotactic factor [10]. In our study [11], we have demonstrated that p44/p42 MAP kinase and phosphatidylinositol 3-kinase/Akt act as positive regulators in the IGF-I-induced migration of osteoblast-like MC3T3-E1 cells. However, the molecular mechanism by which IGF-I induces osteoblast migration remains unknown.
Heat shock proteins (HSPs) are induced in the cells exposed to various environmental stresses such as heat, hypoxia, and oxidation [12]. It is firmly established that HSPs play an important role as molecular chaperones in proteostasis under stress conditions [12]. Among HSPs, it is known that HSP70 (HSPA) is constitutively expressed in the unstressed cells and that HSP70 is involved in various physiological cell functions such as the regulation of steroid hormone receptors [13]. On the other hand, accumulating evidence indicates that HSP70 plays a pivotal role in pathological conditions including cancer, infection, and autoimmune diseases [14]. It has been reported that the overexpression of the HSP70 protein in tumor tissue is related to worse outcomes [15]. Therefore, it is currently recognized that the suppression of the HSP70 function is one possible therapeutic target against these diseases. With regard to the effects of HSP70 on bone cells, extracellular HSP70 reportedly stimulates the alkaline phosphatase activity and induces mineralization of human mesenchymal stem cells [16]. However, the details of HSP70 in osteoblasts remain to be clarified.
In the present study, we investigated the effects of HSP70 inhibitors on the IGF-I-elicited migration of osteoblast-like MC3T3-E1 cells and the underlying mechanism. In this paper, we show that the HSP70 inhibitor suppresses the IGF-I-elicited migration of osteoblasts through attenuation of the p44/p42 MAP kinase pathway.
Materials
IGF-I was obtained from R&D Systems, Inc. (Minneapolis, MN, USA). VER-155008 and YM-08 were obtained from Sigma-Aldrich Co. (St. Louis, MO, USA). Phospho-specific p44/p42 MAP kinase, p44/p42 MAP kinase, phospho-specific Akt (Thr308), and Akt antibodies were used as the primary antibodies (Cell Signaling, Beverly, MA, USA). An ECL Western blotting detection kit was used (GE Healthcare UK Ltd., Buckinghamshire, UK). Other materials were purchased from commercial sources. VER-155008 and YM-08 were dissolved in dimethyl sulfoxide (DMSO). The maximum concentration of DMSO was 0.1%, which did not affect the assay for cell migration or the detection of protein levels by Western blotting.
Cell Culture
Cloned osteoblast-like MC3T3-E1 cells from an immortalized clonal cell line established from neonatal mouse calvaria [17], which were generously provided by Dr. M. Kumegawa, were maintained as previously reported [18]. MC3T3-E1 cells were cultured in α-minimum essential medium (α-MEM) with 10% fetal bovine serum (FBS) at 37°C in a humidified atmosphere of 5% CO₂/95% air. The cells were seeded into 90-mm diameter dishes (2 × 10⁵ cells/dish) in α-MEM containing 10% FBS for five days. The medium was then exchanged for α-MEM containing 0.3% FBS and the cells were subsequently used for Western blot analysis after 48 h. For the cell migration assay, MC3T3-E1 cells were cultured in α-MEM with 10% FBS for three days, sub-cultured in α-MEM with 0.3% FBS for 6 h, and then used for the migration experiments.
Cell Migration Assay
For a wound-healing assay, cultured MC3T3-E1 cells were seeded at 10 × 10⁴ cells/well into an Ibidi Culture-Insert 2 Well (Ibidi, Martinsried, Germany) with a 500-µm margin from the side of the well and allowed to grow for 24 h. After the insert was removed, the cells were then stimulated by 70 nM of IGF-I for 8 h. The cells were visualized by the EOS Kiss X4 digital camera (Canon, Tokyo, Japan) connected to a CK40 culture microscope (Olympus Optical Co. Ltd., Tokyo, Japan) before the stimulation of IGF-I and after 8 h. The area of migrated cells was measured by ImageJ software (version 1.48, NIH, Bethesda, MD, USA).
A transwell cell migration assay was performed by using a Boyden chamber (polycarbonate membrane with 8-µm pores, Transwell®, Corning Costar Corp, Cambridge, MA, USA), as previously described [19]. Cultured MC3T3-E1 cells were trypsinized and seeded (10 × 10⁴ cells/well) onto the upper chamber in α-MEM containing 0.3% FBS. IGF-I (10 nM) was added to the lower chamber in α-MEM with 0.3% FBS and incubated for 16 h at 37°C. We then mechanically removed the cells on the upper surface of the membrane. Cells adherent to the underside of the transwell membrane were fixed with 4% paraformaldehyde and stained with 4′,6-diamidino-2-phenylindole solution. The migrated cells were photographed and counted by fluorescence microscopy at a magnification of 20×, by counting the stained cells in three randomly chosen high-power fields. When indicated, the cells were pretreated with VER-155008 or YM-08 for 60 min.
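As a rough illustration of how these assay readouts translate into the percentages reported in the Results, the following Python sketch computes wound closure and the percent reduction of the IGF-I effect from hypothetical ImageJ area measurements and transwell cell counts; the numbers and function names are illustrative assumptions, not data from this study.

```python
# Minimal sketch of the migration quantification described above.
# Values are hypothetical placeholders for areas exported from ImageJ
# (wound-healing assay) and cell counts from high-power fields
# (transwell assay); only the arithmetic is illustrated.

def percent_wound_closure(initial_gap_area, remaining_gap_area):
    """Fraction of the original cell-free gap filled by migrating cells."""
    return 100.0 * (initial_gap_area - remaining_gap_area) / initial_gap_area

def percent_inhibition(effect_control, effect_igf, effect_igf_inhibitor):
    """Reduction of the IGF-I-induced increment caused by an inhibitor,
    expressed as a percentage of the IGF-I effect above control."""
    igf_increment = effect_igf - effect_control
    residual_increment = effect_igf_inhibitor - effect_control
    return 100.0 * (igf_increment - residual_increment) / igf_increment

# Wound-healing assay: gap areas (arbitrary ImageJ units) at 0 h and 8 h.
closure_igf = percent_wound_closure(1.00, 0.40)       # IGF-I alone
closure_igf_ver = percent_wound_closure(1.00, 0.62)   # IGF-I + VER-155008

# Transwell assay: mean migrated cells per high-power field (hypothetical).
counts = {"vehicle": 20.0, "IGF-I": 100.0, "IGF-I + VER-155008": 40.0}
print(percent_inhibition(counts["vehicle"], counts["IGF-I"],
                         counts["IGF-I + VER-155008"]))  # ~75% reduction
```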
Western Blot Analysis
Cultured osteoblast-like MC3T3-E1 cells were pretreated with various doses of VER-155008 for 60 min and then stimulated by 10 nM of IGF-I or vehicle in 1 mL of α-MEM with 0.3% FBS for the indicated periods. The cells were then lysed, homogenized, and sonicated in a lysis buffer containing 62.5 mM Tris/HCl, pH 6.8, 2% sodium dodecyl sulfate (SDS), 50 mM dithiothreitol, and 10% glycerol. SDS-polyacrylamide gel electrophoresis (PAGE) was performed by using the method of Laemmli [20] in 10% polyacrylamide gels. The protein was fractionated and transferred onto an Immun-Blot polyvinylidene difluoride (PVDF) membrane (Bio-Rad, Hercules, CA, USA). The membranes were blocked with 5% fat-free dry milk in Tris-buffered saline-Tween (TBS-T, 20 mM Tris/HCl, pH 7.6, 137 mM NaCl, 0.1% Tween 20) for 1 h before incubation with the indicated primary antibodies. Western blot analysis was performed, as described previously [21], using phospho-specific p44/p42 MAP kinase, p44/p42 MAP kinase, phospho-specific Akt or Akt antibodies as primary antibodies, with peroxidase-labeled antibodies raised in goat against rabbit IgG (KPL, Inc., Gaithersburg, MD, USA) used as secondary antibodies. The primary and secondary antibodies were diluted to optimal concentrations with 5% fat-free dry milk in TBS-T. The peroxidase activity on the PVDF membrane was visualized on X-ray films by utilizing an ECL Western blotting detection system.
Densitometric Analysis of Western Blotting
A densitometric analysis of Western blotting was performed by using a scanner and ImageJ software (version 1.48, NIH, Bethesda, MD, USA). The background-subtracted signal intensity of each phosphorylation signal was normalized to the respective total protein signal and plotted as the fold increase in comparison to control cells without stimulation.
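The normalization described above can be expressed as a simple calculation; the sketch below assumes hypothetical band intensities and is only meant to illustrate the fold-increase arithmetic, not the actual densitometry values from this study.

```python
# Sketch of the densitometric normalization described above, assuming
# hypothetical background-subtracted band intensities; names are illustrative.

def fold_increase(phospho, total, phospho_control, total_control):
    """Phospho signal normalized to total protein, expressed relative to
    the unstimulated control."""
    normalized = phospho / total
    normalized_control = phospho_control / total_control
    return normalized / normalized_control

# Example: p-p44/p42 vs total p44/p42 band intensities (arbitrary units).
control = {"phospho": 1200.0, "total": 9000.0}   # unstimulated cells
igf = {"phospho": 6400.0, "total": 9200.0}       # IGF-I-stimulated cells

print(fold_increase(igf["phospho"], igf["total"],
                    control["phospho"], control["total"]))  # ~5.2-fold
```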
Statistical Analysis
One-way ANOVA followed by Bonferroni's post hoc comparison test was used for all statistical analyses, and p < 0.05 was considered to be statistically significant. Analyses were carried out on triplicate determinations from three independent cell cultures. All data are presented as the mean ± standard error of the mean (SEM).
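A minimal sketch of this statistical workflow is shown below, using SciPy rather than the software actually employed and assuming hypothetical triplicate values; the Bonferroni step simply multiplies each pairwise p-value by the number of comparisons.

```python
# Sketch of the statistical workflow described above (one-way ANOVA with
# Bonferroni-corrected pairwise comparisons), with hypothetical triplicates
# from three independent cultures.
from itertools import combinations
from scipy import stats

groups = {
    "control":     [1.0, 1.1, 0.9],
    "IGF-I":       [3.2, 2.9, 3.4],
    "IGF-I + VER": [1.8, 1.6, 2.0],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni: multiply each pairwise p-value by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```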
Effect of VER-155008 on the IGF-I-Stimulated Migration of MC3T3-E1 Cells
In our previous study [11], we have shown that IGF-I elicits migration of osteoblast-like MC3T3-E1 cells evaluated by a wound healing assay and a transwell assay. We first examined the effect of VER-155008, an inhibitor of HSP70 [22], on the IGF-I-stimulated migration of MC3T3-E1 cells by a wound-healing assay. The increase of the filled area induced by IGF-I was significantly suppressed by VER-155008 (10 µM), which caused approximately a 35% decrease in the IGF-I-effect ( Figure 1).
In addition, we examined the effect of VER-155008 on the IGF-I-stimulated migration of osteoblast-like MC3T3-E1 cells using a Boyden chamber. VER-155008 markedly reduced the IGF-I-stimulated MC3T3-E1 cell migration (Figure 2). VER-155008 (10 µM) led to an approximately 75% reduction in the IGF-I-effect.
Effect of YM-08 on the IGF-I-Stimulated Migration of MC3T3-E1 Cells
We next examined the effect of YM-08, which is another inhibitor of HSP70 [23], on the IGF-I-stimulated migration of MC3T3-E1 cells by a wound-healing assay. YM-08 (10 µM) remarkably reduced the increase of the filled area induced by IGF-I (Figure 3), leading to an approximately 40% reduction in the IGF-I-effect. Additionally, we examined the effect of YM-08 on the IGF-I-stimulated migration of MC3T3-E1 cells using a Boyden chamber. The IGF-I-stimulated migration of cells was significantly decreased by YM-08 (Figure 4). The inhibitory effect of YM-08 on the migration was dose-dependent in the range between 0.1 and 30 µM, with YM-08 (30 µM) causing an approximately 50% reduction in the IGF-I-effect.
Effects of VER-155008 on the IGF-I-Induced Phosphorylation of p44/p42 MAP Kinase or Akt in MC3T3-E1 Cells
We have previously demonstrated that IGF-I elicits migration of osteoblast-like MC3T3-E1 cells through the activation of p44/p42 MAP kinase and phosphatidylinositol 3-kinase/Akt [11]. In order to investigate the mechanism underlying the suppression by the HSP70 inhibitor of the IGF-I-stimulated cell migration, we further examined the effects of VER-155008 on the IGF-I-induced phosphorylation of p44/p42 MAP kinase or Akt. The IGF-I-induced phosphorylation of p44/p42 MAP kinase was significantly reduced by VER-155008 (Figure 5). However, VER-155008 failed to affect the IGF-I-induced phosphorylation of Akt (Figure 6).
Discussion
In the present study, we investigated the effects of HSP70 inhibitors on the IGF-I-elicited migration of osteoblast-like MC3T3-E1 cells. We first examined whether VER-155008, which is an HSP70 inhibitor [22], affects the IGF-I-elicited migration of osteoblast-like MC3T3-E1 cells evaluated by a wound-healing assay. VER-155008 significantly suppressed the IGF-I-elicited migration of MC3T3-E1 cells. In addition, we examined the effect of VER-155008 on the migration induced by IGF-I using a Boyden chamber and demonstrated that the IGF-I-induced migration was reduced by VER-155008. We next examined the effect of YM-08, which is another inhibitor of HSP70 [23], on the IGF-I-elicited migration of MC3T3-E1 cells. YM-08 significantly repressed the migration induced by IGF-I and evaluated by both a wound-healing assay and a transwell cell migration assay. Considering our findings, it is probable that the HSP70 inhibitor suppresses the IGF-I-induced migration of osteoblast-like MC3T3-E1 cells, which suggests that HSP70 acts as a positive regulator in the cell migration.
With regard to the intracellular signaling of IGF-I in osteoblasts, we have previously shown that p44/p42 MAP kinase and phosphatidylinositol 3-kinase/Akt act as positive regulators in the IGF-I-stimulated migration of osteoblast-like MC3T3-E1 cells [11]. We then investigated the exact mechanism behind the suppression by the HSP70 inhibitor of the IGF-I-stimulated migration. We demonstrated that the phosphorylation of Akt induced by IGF-I was not affected by VER-155008 in these cells. Thus, it seems unlikely that phosphatidylinositol 3-kinase/Akt is involved in the suppression by the HSP70 inhibitor of IGF-I-induced MC3T3-E1 cell migration. On the contrary, VER-155008 significantly reduced the phosphorylation of p44/p42 MAP kinase induced by IGF-I. Taken together, it is most likely that the HSP70 inhibitor reduces IGF-I-induced migration of osteoblast-like MC3T3-E1 cells through the inhibition of the p44/p42 MAP kinase. Regarding the mechanism of the molecular action of HSP70 on the p44/p42 MAP kinase pathway, it has been reported that mortalin, which is a member of the HSP70 family, could regulate the activity of MEK1/2, which is an upstream kinase of the p44/p42 MAP kinase, via protein phosphatase 1α in human melanoma cells [24]. It is possible that HSP70 strengthens IGF-I-induced p44/p42 MAP kinase activation through stabilization of the phosphorylated status of MEK1/2 in osteoblasts, which leads to the upregulation of migration.
Osteoblasts migrate to the sites resorbed by osteoclasts and the migrated osteoblasts then start bone formation at the resorbed sites [3][4][5]. Adequate migration of osteoblasts is indispensable for the regulation of physiological bone remodeling and the appropriate osteoblast migration is considered to be essential for maintaining both the quantity and quality of bone mass. Additionally, the osteoblast migration is crucial in pathological bone metabolic diseases including osteoporosis and fracture repair [3][4][5]. Since HSP70 plays an important role in the survival of cancer cells, HSP70 inhibitors have been developed as anti-cancer agents [25,26]. Our present findings strongly suggest that the HSP70 inhibitor could reduce the IGF-I-elicited migration of osteoblasts. It is established that IGF-I embedded in the bone matrix plays a crucial role in the regulation of bone metabolism [6,7]. Thus, when HSP70 inhibitors are used as anti-cancer agents, they may modulate bone metabolism and exert a detrimental effect on bone. On the other hand, osteosarcoma is known to be a highly metastatic bone tumor [27]. The metastatic sequence involves migration from the primary tumor site to the surrounding extracellular matrix, intravasation, and extravasation. It has recently been reported that overexpression of ribosomal protein L3, which is a target of 5-FU, reduces migration and reciprocally promotes apoptosis of lung and colon cancer cells under the treatment of 5-FU [28,29]. It is likely that suppression of migration provides a benefit for anti-cancer agents such as 5-FU, which has been used for osteosarcoma [30]. It is recognized that HSP70 is potently expressed in human osteosarcoma [31]. It has been reported that VER-155008 reduces cell viability and increases apoptosis of canine osteosarcoma cells [32]. Taking into account our present findings, it is possible that HSP70 inhibitors are useful candidates for drug combination in the chemotherapy of osteosarcoma and may result in the inhibition of tumor metastasis and invasion.
Regarding the expression of HSP70 in osteoblasts, we have previously demonstrated that HSP70 is highly expressed in osteoblast-like MC3T3-E1 cells without stimulation [33]. It has been reported that IGF-I reduces HSP70 expression in macrophages but not in fibroblasts [34]. The effect of IGF-I on the expression of HSP70 in osteoblasts needs to be clarified. On the other hand, we found that HSP70 inhibitors alone did not affect the baseline of osteoblast migration. Thus, it is likely that HSP70 inhibitors hardly affect the osteoblast migration under unstimulated conditions. However, clarification is needed regarding whether HSP70 plays a role in osteoblast migration in general. Further investigations including overexpression of HSP70 in osteoblasts are necessary to clarify the exact roles of HSP70 in bone metabolism.
Taken together, our results strongly suggest that the HSP70 inhibitor reduces the IGF-I-elicited migration of osteoblasts through the suppression of the p44/p42 MAP kinase pathway. | 2018-12-02T16:19:45.442Z | 2018-11-21T00:00:00.000 | {
"year": 2018,
"sha1": "259703a184e1ce5b821d4f853b6d80ee98c9cb44",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/6/4/109/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "259703a184e1ce5b821d4f853b6d80ee98c9cb44",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
225081793 | pes2o/s2orc | v3-fos-license | Hyaluronate supports hESC‐cardiomyocyte cell therapy for cardiac regeneration after acute myocardial infarction
Abstract Introduction Enormous progress has been made in cardiac regeneration using human embryonic stem cell‐derived cardiomyocyte (hESC‐CM) grafts in pre‐clinical trials. However, the rate of cell survival has remained very low due to anoikis after transplantation into the heart as single cells. Numerous solutions have been proposed to improve cell survival, and one of these strategies is to co‐transplant biocompatible materials or hydrogels with the hESC‐CMs. Methods In our study, we screened various combinations of biomaterials that could promote anoikis resistance and improve hESC‐CM survival upon co‐transplantation and promote cardiac functional recovery. We injected different combinations of Matrigel, alginate and hyaluronate with hESC‐CM suspensions into the myocardium of rat models with myocardial infarction (MI). Results Our results showed that the group treated with a combination of hyaluronate and hESC‐CMs had the lowest arrhythmia rates when stimulated with programmed electrical stimulation. While all three combinations of hydrogel‐hESC‐CM treatments improved rat cardiac function compared with the saline control group, the combination with hyaluronate most significantly reduced pathological changes from left ventricular remodelling and improved both left ventricular function and left ventricular ejection fraction by 28 days post‐infarction. Conclusion Hence, we concluded that hyaluronate‐hESC‐CM is a superior combination therapy for promoting cardiac regeneration after myocardial infarction.
Heart transplantation, however, is constrained by the lack of organs for donation and other legal-ethical issues, making it untenable as a solution for the rapidly increasing numbers of CVD patients. 5 On the other hand, the aforementioned classical treatments are unable to restore damaged cardiovascular tissue and can only delay the progression of CVDs. 3 Partial regeneration of damaged hearts is an alternative strategy that could avoid these pitfalls and revolutionize CVD therapy. In recent years, tremendous progress has already been made in both pre-clinical and clinical research on the therapeutic potential of stem cells with respect to cardiac regeneration. [5][6][7][8][9][10][11][12] However, there are still deep challenges for cell transplantation in cardiac regeneration therapies. One key obstacle is that only a small fraction of the engrafted cells is retained at the injection site. For example, only <7% of bone marrow mesenchymal stem cells were detected upon injection into the coronary artery of the patients, and only 2% of the stem cells remained 3-4 days after engraftment. 13 Typically, there are two ways stem cells can be delivered to the myocardium: intracoronary (IC) infusion and intramyocardial (IM) delivery. 14 However, the cellular survival rates have remained very low regardless of the delivery route.
Only 30%-40% of the stem cells could be detected at the early stage. Subsequently, the percentage of surviving cells steadily decline, reaching 1% to 15% by 4-12 weeks. 13,15,16 Such low survival rates could be caused by many factors, including cell death, ischaemia, immune rejection and 'mechanical' loss during heart beating. [17][18][19] Teng et al classified the trajectory in post-implantation cell numbers into three phases, namely phase I, a rapid and massive loss of cells immediately after cell transplantation due to both 'mechanical' loss during heart beating and material loss through the injection orifice; phase II, a period of gradual cell death; and finally phase III, an increase in cell numbers due to cell proliferation. 20 Many studies have aimed to reduce cell loss in the first two phases, 21 such as by transplanting cells at the point of cardiac arrest, enclosing the injection orifice with medical biological glue, co-transplanting cells with biomaterials, treating cells with anti-apoptosis factors or co-transplanting cells with factors that promote cell proliferation and inhibit apoptosis. 6,19,[22][23][24] These innovative transplantation strategies have greatly increased the rate of cell retention and survival after injection into the myocardium and improved the recovery of cardiac function.
Recently, some reports have found that hydrogels can activate cell signalling to prevent apoptosis and anoikis by providing a scaffold for cell adhesion. 25 Matrigel is a mixture of biologically derived extracellular matrix (ECM) proteins which improves cell retention and survival in the infarction area after co-transplantation with human embryonic stem cell-derived cardiomyocytes (hESC-CMs). 6 Alginate is a natural polysaccharide extracted from algae which forms a matrix after cross-linking and has been reported to prevent heart deterioration when injected into the infarction area of rat MI models. 26 Hyaluronate is another natural linear polysaccharide with disaccharide repeats of D-glucuronic acid and N-acetyl-D-glucosamine, that forms the main component of mammalian ECM.
Some studies have shown that hyaluronate could inhibit apoptosis, improve cell survival in the infarction area, promote vascular regeneration and promote recovery of cardiac function when co-transplanted with cells. [27][28][29] However, other reports have suggested that inflammation was aggravated after injection of hydrogel into the myocardium. 30 As a result, the best biomaterial for co-transplantation with hESC-CMs for promoting cardiac regeneration had remained unclear.
To screen for the best biomaterial, we cross-linked three different biomaterials (Matrigel, Alginate and Hyaluronate) to form hydrogels 7,28,31 and then co-transplanted the hydrogels with hESC-CMs into the myocardium of rat MI models. Clinical-grade functional hESC-CMs were derived using the VN differentiation system. 12 Subsequently, cardiac function was evaluated by ultrasound echocardiography, as well as electrocardiography in MI rats with programmed electrical stimulation 4 weeks after transplantation. Our results showed that hyaluronate-hESC-CMs provided the best functional outcomes in cardiac regeneration after acute MI in rat models.
| Ethical Statement
All procedures of this study were completed under the guidelines of the Institute of Zoology, Chinese Academy of Sciences and were approved by the institutional animal care and use committee of the Institute of Zoology, Chinese Academy of Sciences.
| Cell culture and differentiation
Our clinical-grade hESC line (Q-CTS-hESC-2) was maintained in commercially available E8 media on Vitronectin-NC-coated plates (1 μg/cm²). 32 Cells were passaged every 5 or 6 days using dispase (1 mg/mL). The cultures were maintained with 3 mL medium per 9.6 cm² of surface area. All cultures were maintained at 37°C, 5% CO₂ and atmospheric O₂ in a humidified incubator (Thermo). Cardiac differentiation was performed according to methods previously reported in our laboratory. 12 Briefly, hESCs were digested into single cells using Accutase (Life Technologies) and reseeded at 10⁵ cells/cm² density on Vitronectin-NC-coated plates. The cells were induced to differentiate with VN differentiation medium when they reached 90% confluence after 2-3 days of culture in E8 medium. In the first 24 hours, the VN medium was supplemented with 4 μM CHIR99021 (Stemgent), which induced hESC differentiation into mesoderm. Two days after, the medium was replaced with VN medium supplemented with 5 μM IWR1 (Sigma-Aldrich). The medium was changed on day 5, and the IWR1 treatment was maintained for another 3 days. Then, the medium was refreshed every other day with VN medium supplemented with 4 μg/mL insulin. Contractile activity was observed from day 8.
| Preparation of cross-linked biomaterial hydrogels
In our study, we selected sodium alginate and sodium hyaluronate as cross-linked hydrogels for delivering hESC-CMs to the myocardium of rat MI models. The cross-linking was performed as previously reported. 26,31 Briefly, we prepared 2% sodium alginate solution, 0.6% CaCl₂ and 2% sodium hyaluronate solution, respectively, and stored them at 4°C. Alginate solution was cross-linked with CaCl₂ in a 1:1 volume ratio before co-injecting with hESC-CMs.
| Echocardiography
Echocardiography data were collected on days −10, −2 and 28 of cell transplantation. Animals were lightly anaesthetized with 5% chloral hydrate.
| Animals and surgical procedures
130 male Sprague Dawley rats at the age of 8 weeks were selected in our study. Surgery was performed under general anaesthesia with 5% chloral hydrate. Before surgery, rats were preliminarily assessed using the electrocardiogram from limb leads. The trachea was then exposed for the insertion of trachea cannula if the rat had a normal electrocardiogram. Rats were supported by mechanical ventilation at the set breathing rate of 80 per minute with 1:1 of inspiration and expiration. After opening the chest, the left coronary artery could be seen with the naked eye and the anterior descending branch was ligated with 7.0 suture to induce and model acute MI. 33 At the end of surgery, the thoracic fluid was absorbed with sterile gauze before closing the sternum and sterilizing the wound site with 75% alcohol.
From day −2 to the endpoint of day 28 of cell transplantation, animals were treated with cyclosporine A to suppress the immune response.
Rats were injected with a 15 mg/kg (i.p.) dose per day in the first week, reduced to 10 mg/kg per day via oral administration thereafter.
| Cell transplantation
Q-CTS-hESC-2-CMs were purified at day 12/13 using the method of discontinuous Percoll gradient, as previously reported. 34 Eight days after surgery to induce acute MI, the rat MI models underwent a repeat thoracotomy, and 2 × 10⁶ cells were injected via five separate injections into the infarcted border and central zone of the free left ventricular myocardium using an insulin syringe with a 29-gauge needle. All groups except for the saline control group were supplemented with pro-survival cocktails, and the cell therapy groups were mixed with Matrigel, sodium alginate gels and hyaluronate gels, respectively. The surgeon was blinded to the details of each group.
| Programmed electrical stimulation
Four weeks after transplantation, the surviving rats were stimulated with programmed electrical stimulation (PES) to detect the stability of cardiac electrophysiology, using methods as previously reported. 5 In brief, each animal was anaesthetized with 5% chloral hydrate, mechanically ventilated and outfitted for standard limb leads ECG recordings (ADInstruments). Bipolar electrode needles contacted with the cardiac apex and left free wall of left ventricles after thoracotomy. Using standard clinical PES protocols, the pulse output was set at twice the capture threshold, containing a train of eight beats followed by a single extra stimulus for determination of the ventricular effective refractory period (VERP). After that, the heart was challenged three times with a train of eight beats followed by a single extra stimulus (with the S1-S2 interval set at VERP + 10 ms). If necessary, this procedure was repeated to apply three challenges with double or triple extra stimuli.
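For clarity, the timing of one PES challenge can be sketched as below; the S1 drive-train cycle length is not stated in the text, so the value used here is purely an illustrative assumption.

```python
# Sketch of the programmed electrical stimulation (PES) timing described
# above: a drive train of eight S1 beats followed by one, two or three
# extra stimuli delivered at an S1-S2 coupling interval of VERP + 10 ms.
# The S1 cycle length is assumed for illustration only.

def pes_train(s1_cycle_ms=120.0, verp_ms=60.0, n_extra=1):
    """Return stimulus times (ms) for one PES challenge."""
    s1_times = [i * s1_cycle_ms for i in range(8)]   # 8-beat drive train
    coupling = verp_ms + 10.0                        # S1-S2 interval
    extra_times, t = [], s1_times[-1]
    for _ in range(n_extra):                         # S2 (, S3, S4)
        t += coupling
        extra_times.append(t)
    return s1_times + extra_times

print(pes_train(n_extra=1))   # single extra stimulus
print(pes_train(n_extra=3))   # triple extra stimuli
```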
After PES, animals were sacrificed and injected with 10% potassium chloride into the ventricles, perfused with saline, followed by tissue fixation using formaldehyde. The infusion needle was inserted at the site of left ventricular apex and the auricula dextra was cut.
| Histology and immunocytochemistry
At the day 28 endpoint, all hearts were perfused, the right ventricles and atria were removed and sectioned into five rings from base to apex. The marked ring that had been transplanted with Q-CTS-hESC-2-CMs was selected, fixed and paraffin-embedded for histology. The ring was sectioned into 8 μm slices and then prepared for immunohistochemistry. We used primary antibodies directed against cTNT (Abcam) and ZNF397 (Rabbit polyclonal, Sigma-Aldrich) to identify engrafted Q-CTS-hESC-2-CMs. Secondary antibodies were diluted with 1% BSA and incubated for 1 hour, nuclei were stained with Hoechst 33342 (10 μg/mL) for 10 minutes and washed, and the slices were covered with coverslips and imaged with an LSM510Meta Confocal Microscope (Zeiss).
| Statistics
In our study, one-way ANOVA in Prism 5.0 was used to analyse the differences between groups, with the significance level set at P = .05. All investigators were blinded to the types of data. Values are shown as mean ± SEM, unless stated otherwise.
| Acute MI rat left ventricular ejection fraction after co-transplantation with hydrogels
Four weeks after cell transplantation, the surviving rats' cardiac functions were measured with ultrasound echocardiography and myocardial electrophysiological stability was measured using programmed electrical stimulation. In a previous study, we reported that co-transplantation of cells with Matrigel significantly improved cardiac function in rats, compared to saline and Matrigel alone. 12 In this study, we obtained similar results. These results demonstrate that co-transplantation of hESC-CMs with biocompatible hydrogels into the myocardium can prevent left ventricular function from further deterioration after acute MI in vivo, and hyaluronate had the best effect in improving cardiac function among the three biomaterials.
| Left ventricular remodelling in acute MI rats after co-transplantation with hydrogels
Left ventricular remodelling tends to occur after myocardial infarction. Maladaptive ventricular cardiomyocyte hypertrophy and scar tissue formation in the infarcted region cause expansion of the left ventricle, eventually resulting in chronic heart failure. Here, we measured the relative parameters of left ventricular remodelling in acute MI rats after injecting the mixtures of hydrogel-hESC-CMs (Table S1, Figure 3A). On a per rat basis, the H-CM group also displayed the largest increase in ΔFS among the three biomaterial co-transplantation groups (Figure 3B, Figure S1), whereas no differences were observed when the other two groups were compared to the saline group.
Ventricular fractional area change (FAC), assessed by ultrasound echocardiography, is another assessment of cardiac contractile function. The results indicated that FAC decreased in all groups except the H-CM group at 4 weeks after transplantation (Figure 3C).
Although we did not find significant differences when analysing ΔFAC on a per rat basis ( Figure 3D), the above FS and FAC data led us to conclude that co-transplantation of hESC-CMs with hyaluronate hydrogel into the myocardium of acute MI rat models had a positive effect in improving ventricular contractile function.
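For reference, the indices discussed here (EF, FS and FAC) follow the conventional echocardiographic definitions sketched below; the formulas are not stated in the text itself, and the measurement values are hypothetical.

```python
# Standard echocardiographic indices referred to above (EF, FS, FAC),
# computed from hypothetical measurements. These are the conventional
# definitions, not values reported in this study.

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

def fractional_shortening(lvidd, lvids):
    """FS (%) from LV internal diameters at end-diastole and end-systole."""
    return 100.0 * (lvidd - lvids) / lvidd

def fractional_area_change(eda, esa):
    """FAC (%) from LV cross-sectional areas at end-diastole and end-systole."""
    return 100.0 * (eda - esa) / eda

# Hypothetical rat measurements (volumes in µL, diameters in mm, areas in mm²).
print(ejection_fraction(edv=500.0, esv=300.0))        # 40% EF
print(fractional_shortening(lvidd=8.0, lvids=6.0))    # 25% FS
print(fractional_area_change(eda=55.0, esa=38.5))     # 30% FAC
```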
| Hyaluronate-cardiomyocytes protect against arrhythmias after acute MI
Arrhythmia is one of the lethal complications of acute MI because electrical conductance defects around the infarcted zone of the heart can lead to instability of overall cardiac electrophysiology.
We induced and detected arrhythmias using programmed electrical stimulation (PES) in all 4 treated groups of acute MI rats (Figure 4E-F). Induced arrhythmia was detected in all 4 groups, but the H-CM group had the lowest ratio of induced arrhythmias (Figure 4G).
| DISCUSSION
The overall objective of this study is to screen for a suitable biomaterial that can be co-transplanted with hESC-CMs into animal models of acute MI in vivo and provide a reference for future clinical research. Cell therapy for CVD faces many challenges, such as mechanical loss from heart beating and cell death from stem cell anoikis, inflammation and immune rejection. These negative factors suppress the curative effects of cell therapy due to the low survival rate of cells in the damaged zone after transplantation.
Recently, scientists have found that combinations of cells and biomaterials can improve cell survival and increase cell retention by simulating the cellular microenvironment and activating anti-apoptosis signalling. 23,24,29,35 The biomaterial previously used to deliver hESC-CMs in pre-clinical trials was Matrigel. [5][6][7] Matrigel is a colloidal biological mixture, which consists of extracellular proteins derived from mouse sarcoma tumour cells. 36 But it is impractical to use Matrigel for clinical applications because it contains many undefined types of extracellular matrix proteins, oncogenic growth factors and other undefined ingredients. 37 Therefore, it is important for us to screen for a natural biomaterial to aid the delivery of hESC-CMs. Alginate is a natural biological polysaccharide which is stable, soluble, viscous and safe for use as a pharmaceutical excipient.
Moreover, as a cross-linked hydrogel, 38,39 alginate prevents adverse cardiac remodelling and dysfunction both shortly and long after acute MI in rats. 26 Hyaluronate-based gels are also appealing for co-injection, as this glycosaminoglycan polymer is one of the main components of naturally occurring extracellular matrix within mammalian connective tissues. It has been shown to promote angiogenesis in infarcted hearts, improve cell retention and survival, and left ventricular function. [26][27][28][29] Based on these findings, we selected alginate and hyaluronate-based biomaterials and cross-linked them to form hydrogels. 26,31 The resultant hydrogels were formulated and co-injected with hESC-CMs into the myocardium of rat acute MI models. We found that the combination of alginate and hESC-CMs effectively prevented left ventricular remodelling. It has been previously reported that injection of alginate hydrogels into the infarcted zone of rat acute MI models 26 can prevent cardiac deterioration. This was similar to what we observed. However, we found that the ventricular functional recovery was not as pronounced in the alginate co-transplantation group as other hydrogel co-transplantation groups. Consistent with our previous study, we found that co-delivery of hESC-CMs and Matrigel to the infarcted zone can also improve cardiac functional recovery. However, in this study, we demonstrated that hyaluronate hydrogel was the best among the biomaterials we screened for supporting hESC-CMs in cardiac regeneration after acute MI. The combination of hESC-CMs and hyaluronate-based hydrogel was the best in improving cardiac functional recovery, delaying left ventricular remodelling and preventing arrhythmias in rat acute MI models. While it is clear that hESC-CMs play the major role whereas hyaluronate plays the supportive role in cardiac regeneration after acute MI, the molecular mechanisms for this supportive function remain unclear. There are some reports suggesting that hyaluronate is one of the main components of the heart ECM, thus mediating cellular adhesion, self-renewal, differentiation and migration by providing a suitable microenvironment for cardiomyocytes. [40][41][42] In addition, hyaluronate can also be degraded rapidly in vivo and its degradation products can promote angiogenesis and cardiac regeneration. 40 In addition, it has been reported that hyaluronate rapidly restores metabolism of stem cells when co-cultured in vitro. 43 All of the above hypotheses may be possible mechanisms for the superior performance of the combination of hyaluronate and hESC-CMs in improving cardiac functional recovery after acute MI in vivo.
Programmed electrical stimulation is an important method to test the stability of cardiac electrophysiology. In our study, induced ventricular tachycardia was detected in all groups. It is known that hESC-CMs can aggregate and form cell islets upon retention in the infarcted area, thus increasing the risk of arrhythmia. 6,7 In addition, injected biomaterials may persist for a long time within the myocardium and may induce inflammation in the process. 44 Previous reports suggest that injection of synthetic hydrogels can worsen inflammation in the injected zone, suggesting that exogeneous hydrogels are not always beneficial for the heart, 30 and could disturb the electrical coupling between cardiomyocytes and hence induce arrhythmia. 21 In our study, acute MI rats that were co-injected with hyaluronate and hESC-CMs displayed the most stable cardiac electrophysiology and had the lowest rates of induced arrhythmias when stimulated. This could be because hyaluronate degrades rapidly in vivo within 12 hours after injection, and it is completely degraded within 13 days. 45 Hence, hyaluronate is the least likely biomaterial to cause inflammatory responses within the heart. 45,46 This could explain the lowest rates of induced arrhythmias in the group co-injected with hyaluronate and hESC-CMs.
In conclusion, we discovered that hyaluronate-based hydrogel is the most suitable biomaterial for delivering and supporting hESC-CMs in cell therapy for acute MI in vivo. Further work will be needed to explore the mechanism(s) underlying hyaluronate's role in supporting hESC-CMs during cardiac regeneration and functional recovery after acute MI.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2020-10-28T13:06:01.935Z | 2020-10-27T00:00:00.000 | {
"year": 2020,
"sha1": "6d9e6a0d631428dd57f7b5e2c2d6818f097242c3",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cpr.12942",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a4a756bcf6f97c6ffdd200d5dc0781c5971f856",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234914035 | pes2o/s2orc | v3-fos-license | Workable Performance Management System for Government of Pakistan
There is a lot of debate in Pakistan about the poor performance of the government and the bureaucracy. The performance of the government is marred by corrupt practices, inefficiencies and waste. The political leadership and the bureaucracy are both publicly called corrupt, and stories of corruption scandals, misuse of authority and wasteful working appear in the news media every now and then. We rank the lowest amongst nations in terms of basic government services like education and healthcare. This paper aims to highlight the reasons for this poor performance and recommends a workable performance management program which can help the government improve the performance of its various functions. It also gives an overview of the current practices of performance management in the public sector of Pakistan and briefly discusses the history of performance management in the modern world. Most importantly, we discuss the challenges faced while implementing a performance management system in the government sector, the differences between a private sector performance management program and one in the public sector, and the problems faced when implementing it in the developing world, especially in a country like Pakistan. The challenges become even more profound in a democracy where political interference has destroyed the core of the bureaucratic structure. But all is not lost, since many reforms put in place by successive governments over the past few decades have at least paved the way for a more progressive performance management program which can help Pakistan deliver on its promise of becoming a great Islamic republic: the Islamic Republic of Pakistan.
1) INTRODUCTION
Since the time of its birth, Pakistan has been struggling with the issues of resources and human capital. Over the years, resources increased but no effort was made to institutionalize the human capital of the country. The bureaucratic structure which was inherited from the British was adopted without any material changes. This structure had its roots in the colonial philosophy, with a strong emphasis on power for the rulers and very little accountability. However, in the past couple of decades we have witnessed some effort (though half-hearted) to reform this structure. Most noteworthy among these reforms is the introduction of a performance management system, which will have a far-reaching impact on the efficiency and effectiveness of the bureaucratic structure of the country.
Unfortunately, we can clearly see that performance management has not worked well in Pakistan. This is mainly because, given political realities, performance measures are often not used in program reviews or budget processes. Also, the periodic changes or reforms that have been made are primarily procedural and cosmetic, with little or no impact on the effectiveness of the program.
2) LITERATURE REVIEW
While we can trace records of evaluating students in the early universities, as well as of evaluating soldiers for physical and mental strength, hundreds of years ago, bureaucracies have been evaluating employee performance for thousands of years. Chinese civil servants and military officers underwent mental, moral, and physical fitness evaluations as far back as 200 BCE (Wiese & Buckley, 1998). In the Middle Ages, European Guilds used evaluations for certifying craftsmen as Masters, and early universities used exams to evaluate students of divinity and the liberal arts.
However, a scientifically backed performance management solution is only about 100 years old. Taylor summed up his work in his book The Principles of Scientific Management, which was voted the most influential management book of the twentieth century. Taylor (1911) proposed an efficiency model that became the basis of the modern performance management system and showed the world how a properly designed scientific performance management program can improve productivity. Frederick Taylor's scientific management theory, also called the classical management theory, proved to be one of the most influential works in performance management systems around the world. Taylor himself wrote that the larger profit would come to the whole world in general (Taylor, 1911).
Before we discuss performance management in the public sector or in bureaucratic structures, we need to understand the theory of bureaucracy as propounded by Max Weber. Max Weber is regarded as the mastermind of the theory of bureaucracy, and he declared that there must be a set of rules to guide the authorities in carrying out their official duties. He also opined that there must be a set of pre-qualifications and pre-requisites to join the bureaucracy, along with a structure of hierarchy. He made it mandatory for officials to be impartial in discharging their duties, without any loyalty to the ruler. He thought that officials must be appointed for life with the relevant pension and retirement benefits, and must undergo periodic trainings to keep them abreast of how work is done in public offices (Waters & Waters, 2015).
This theory by Max Weber is still valid for the Civil Service of Pakistan, and we can see and taste its flavour in the operation and structure of the government sector in Pakistan. However, since we inherited a colonial structure which was primarily designed to keep the subjects subjugated, the checks and balances of the structure proposed by Max Weber never came into practice in Pakistan, which allowed the political leadership to capitalize on and exploit this weakness of the system to the detriment of the people of Pakistan.
PUBLIC SECTOR PERFORMANCE MANAGEMENT IN PAKISTAN
Performance management is a systematic approach to improve performance. But it is easier said than done. It has not even worked for many large private sector organizations, so how can we expect it to work for a huge public administration set-up? Thomas Woodrow Wilson (1856-1924) was an American politician and academic who served as the 28th President of the USA from 1913 to 1921. He is considered the father of public administration in the United States. He first formally recognized public administration in an 1887 article entitled The Study of Administration. He wrote that "it is the object of administrative study to discover, first, what government can properly and successfully do, and, secondly, how it can do these proper things with the utmost possible efficiency and at the least possible cost either of money or of energy" (Wilson, 1887).
So how can efficiency be brought about in public administration? The answer is a scientific method to improve performance: performance management. It requires many wheels to turn simultaneously, just like the wheels of a car.
The first wheel is performance measurement. It is important to measure the input and output of a particular process or system. In the private sector, performance measurement is focused on things like profitability, market share and costs. However, these may not be relevant for most public sector organizations. Hence the stepsister's predicament ("if the shoe doesn't fit, get another"): we need a new set of performance measurements for public sector organizations, such as the outcome of the intended policy, the efficiency and effectiveness of the program and, most importantly, transparency. The second wheel is performance evaluation, which comes with an integrated set of consequences. In the private sector, it is done through a rating scale or ranking system which then translates into salary increases and bonuses in accordance with the evaluation. In the public sector this may mean a whole new world, requiring the entire system to be turned upside down. Here we must remember the Titanic warning ("it's what you can't see that can sink you"). As Daley writes, judgmental techniques (which are more prevalent) follow the old command and control model of authority. They are quite explicitly linked to extrinsic rewards. In fact, the existence and adequacy of the reward structure is an important subsidiary question with regard to their effectiveness. This has proved an important limitation in their use among public sector agencies (Dennis, 1991).
In public sector organizations we need a new set of rules for interpreting performance measurement information, along with the criteria and consequences to go with it, just as we needed a new set of performance measurements. The first challenge comes with interpreting performance measurement information to justify the money spent by the government. This requires a three-dimensional approach: the outcome of the intended policy, the efficiency and effectiveness of the program, and transparency. The second challenge is the consequence, which is constrained by the laws of the government, political interference and corrupt practices historically embedded throughout our bureaucratic hierarchy, from top to bottom. Remember the Heisenberg dilemma ("beware of the law of unintended consequences"). On the face of it, it looks difficult to overcome this three-dimensional challenge of evaluation, and it certainly looks impossible to break free from the three-dimensional chain of consequences. In other words, are we saying that we, the citizens of Pakistan, will never see the light of day? That we will never see the promised results of the performance management movement which has proved so helpful in many developed countries? No, we are not saying it cannot be implemented in Pakistan. This aspect is discussed in detail in the recommendation section.
The third wheel is capacity building of the incumbents to continuously and consistently improve performance, so that the bar keeps rising and standards continue to improve. The private sector manages this through training in the current role and development for future roles. In the public sector, training is treated as a reward and used for a select few, which in turn destroys the spirit of development and removes the need for individuals to put newly learned skills into practice. More importantly, training must be used to improve performance standards. As Daley notes, performance standards are meant to anchor an appraisal system to specific, job-related tasks. Inasmuch as they are consistent with written position descriptions (the basis/contract requirements upon which people are hired), they reinforce this connection between the job and the assessment of an employee's performance in the job. In addition, they help to communicate to the employee a clear understanding of job expectations (Dennis, 1991).
The fourth wheel is the process of implementing the performance management program, which requires a lot of effort and commitment from the political leadership, the bureaucratic leadership and the citizens. It also requires scientific change management techniques to implement such a major change. The absence of this wheel has practically nullified every effort of the Establishment Division to promote a performance management program in Pakistan. This wheel requires more detailed analysis and discussion, which is done in this paper under the performance prism, strategies and process implementation sections.
3) RESEARCH METHODOLOGY
The method used primarily to evaluate performance in the Federal and Provincial Civil Service is the Performance Evaluation Report (PER), formerly called the Annual Confidential Report (ACR). The senior officer of the incumbent civil servant fills in this report, and it is done annually.
The public sector in Pakistan is mandated by the Establishment Division to implement a performance management system, as laid out by the Pakistan Public Administration Research Centre, Management Services Wing, Establishment Division, Islamabad (Edition 2004). This edition and the previous directives of the Establishment Division directed and guided federal government departments to implement a performance management system. The process is also defined in the guidelines, i.e. how to fill out the forms and reports (A Guide to Performance Evaluation, 2004).
Goals were to be set at the beginning of the performance year, and reports were to be submitted to measure progress against the goals. The guide also contained directions, guidelines, and various forms to be filled out by staff at various levels of the government hierarchy (A Guide to Performance Evaluation, 2004).
There are many types of performance evaluation forms for officers, according to civil servants' ranks. PERs are provided by the Establishment Division. For example, the PER form for officers in BPS-17 contains: (1) personal information of the officer, (2) self-evaluation by the officer, (3) evaluation of personal qualities by the reporting officer, (4) pen picture, overall grading and fitness for promotion by the reporting officer, (5) remarks of the countersigning officer, and (6) remarks of the second countersigning officer. The PER form for officers in BPS-19 and 20 is the same except for the fitness-for-promotion part. The PER form for officers in BPS-21 has three parts: (1) personal information of the officer, (2) self-evaluation by the officer, and (3) evaluation of personal qualities and pen picture by the reporting officer (Hanif, Jabeen & Jadoon, 2016).
4) RESULTS
A survey of bureaucrats was conducted by Haque and Khawaja, and the findings were in line with the general perception of the populace of Pakistan. The following statement of the findings, as reported by Haque and Khawaja, is alarming.
93% of the given sample felt that performance has deteriorated over the years, and 38% of them thought that the deterioration is extreme (Haque & Khawaja, 2007).
5) RECOMMENDATION
Before we start to implement any kind of reform, let alone a major change such as a performance management system, we must recognize our political limitations. In Pakistan we would like democracy to continue, and in politics everything comes down to politics. This holds true for the entire democratic world, developed and developing alike.
This means that all performance measures will be subject to political will. The performance measures, specifically evaluations and accountability, will be and must be viewed through the electorate's lens. As long as we do not recognize this limitation and make it work to our advantage, all our efforts to implement a performance management system will go down the drain.
According to interviews conducted by Ayesha Hanif (University of the Punjab, Lahore), Nasira Jabeen (University of the Punjab, Lahore), and Zafar Iqbal Jadoon (University of Central Punjab, Lahore), political meddling and interference were highlighted as the most important factors affecting not only performance but also the entire functioning and working of the civil service. One senior bureaucrat observed that merit starts deteriorating when, no matter how sluggish or dim-witted an officer is, he or she gets a good score on the PER and better transfers and places of posting on the basis of political connections. The lack of merit-based transfers and postings makes officials unconditionally loyal to their political masters, and this fear factor heavily influences the performance of civil servants (Hanif, Jabeen & Jadoon, 2016).
All new literature on performance management programs is designed to match private sector efforts to improve productivity and efficiency, from Frederick Taylor to this day. Although there is broad agreement that some form of performance measurement system is an important component of organisational control, there is no general model that provides a precise prescription of such a system (Fitzgerald, 2001). But none of this works for the government sector. Even the latest versions of Kaplan's balanced scorecard, which holds that what you measure is what you get, are not enough for public sector organizations and government functions (Kaplan & Norton, 1992). Lynch and Cross's performance pyramid also does not fit the glove of public sector performance management requirements. However, one framework proposed by Andy Neely and Chris Adams is worth a closer look. They call their performance management framework the Performance Prism. The most important element of this new framework is that it not only recognizes the existence of stakeholders who are not currently considered stakeholders in other performance management frameworks, but also requires their contribution and quantifies it. There is a 'quid pro quo' between the organisation and all its stakeholders: stakeholders expect something from the public sector organisation, but the public sector organisation also wants something in return. Performance measurement should consider whether such stakeholders are delivering what the public sector organisation wants from them.
THE PRISM
The Performance Prism manages performance through five interrelated angles. When light falls onto a prism it is refracted, revealing the complexity of light. In a similar fashion, the Performance Prism reveals the hidden complexities of a performance management program.
STAKEHOLDER'S SATISFACTION & CONTRIBUTION
Neely and Adams recognize the importance of stakeholder mapping, just like John Kotter, a mastermind behind the change management philosophy. We will look at this angle from the perspective of the first to fourth wheels discussed earlier, which simply means that all the wheels will have to be seen through the performance prism. This will highlight the complexities of the performance management program in the public sector, some of which have already been discussed. Each stakeholder has different needs. For example, the citizens of Pakistan demand education and health services from the government, but do they vote on the basis of these services, or do they give more importance to their ethnicity, bradri or tribe?
Also, are we, the citizens, as a group willing to pay taxes, or will most of us evade taxes as in the past? While it is the responsibility of the bureaucracy to provide services to the citizens of Pakistan, the citizens also have something to contribute. The result will be a compromise on the part of both parties, and the journey will be slow. Likewise, the political leaders demand performance from the bureaucrats, but do they celebrate and reward performance, or their own agendas, such as corrupt practices, nepotism and the parchi system? Here too the bureaucracy is required to give political leaders the performance they desire, but in return the bureaucracy also has a right to demand contribution from the political leaders through their commitment to the well-being of civil service officers and through rewards for those who perform well. The judiciary, another stakeholder, has to amend the laws to make room for performance-based consequences rather than rewards and promotions based on length of service and seniority.
STRATEGIES & PROCESSES
In the public sector, strategy is critically important, since each goal of every department is long term and requires a proper strategy. For example, the goal of economic growth, a seat on the Security Council, getting off the FATF grey list, a robust health care system, or an effective education system all require a proper strategy to achieve that specific goal. Strategy in the Performance Prism means how the goal will be achieved.
After the goal has been set with all the stakeholders, i.e. who is going to deliver what, suitable strategies have to be defined, with specific milestones to be achieved along the way, and it must be determined whether the right business processes exist to support those strategies.
Performance measures will have to be developed to see how well these processes are working. Business process re-engineering is a method used to identify redundant processes, and Porter's value chain analysis model is a way to identify the key processes. Identifying the key processes is one part, but the capability of the people and the availability of the technology and infrastructure required to operate the key processes are equally essential. Without capable people, technology and infrastructure, even the best of processes will fail to deliver the best-outlined strategy. The same point is echoed across the globe by renowned researchers and professors, most notably Professor Kotter, who writes in his book, and reiterates in his other books and articles, that change is hard: 70% of change efforts fail (John P. Kotter, 1996).
He also explains in his book how to pull off a major transformation such as implementing a robust and effective performance management program across the public sector of Pakistan. His 8-step model is most effective for pulling off this kind of transition. It is interesting to note that he also describes the steps of this model as wheels which have to be put in motion simultaneously, with each step given equal attention, otherwise the change effort is bound to fail. In fact, an imperfect performance management system can work, but a hasty deployment will surely fail. It is therefore recommended that the following 8 steps of successful transformation be adopted if and when a performance management system is deployed for the public sector in Pakistan.
EIGHT STEPS TO TRANSFORMATION
1. Establishing a Sense of Urgency
2. Forming a Powerful Guiding Coalition
3. Creating a Vision
6) CONCLUSION
Pakistan has come a long way as a developing country, and the current reforms being undertaken are very promising, but without the proper implementation of an effective performance management solution, the transition from a fragile and vulnerable economy to a stable and strong one will remain a far cry. Taxpayers' money will continue to be laundered and wasted. We also need to be cognizant of the fact that the current governance structure of Pakistan is too top-heavy to sustain. Even if the private sector grows our exports and our manufacturing units reduce our dependence on imports, this heavy structure with hundreds of federal and provincial departments will remain top-heavy, will continue to threaten our existence, and will force us to pile up the debt burden. Therefore, we can safely conclude that a strong performance management program, specifically tailored to the needs of our bureaucracy, is the need of the hour for our beloved country. It is strongly recommended that a new performance management program be developed keeping in view the challenges of a political regime like Pakistan's, with consequences that can be applied consistently across the board. The implementation must be done according to the principles of change management as propagated by Professor Kotter. "Long Live Pakistan".
REFERENCES:
A Guide to Performance Evaluation, Section 2 sub section ii clause ( | 2021-05-22T00:05:51.075Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "bd487048c89a8acd463e6aa3348df6d461a18093",
"oa_license": null,
"oa_url": "http://ibtjbs.ilmauniversity.edu.pk/journal/jbs/16.1/10.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "200f93a99fd4531110bf77e691d43b5ad185bfa0",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
225590547 | pes2o/s2orc | v3-fos-license | Quality management of electric energy and compensation of reactive power in the electric power system by means of stimulating tariffs
The article notes that stimulating tariffs are effective enough to attract consumers to participate in the process of improving power quality and compensating reactive power. It is shown that the application of premiums and discounts must not change the revenue side of the ESO budget; otherwise there is a contradiction with the principles of state regulation of tariffs for the services of natural monopolies, and in that case legal barriers do not allow the model to be used in practice. The article proposes criteria that must be considered in the development and approval of the scale of discounts and allowances to ensure the legal soundness of the incentive tariff and to avoid its cancellation, as happened in 2000, when the proposed mechanism of discounts and allowances, with the support of the Antimonopoly Service and the Ministry of Justice of the Russian Federation, was cancelled as violating the principles of state regulation of tariffs under the natural monopoly position of the network organization.
Introduction
Stimulating tariffs are widely used in world practice to attract electricity consumers to participate in the process of improving the quality of electricity and compensating reactive power. The incentive tariff Cstim for the consumer is the sum of the main part Cbas and an incentive part. The main part is the same for all consumers of the electric grid organization receiving power at the same voltage level, and includes the average costs associated with the production and transmission of electricity [1].
The stimulating part is not the same for all consumers: it represents a fee for distorting power quality indicators (PKE) or for violating the reactive power consumption mode, and it is formed by surcharges and discounts applied to the main part of the tariff [2,3].
Relevance, scientific significance
For certain values of the incentive part of the tariff ΔCND, the installation and operation of technical means that improve the quality of electric power and the reactive power consumption mode in a customer's network can be beneficial for the consumer and can stimulate actions aimed at their use [4]. Unfortunately, the application of incentive tariffs currently faces legal barriers. Attempts to introduce incentive tariffs in the form of discounts and surcharges fail. The lack of scientific knowledge and approaches for making organizational decisions currently prevents the use of incentive tariffs.
Formulation of the problem
Despite the saturation of the market, without stimulating tariffs technical means are not acquired or used by electricity consumers. The quality of electrical energy (QE) remains low. The damage from low QE in Russia is estimated at a minimum of about $25 billion per year [5]. Reactive power flows are increasing. Nevertheless, the need to develop and implement a scale of discounts and allowances in the Russian Federation is actively discussed in the technical literature [6]. The development of such a scale requires formulating the requirements, conditions and restrictions that would remove the legal barriers to its use in practice. Without this, it is not possible to interest consumers in using technical means for improving the quality of electricity and compensating reactive power.
Theoretical part
The mathematical expression for the incentive tariff in the case of allowances ΔCND can be written in one form as Cstim = Cbas + ΔCND, or in another form as Cstim = Cbas(1 + ΔCND/Cbas). The factor (1 + ΔCND/Cbas) can be called a raising factor to the tariff.
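As a purely illustrative sketch of how the raising and reduction factors work (the numeric values below are invented for illustration and are not taken from the article), the incentive tariff for a surcharge or a discount can be computed as follows:

```python
# Illustrative sketch only: incentive tariff with a surcharge (raising factor)
# or a discount (reduction factor). All numeric values are hypothetical.

def incentive_tariff(c_bas, dc_nd=0.0, dc_ck=0.0):
    """C_stim = C_bas + dC_ND (surcharge) or C_stim = C_bas - dC_CK (discount)."""
    return c_bas + dc_nd - dc_ck

c_bas = 4.0   # hypothetical base tariff, currency units per kWh
dc_nd = 0.4   # hypothetical surcharge for a consumer distorting the PKE
dc_ck = 0.4   # hypothetical discount for the affected consumer

raising_factor = 1 + dc_nd / c_bas    # (1 + dC_ND/C_bas)
reduction_factor = 1 - dc_ck / c_bas  # (1 - dC_CK/C_bas)

print(incentive_tariff(c_bas, dc_nd=dc_nd))  # 4.4 = C_bas * raising_factor
print(incentive_tariff(c_bas, dc_ck=dc_ck))  # 3.6 = C_bas * reduction_factor
```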
If a discount ΔCCK is applied to a consumer, then the expressions for the incentive tariff take the form Cstim = Cbas - ΔCCK = Cbas(1 - ΔCCK/Cbas), where (1 - ΔCCK/Cbas) can be called a reduction factor. The effectiveness of the incentive tariff should be considered on a model. To this end, it is proposed to consider an incentive tariff model consisting of three interrelated elements. The first is the energy supplying organization (ESO). The second is a consumer with a load that distorts a power quality indicator, or a consumer that violates the reactive power consumption mode; we call this consumer the culprit of the violation of the electricity consumption mode (CVC). The third element is a consumer who is forced to bear the consequences of the CVC's violation of the electricity consumption mode. The consequences may be associated with low power quality, with a limited ability to increase power consumption because reactive power loads the network and limits the capacity of the supply network, and so on. Using legal terminology, we will call this consumer the "victim" (PVS). The consumer PVS can be regarded as an equivalent generalized consumer uniting all the consumers of the network organization except the consumer CVC.
It is obvious that there are many parameters of the electricity consumption mode that may not meet the established requirements. Among them are the PKE and the indicators of reactive power and energy consumption. In the model, the consumer CVC is considered a violator of one of the parameters of the electricity consumption mode. If the violation of the mode involves several parameters, the model must be considered for each of these parameters separately. Suppose this parameter is non-sinusoidality [7,8].
In this case, the CVC violates the requirements for the PKE not only in its own network, but also in the network of the energy supplying organization, which, due to circumstances beyond its control, has to supply poor-quality electricity to the consumer PVS. This consumer is harmed. The incentive tariff model provides a formalized way for the culprit of the PKE distortion, the consumer CVC, to compensate the consumer PVS for the damage, not directly but indirectly through the ESO. Compensation occurs in two stages.
At the first stage, the ESO compensates the damage to the consumer PVS in the form of a discount to the tariff. At the same time, the income of the ESO is reduced relative to what it could have received were there no consumer CVC distorting the PKE in the network of the energy supplying organization. From a legal point of view, the right of the ESO to supply high-quality electricity to the consumer PVS is violated. As a result, the energy supplying organization does not receive a portion of the income that it could receive under normal conditions of civil transactions. Such lost income is referred to as lost profit in accordance with Art. 15, p. 2 of the Civil Code of the Russian Federation. The lost profit of the ESO is measured by the amount of the damage compensation ΔCCK paid to the consumer PVS.
At the second stage, the consumer CVC compensates the ESO for the lost profit in the amount of the damage paid to the consumer PVS. The consumer CVC pays a premium to the tariff, ΔCND. The energy supplying organization thereby recovers its expenses for compensating the damage to the consumer PVS. The premium ΔCND thus acts as a formalized way of compensating the lost profit of the ESO on the one hand, and as compensation for the damage to the consumer PVS on the other. The role of the energy supplying organization in this model is to pay the damages ΔCCK to the consumer PVS and to recover the resulting lost profit ΔCND from the consumer CVC.
The model assumes the participation of the energy supplying organization in the process of collecting and paying compensation for damage, although this is not its main activity. Nevertheless, the supply of poor-quality electricity to the consumer PVS affects the interests of the energy supplying organization: it is forced to supply poor-quality electricity and is responsible for this.
It is obvious that the model provides for the responsibility of the offender in the form of a surcharge to the tariff and implies the inevitability of its application. The premium is a stimulating factor: it has quite a strong influence on the consumer's decision to install compensating and filter-compensating devices (FCD). Without it, it is hardly possible to influence the consumer's decision to install and operate such devices.
Research results, practical significance
Successful operation of the model is possible only under certain conditions and requirements. The requirements relate to the correctness of the application of discounts and surcharges by the state regulatory authority in the case of formalized compensation to the consumer PVS at the expense of the consumer CVC. The discount is a compensation of damage; the surcharge is a recovery of lost profit. Discounts and allowances, one way or another, change the revenue side of the ESO budget.
When discounts are used, the revenue side decreases; when allowances are applied, it increases. It is therefore necessary to guarantee that the energy supplying organization faces no difficulties in its main activity and that the budget revenues formed by the basic component of the incentive tariff Cbas remain unchanged, preventing their unjustified reduction on the one hand and their unjustified increase on the other. This requirement is dictated by the principle of state regulation of tariffs for the services of natural monopolies. The revenues of the network organization as a natural monopoly must be economically justified. The tariff for the transmission of electricity, included in the main (basic) part Cbas of the incentive tariff, is subject to state regulation and is approved by government bodies: the Federal and Regional Tariff Services (FST and RST). When the incentive part is used, a change in the income of the ESO is unacceptable; otherwise, a legal inconsistency arises.
The latter is quite possible under market conditions, when the officials who decide on the size of discounts and premiums have no clear and definite criteria for assessing the decisions made. They must ensure the accuracy and transparency of information showing that the network organization receives no additional income in excess of the revenues due under the approved electricity transmission tariff. In this case, consumers will have no distrust or doubts about the fairness of the allowances and will be ready to accept and support the mechanism of discounts and surcharges.
Such a decision criterion can be the criterion of economic feasibility of the ESO's revenue, first proposed in [9].
For the model, this criterion takes the form of expression (5), where D*ESO is the income of the ESO related to the supply of electricity to the consumers CVC and PVS, formed by the fees charged for the supplied electricity without discounts and premiums, and ΔCND and ΔCCK are the allowance applied to the consumer CVC and the discount granted to the consumer PVS.
In the general case, ΔCCK in this model is expressed through the other quantities of the model. When kESO = 1, the income of the energy supplying organization does not depend on the application of discounts and surcharges, and the principle of economic feasibility of income is not violated.
The criterion for evaluating the decision made, expression (5), implies the interdependence of the amount of the allowance and the discount. For example, if a surcharge is defined, the discount cannot be selected without considering its value. The relationship between their values is given by expressions (8)-(14).
In expressions (8)-(14), WESO is the electricity consumed by all consumers of the ESO, W*CVC is the share of electricity consumed by the consumer CVC, and W*PVS is the share of electricity consumed by the consumer PVS.
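Because expressions (5)-(14) are not reproduced above, the following sketch only illustrates the revenue-neutrality idea described in the text: the premium collected from the consumer CVC must offset the discount granted to the consumer PVS so that the ESO's income does not change (kESO = 1). The balance condition used below (ΔCND·WCVC = ΔCCK·WPVS) is an assumption consistent with that description, not a formula quoted from the article, and all numeric values are invented.

```python
# Hedged sketch of a revenue-neutrality check for a surcharge/discount pair.
# Assumption (not quoted from the article): neutrality holds when the extra revenue
# collected from the culprit CVC equals the revenue forgone on the victim PVS,
# i.e. dC_ND * W_CVC == dC_CK * W_PVS, giving k_ESO = D_with / D_without = 1.

def k_eso(c_bas, dc_nd, dc_ck, w_cvc, w_pvs):
    d_without = c_bas * (w_cvc + w_pvs)                         # income with no discounts/premiums
    d_with = (c_bas + dc_nd) * w_cvc + (c_bas - dc_ck) * w_pvs  # income with the incentive parts
    return d_with / d_without

def discount_for_neutrality(dc_nd, w_cvc, w_pvs):
    """Discount dC_CK that keeps k_ESO = 1 for a given surcharge dC_ND."""
    return dc_nd * w_cvc / w_pvs

w_cvc, w_pvs = 200.0, 800.0  # hypothetical energy consumed by CVC and PVS, MWh
dc_nd = 0.4                  # hypothetical surcharge
dc_ck = discount_for_neutrality(dc_nd, w_cvc, w_pvs)
print(dc_ck)                                   # 0.1
print(k_eso(4.0, dc_nd, dc_ck, w_cvc, w_pvs))  # 1.0 -> ESO income unchanged
```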
Conclusion
Condition (14) must be taken into account when developing and approving a scale of discounts and premiums. If this is not done, legal barriers arise that will not allow the mechanism to be used to control the quality of electricity and the process of reactive power compensation.
An attempt to introduce a stimulating tariff model at the state level has already been made [2,3]. However, it failed. The implemented mechanism for applying discounts and surcharges met great resistance from consumers and, with the support of the Anti-monopoly Service and the Ministry of Justice of the Russian Federation, was cancelled as violating the principles of state regulation of tariffs given the natural monopoly position of the grid organization [10]. | 2020-07-16T09:06:50.506Z | 2020-07-09T00:00:00.000 | {
"year": 2020,
"sha1": "fe27758bfe8f91838caf49e926de59aad2752004",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/38/e3sconf_hsted2020_01080.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "eb9dd3bb16792d129d86101d2f1496b378e1cd36",
"s2fieldsofstudy": [
"Economics",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
251287100 | pes2o/s2orc | v3-fos-license | “I grabbed my stuff and walked out”: Precarious workers’ responses and next steps when faced with procedural unfairness during work injury and claims processes
Purpose: Injured workers can experience adverse effects from work injury and claims processes. Workers may be treated unfairly by employers, compensation boards, and return-to-work coordinators; however, how workers respond to these challenges is unknown. This article describes how injured precarious workers responded behaviourally and emotionally to procedural unfairness in work injury and claims processes, and what workers did next. Methods: Interviews were conducted with thirty-six precariously employed injured workers recruited in Ontario through social media, email, cold calling, word-of-mouth, and the "snowball" method. Thematic code summaries were analyzed to identify how precarious workers responded to procedural unfairness. Results: Workers went through all or most of these five stages (not always linearly) when faced with procedural unfairness: (1) passive, (2) fought back, (3) quit pursuit of claim, (4) quit job, and (5) won or got further in fight. Feeling confused, angry, frustrated, unsupported, disappointed, determined, optimistic, and wary were common emotions. Conclusions: Identifying unfairness and its emotional, behavioral, and material effects on workers is important to understand implications for compensation systems. Understanding and recognizing unfairness can equip employers, legal representatives, compensation boards, and physicians to address and prevent it, and provide worker resources. Policy changes can ensure accountability and consequences for unfairness initiators. Supplementary Information: The online version contains supplementary material available at 10.1007/s10926-022-10058-3.
Introduction
Work injury and claims processes are administratively complex, handled by multiple parties, and can be emotionally charged. Parties involved may hold different standpoints, creating room for misinterpretation and miscommunication [1]. While workers' compensation organizations provide support to workers, that support is sometimes accompanied by difficult processes that negatively affect workers' health. Claimants with fewer cultural or social resources (e.g., income support benefits, protective employment laws, good workplace culture) may be more vulnerable to mistreatment and poorer service compared to claimants with more resources [6]. Those with short-term job contracts or few work hours face an increased likelihood of having little or no access to social security benefits, and those who are self-employed have no access to employment standards. Overall, precariously employed workers do not receive the same protections enjoyed by those with adequately paid and secure employment [7]. For this study, we define precariously employed workers as those who are economically insecure because of low wages or inconsistent income (contract, part-time, self-employed, and minimum wage employment)¹. Earlier research found that precarious workers in Ontario were uncertain how to access workers' compensation systems and were reluctant to speak up about their rights for fear of job loss [8]. As well, some employers misinformed precarious workers by telling them they were ineligible for workers' compensation. It is important to note that these findings are not relevant to higher-paid employed workers. In addition, recent immigrants are overrepresented in precarious employment. They can be unfamiliar with their rights and may not speak the local language, and are therefore vulnerable to unfair treatment in work injury and claims processes [9]. As well, young workers and those unfamiliar with the concept of workers' compensation may struggle to access these benefits.
In many instances, return-to-work procedures are perceived by injured workers as unfair. Research has shown that, in workers' compensation claims, medical evidence of work-relatedness is often unclear. Additionally, the worker's word may be pitted against their employer's. A Canadian study found that employers contested the work-relatedness of claims to avoid compensation costs and the hassle of work accommodations [10]. The authors also found that workers could "over comply" with uncooperative employers for fear of job loss and continued to work despite an injury. Such claim-suppressing action treats workers unfairly by increasing the risk of worsening injuries and overall health. Additionally, a systematic review of English-speaking countries found that compensation authorities (i.e., caseworkers, medical evaluators, adjudicators) interacted with workers unfairly and displayed behaviours and attitudes that demonstrated a lack of trust (such as poor listening, negative assumptions, suspicious attitudes) [11]. As a result, workers reported negative relationships that gave way to hostile interactions, which could eventually lead to unfair treatment through denial of healthcare and compensation.

¹ This definition of precarious employment is the consistent approach being taken by a series of studies and research outputs that are part of a partnership network examining the effectiveness of policies and regulatory frameworks in protecting precarious workers and supporting RTW after a workplace injury or illness [8].
Difficult claims processes have been linked to poor mental health among injured workers. Injured workers faced with employer suspicion about the legitimacy of their injury have experienced job insecurity, negative workplace relations, and feelings of isolation [12]. Difficult relationships with workers' compensation systems have also been found to be a strong predictor of poor mental health outcomes [13]. When claimants are required to navigate contacts, organize documents, and submit these documents correctly, they can experience these systems as complicated, and using these systems may negatively affect their health [6]. Ontario workers who felt they were constantly fighting the system for payments, acknowledgement, and services were more likely to feel alienated, angry, frustrated, depressed, and anxious about their future, families, and employment. An Ontario survey study found that 70% of injured workers felt stressed about the workers' compensation process and reported that their health was adversely affected by their injury [12]. Other research found that Ontario workers felt that the system, as well as their families, sometimes did not understand the difficulties they had in managing their injuries, and how their injury affected their employment and personal life [14]. Findings from Australia are similar: Collie et al. [15] found that a significant contributor to psychological distress during work injury was workers feeling very concerned that their workplace would respond negatively to their injury and claim. Additionally, workers who needed support while navigating the claims process also reported psychological distress [15].
Literature on fairness also illustrates the powerful impact perceived injustice has on workers' health. An Australian study of workers who experienced perceived injustice in their claims process found that they had worse mental health 6-12 months post-injury [16]. Workers who hired lawyers and/or had medical assessments during their injury were found to have lower perceptions of fairness and poorer health [17]. Workers' perceptions of low workplace fairness have been associated with poor health, psychological strain, and emotional exhaustion [3]. Low perceived overall fairness in the claims process can also adversely impact the outcome of a worker's return-to-work (RTW) [5]. On the other hand, goodwill in a worker's social environment can increase the chances of a successful RTW, possibly due to the worker's perception of employer respect and efforts towards getting the worker back [4].
Although unfairness in processes associated with work injury has been identified, as have the consequences of difficult claims processes for workers' mental health, little is known about what workers do next. This study sheds light on how injured precarious workers in Ontario, Canada, respond to experiences of procedural unfairness. How do workers respond to perceptions of unfairness? How do their feelings affect the next steps they take with their claim? Our analysis describes workers' experiences of procedural unfairness, the stages workers go through when faced with procedural unfairness, and the emotional and behavioral consequences of this unfairness at each stage.
Methods
This study is part of a research partnership network examining the effectiveness of policies and regulatory frameworks in protecting precarious workers and supporting RTW after a workplace injury or illness [18]. This article specifically explores how injured workers responded behaviourally and emotionally to experiences of procedural unfairness in work injury and claims processes in Ontario.
In Ontario, workers' compensation is managed by the Workplace Safety and Insurance Board (hereafter called 'workers' compensation'). Most Ontario employers are required to have this coverage and they pay experience-rated premiums. Employers receive rebates or incur surcharges depending on their reported work accident rate relative to their employment type group. At the time of the study, each day of "lost-time" work created an expense for the employer. The system's intended purpose was to motivate employers to maintain safe workplaces and engage in expedient work accommodations following accidents or injuries. However, the system also leads to employer cost avoidance via injury claims suppression [19].
The study was conducted between 2017 and 2021 and involved interviews with workers and employers. As this analysis focuses on workers only, we provide methodological detail and findings from the employer group elsewhere [8,20]. Recruitment criteria for workers were over age 18, English language proficiency, precarious employment, and experienced a work-related injury in the last 10 years. Recruitment occurred via social media, email lists, cold calling, word-of-mouth, and the "snowball" method. Recruitment text was as follows: We seek workers willing to be interviewed for a study of experiences of workers who have been injured while working. Specifically, we seek adults who have been employed in temporary contract employment, temporary agency employment, part-time or minimum wage jobs as well as people who are self-employed. We want to understand your experience trying to RTW after work injury including interactions with your employer, with other employers and (if you file a claim) with workers' compensation. We are also interested in the impact on your family.
Our sample included 36 workers with a variety of precarious employment contracts, and a relatively even distribution of men and women from various Ontario industries (see Table 1). This was considered an appropriate and adequate sample, as it sufficiently answered the research question [21,22]. In-depth, semi-structured interviews were conducted in person and by telephone, averaging 30-60 minutes (during COVID-19, all interviews were phone interviews). Participants were provided a $50 honorarium. Workers were asked about their experiences of their job, employment relations, work injury, workers' compensation and RTW, management of sickness absence, impact on home and family, and policy/process improvement suggestions (for full interview questions, see Online Resource 1). Field notes were written after each interview to note findings, compare data, and discuss (with the research team) any questions researchers had about workers' experiences, upon completion of the interviews. All recorded interview transcripts were transcribed verbatim. Our data gathering and analysis process was iterative: we analyzed data as we gathered it. Going back and forth between data and analysis allowed us to refine questions.
(1) Passive
Many workers in our study (n = 19) were initially passive when they perceived an unfair situation relating to the process of work injury and RTW, as shown by the fact that they did not actively respond to or resist it. They allowed events to happen, even if they disagreed with the outcome or course of action, or because they did not understand the process. In this stage, we found that confusion was the prominent emotion.
Workers were unsure of their rights and how to act following a work injury.
HR, another production manager and main supervi-sor…said, "Unfortunately at this time, you can no longer work here, we're going to have to get you to leave 'til further notice" …I was like "What do you mean?
I haven't even gotten anything done with [workers' compensation] yet…" …I grabbed my stuff and walked out. (Ken, line worker, limited-term contract)
Ken talked about leaving his job without trying to contest his workplace laying him off following a work injury. His reaction and language "what do you mean?" demonstrated confusion.
Yvonne was also passive in her efforts to communicate with her workers' compensation RTW specialist when she felt he improperly assessed her accommodated work and failed to follow-up with her:
He [return-to-work coordinator] is telling the two [workplace] managers…that he doesn't know why he was there [to review my accommodation situation]. I was speechless…I didn't know what to say… (Yvonne, retail worker, full-time minimum wage).
Nakeisha complied with her workplace, workers' compensation, and RTW coordinator and returned to work despite her nervousness about the return being against her doctor's recommendations. During this time, she was in pain and her health was not improving. Her confusion was clear when she said: "They just kept playing games with me" and described lack of information about the workers' compensation claims process.
Workers often described instances in which employers took advantage of workers' lack of knowledge of their rights. According to workers, some employers had them sign forms confirming that they had adequate training before the injury (suggesting the injury was the worker's fault), possibly so that workers did not press for a workers' compensation claim. Workers also described employers encouraging them to use sick or vacation days to recover from a work-related injury rather than making a workers' compensation claim. The workers described being initially unaware of being taken advantage of and, as a result, agreeing with processes proposed by employers. Employers risk fines by failing to report a workplace injury; however, by not reporting, employers avoided extra workers' compensation-related costs (premium surcharges) and processes related to RTW.
Thematic codes were created by the research team after they discussed initial findings: deductive codes (based on issues that they found were reflected in previous literature and interview questions) and inductive codes (reflective of new data, not framed in interview questions). These codes were thoroughly discussed and refined by all team members until a coding framework was agreed on. Interviews were then dual coded on the qualitative data analysis software NVivo by varied pairs of 6 research assistants, which ensured inter-rater reliability. Finally, coded segments were analyzed for themes, patterns, and nuances by the whole team. Analyses were critically discussed among all researchers.
This study's ethical approval was reviewed through the University of Waterloo Research Ethics Committee and the University of Ottawa Research Ethics Committee. Participants provided informed consent prior to participating. For confidentiality, pseudonyms are used instead of participants' names.
Results
The findings describe injured workers' experiences of procedural unfairness during their work injury and claims processes and how they responded to these experiences. Types of injustices include being laid off amid an ongoing claim, receiving inadequate modified work and/or medical attention, employer claim suppression, workers' compensation claim denial, and unresponsive claim adjudicators.
We propose a five-stage flowchart to depict workers' experiences when faced with procedural unfairness. The phases align with the general pathway of the workers' compensation process: (1) initially being passive, (2) later realizing the injustice and fighting back, (3) some workers quitting pursuit of the claim, (4) other workers quitting their job due to extreme pain and/or frustration, and, in some situations, (5) workers winning or getting further in their fight (see Fig. 1). Many workers in our study fell under all or most of these stages at some point during their experiences with unfairness. These stages were not always linear, as workers had unique situations (i.e., workers began with passivity, started fighting back, then quit the claim). Sometimes, workers reported quitting their jobs because handling unfairness, among other factors, was overwhelming. In other cases, workers moved further in their fights against unfairness and won appeals. However, overall, we found that 23 (out of 36) workers experienced 2 or more of our proposed stages in the RTW process. We also summarize workers' common emotional responses at specific stages, to illustrate how their experiences affected them emotionally.
(2) Fought back
Fighting back was a stage where workers took action to dispute unfairness. Our study found that 15 out of 36 workers fought in at least one of three ways: (1) complaining or pressuring those involved, (2) taking matters into their own hands, and/or (3) getting help from others. During this stage, anger and frustration were the most prominent emotions; these appeared to give workers motivation to fight.
(1) Complaining or pressuring those involved
Workers fought back by complaining and pressuring parties involved, to prompt action. Ken explained how he fought inadequate medical assessment of his health condition by frequently complaining to workers' compensation, his workplace, and his RTW specialist. His complaints eventually led him to receive attention from workers' compensation, who placed him in a retraining program for workers with permanent injuries. Ken's frustration was evident when he explained how long it took to get what he needed.

I…kept complaining to [workers' compensation] … Every week I would call and complain…They [workers' compensation] sent me to an MRI…Then they sent me to their own…specialist…It took about a month for me to get that. (Ken, line worker, contract).

Pressuring was used by Wesley, who described how he pursued his workplace manager to manage his workers' compensation claim until it was accepted, after it was denied by workers' compensation on the basis that his injury was not work-related. His frustration was evident in his repeated attempts to get his employers to act on his behalf.

Terry also angrily described how he raised a previous injury claim to workers' compensation years later, when he finally received documentation. The claim had been ignored by his workplace and workers' compensation after his work agency claimed that he was not their employee.
I did my report to… [workers' compensation]. They [workplace] wrote a letter to the board and said that I didn't work for them…The board never followed up with anything. Is this true or not? They didn't care… Years later when I got a copy of the claims, I'm going "Hey!" (Terry, truck driver and forklift operator, fulltime minimum wage).
Lisa researched programs after she became eligible for retraining funded by workers' compensation. She did this after her RTW specialist gave her one school as an option, to ensure that her retraining was adequate for her, while within the RTW specialist's budget. Lisa appeared angry when describing her RTW specialist who "made it seem as if she's the one…who makes the guidelines as to where I can and cannot go" for retraining. From this frustrating experience, Lisa was motivated to seek a better training program.
(3) Getting help from others
Some workers accessed their rights by seeking help from other parties. These parties helped them to stand up to the injustice and provided advice or representation. Fatima's doctor advised that she see a lawyer to help her fight for disability insurance (for which she ultimately did not apply).
Bob described how he repeatedly asked workers' compensation to incorporate his hospital paperwork with his claim file, following his initial refusal to sign workers' compensation papers when a workers' compensation representative met him at the hospital. At that time, he was unwell, had not fully read the documents, and did not understand the process. He later filed two workers' compensation claims, which were both denied by workers' compensation on the basis that his injury was not work-related: [Workers' compensation] met me at the hospital…she was like "Sign these papers…", really aggressive… I was like…"I have a dislocated patella" and she was like "We haven't seen a doctor yet so we don't know"…Eventually…I looked right at her and said "You're leaving right now" …It's…demoralizing…the attitude she had was like, "You have to do this"…She puts pressure on you…it was really aggressive…(Bob, DJ, part-time).
(2) Taking matters into their own hands
Workers also fought back by taking matters into their own hands, which involved the worker learning how to submit claims, understand the workers' compensation system, receive adequate accommodations, contact external parties, and get medical attention. After Ken's workers' compensation claim was denied, he appealed the decision. He described how he fought to see his surgeon, after workers' compensation did not help him with the medical referral. His frustration was evident in how he described his experiences conducting these processes alone.
I…filled it [workers' compensation appeal] out myself and submitted it…I had to fight the denial on my own… write up my own thing and send it…I had to fight to go see…my surgeon again… [Workers' compensation] wouldn't…do it. I had to…track him down myself(Ken, line worker, contract).
Kobe had to find out who to talk to in each party involved with claims processes and was the middle ground of communication. He described his frustration with his workplace and workers' compensation.
As far as contacting [workers' compensation], contacting the human resource people…it was always me having to find out who I need to talk to…It became more frustrating when…I was talking to [workers' compensation] … "Don't you guys want to contact my workplace and talk to…the managers or…" (Kobe, warehouse worker, temp).
Kobe felt unsupported after workers' compensation denied his claim based on insufficient medical evidence of an injury. He felt unable to continue his claim at that point.

[Workers' compensation] …Their … stance on it was that they are not going to treat it as a tear, they are going to treat it as a strain/sprain kind of injury… Their response was that… "Our medical practitioners here are saying that it doesn't fit with the character of a tear" …I am thinking to myself… "What you guys are saying doesn't make any sense…I have an ultrasound…and an x-ray to confirm I have a tear." (Kobe, warehouse worker, temp).
(4) Quit job
Quitting their job was another way workers (n = 7) responded to procedural unfairness. Feeling disappointed and let down were the most prominent emotions in this stage. Nakeisha described how her fights regarding her injury status and RTW were draining, leading her to quit.
They [workers' compensation, RTW specialist, workplace] still kept…fighting with me and everything else…I just quit…I couldn't do it anymore… (Nakeisha, bartender, full-time minimum wage).
Yvonne tried contacting her workplace for information about her sick days (which was needed for her workers' compensation claim) but never heard back. This event contributed to her choice to finish working there. She felt let down after her long-term workplace lagged in providing her with the needed information. Terry sought support from a former workplace supervisor. When his employer did not report his injury to workers' compensation, Terry fought back by getting proof of his pain-related complaints and absences from his former supervisor and then attempting to initiate a workers' compensation claim. In response, Terry noted that his employer reported to workers' compensation that Terry never complained about pain or taking days off. Terry's anger and frustration were evident in describing how his workplace did not take responsibility.
I contacted my supervisor…She did letter up saying that…I had regularly…complained to her and… taken days off work…I thought, how dirty is that that's (emphasized) how employers are like…they don't care about the workers at all (Terry, truck driver, full-time).
Nakeisha's acquaintance helped her fight by providing expertise and connections. Her anger was evident when she talked about poor modified work conditions and lack of workers' compensation recognition of her permanent injury. This prompted her to fight back by contacting her provincial member of parliament.
My best friend's [family member] …he knows his rules, knows his everything…He's the one that got me in with the MPPs, and got the government behind me (Nakeisha, bartender, full-time minimum wage).
Other workers took legal routes. Mario started a negligence civil suit against the company he was working at when he was injured. Kobe appealed workers' compensation's decision with help from a member of a community legal clinic. He also met in front of the Human Rights Tribunal to try to come to a resolution with his two workplaces on what he financially deserves for his injury. Seth turned to legal aid lawyers to help with his appeal to the Workplace Safety and Insurance Appeals Tribunal. He was frustrated that his employer had miscommunicated to workers' compensation that his injury arose from a specific sport (that he never played), resulting in a denied claim.
(3) Quit pursuit of claim
Workers sometimes became tired of pursuing their claim and abandoned it (n = 14). After workers' compensation denied Wesley's claim by stating his injury was not workplace-related, he eventually gave up. Feeling unsupported was the prominent emotion among workers in this stage. For Wesley, the process was too complex and adversarial.

I tried asking questions, but the process was long… there was nobody to help us. The only thing that could be told was that the injury was not at my workplace…I just quit…it…couldn't go through…I quit the claim. I didn't follow-up on the claim… (Wesley, shipping and receiving worker, contract).
Many precarious workers in our study were faced with unfairness. Workers' main experiences included getting laid off during an ongoing claim, receiving inadequate modified work, having little help throughout the claims process, and not being listened to. Subsequently, we propose five stages (and common emotions) that workers went through when faced with this unfairness: (1) passivity (feeling confused), (2) fighting back (feeling angry, motivated), (3) quitting pursuit of the claim (feeling unsupported), (4) quitting jobs due to extreme pain and/or frustration (feeling disappointed), and (5) winning or getting further in fights (feeling determined, wary).
Workers have been found to "stay silent" for various reasons (i.e., concerns about how speaking up might negatively affect them and workplace relations, fear of starting disputes, going against organizational norms, being under high time pressures and workloads) [25]. How much a worker feels that it is appropriate and safe to speak up in their workplace, impacts decisions to speak up [26]. Workers may also become passive when fighting for their rights because of mental and physical exhaustion, not knowing their rights, confusion, thinking their injury is not compensable, and feeling unsupported [27]. Workers with low self-esteem may feel that they do not deserve compensation if they do not observe effort (on behalf of employers or compensation systems) going towards their work injury.
Previous research has also identified the issue of workers fighting back against unfairness. Injured workers in Ontario have described peer support groups as supportive when fighting unfairness [14]. Anger and frustration among workers have been implied in occupational rehabilitation research. Workers in Australia described fighting for accommodated work following a work injury but were then given "demeaning" duties [28]. Many workers reported they were not listened to, and their feedback was not desired or valued by their workplaces during attempts to speak up against safety hazards [29].
(5) Won or got further in fight
This final stage describes workers who won their claims or got further in their fights against procedural unfairness (n = 14). Determination, optimism, and wariness were common emotions. These complex feelings reflect workers' mixed thoughts and emotions about succeeding in their fights. Workers felt determined to succeed, optimistic if doing well, but wary due to lingering problems with handling procedural unfairness, including mental and physical exhaustion, and let-downs.
Ken's persistent complaining to workers' compensation and his workplace eventually led him to receive an MRI that he needed to prove a permanent injury and qualify for retraining. Terry described how his persistent fighting with workers' compensation allowed him to win his fight to claim benefits for permanent chronic pain and disability. Even though both workers succeeded in their claim-related fights, they remained wary and ready to possibly fight again for what they need.
Mario was optimistic when describing how his doctor provided evidence that he needed long-term disability, which helped him win his fight to receive workers' compensation benefits. Ian was similarly optimistic after he successfully displayed his pain at a workers' compensation evaluation center and won his fight to remain off work. The temp agency had not acknowledged Ian's statements about pain and asked him to return to driving forklifts-which he refused due to his pain. …When…I was evaluated, I was in extreme pain… nothing was being masked…The fact that I was able to return to the agency and say here you go…was the biggest pleasure. (Ian, forklift operator, full-time minimum wage).
After succeeding in having his condition taken seriously, workers' compensation placed Ian off work for another three months.
Discussion
The purpose of this paper is to describe how injured precarious workers responded behaviourally and emotionally to experiences of procedural unfairness in work injury and claims processes, and what these workers did next. To the best of our knowledge, no research has detailed this.
Precarious employment and work injury can place workers at risk of unfair treatment because of greater power and knowledge contrasts between employers and workers (employers have strategies for managing workers' compensation claims), challenges attributing an injury to a specific job (for workers who have concurrent jobs or change jobs often), and higher fear of job loss from speaking up (employer resistance and workers' limited voice) [8]. Precariously employed injured workers differ from other groups of injured workers due to limited access to social security and employment protections normally provided to secure, full-time workers (e.g. unemployment benefits inaccessible to workers with few hours, minimum wage not available to the self-employed), which suggests that different laws or systems may be needed to protect injured precarious workers [7]. The COVID-19 pandemic particularly affected precarious workers, who are often women and immigrants, as they were subject to labour market disadvantages including insecure job contracts, economic uncertainty and over-representation in frontline occupations leading to increased virus exposure [23,24]. It is therefore important to consider specific emotions and behaviours when looking at injured workers and workers' compensation claims.
Procedural unfairness can have adverse effects on workers, which impacts quality of life and future success. It is important to identify unfairness and its emotional, behavioral, and material effects to better understand implications for workers' compensation systems. Understanding and recognizing unfairness can equip employers, legal representatives, workers' compensation boards, and physicians, to address and prevent it.
By considering emotions and behaviours, parties involved in helping injured workers can better understand how they experience RTW processes. RTW specialists, for example, could be more aware that workers may initially be passive when experiencing what they perceive to be procedural unfairness. Being aware that this passive behaviour exists may prompt them to ask better questions about how an injured worker feels about their situation, which may, in turn, prevent workers' further perceptions of unfairness. Similarly, workers who quit their claims and jobs due to procedural unfairness may feel unsupported, disappointed, and let down. Recognizing this pattern of emotions could help physicians to provide workers with appropriate support and resources. Recognizing that workers may feel angry, frustrated, but motivated in the RTW process might better prepare injured worker representatives to take into consideration these emotions while helping workers and ask better questions. Finally, by knowing that workers may respond to unfairness in certain ways (complaining or pressuring those involved, taking matters into their own hands, or getting help from others), policymakers may design policies to better address procedural unfairness in a workers' compensation system.
Disclosure statement
No potential conflict of interest was reported by the authors.
Injured workers quitting jobs due to unfairness following an injury has also been documented. After an injury, American workers who worked long hours and night shifts were more likely to quit, be fired, and not be able to work fulltime [30]. Employers nudging injured workers to quit has also been identified. Employers in Florida and Wales were found to take advantage of workers' permanent injuries to reduce salaries, provide meaningless work, and expect workers to complete pre-injury work immediately, leaving workers feeling like "damaged goods" [28,31].
Unfairness has been associated with claims suppression: activities that limit the correct reporting of a worker's work injury [32]. Claims suppression is generally associated with employers (i.e., persuading workers to not report, underreporting severity or time off, offering to continue payment instead of reporting), although claims suppression has been reported among other parties (i.e., RTW coordinators, physicians) [32,33].
A strength of this study is our ability, via qualitative methodology, to gain a rich, contextual perspective of injured workers' situations and decision-making processes that cannot be captured using quantitative methods. By using social media, we could attract a diverse worker sample. Limitations of this study are that conclusions are drawn only from precarious workers in Ontario and lack perspective from non-precarious workers and other compensation systems. As well, our recruitment approach generated workers with primary physical injuries and so we lack insights that would be generated with a sample including work-related psychological injury. This research draws attention to new areas of enquiry: pervasiveness of issues such as workers quitting their jobs after experiencing a work injury and workers' actions following experiences of procedural unfairness in other employment contexts and insurance boards. Additionally, future research could examine relationships between age, and/or educational status of workers and their emotional and behavioural reactions to perceived unfair claims processes.
Conclusions
Literature on procedural unfairness affirms stages we identify in our analysis of how injured workers handle procedural unfairness related to their work injury. While previous literature touches upon these stages separately and briefly, our study examines them together. What happens after workers perceive unfairness in their work injury, including emotions and next steps, has not previously been explicitly examined. Additionally, our paper is one of few papers to consider specific emotions and behaviours when looking at injured workers and workers' compensation claims. | 2022-08-04T13:29:20.614Z | 2022-08-04T00:00:00.000 | {
"year": 2022,
"sha1": "b6509f7421b9a3941d43a4439e1b2037894f52c1",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10926-022-10058-3.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "516269585c663e494aa7b3a45d9f250c322e1bf3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55024740 | pes2o/s2orc | v3-fos-license | Faunal carbon flows in the abyssal plain food web of the Peru Basin have not recovered during 26 years from an experimental sediment disturbance
Future deep-sea mining for polymetallic nodules in abyssal plains will impact the benthic ecosystem, but it is largely unclear whether this ecosystem will be able to recover from mining disturbance and if so, at what time scale and to what extent. In 1989, during the ‘DISturbance and reCOLonization’ (DISCOL) experiment, a total of 22% of the surface within a large circular area of 10.8 km2 of the nodule-rich seafloor in the Peru Basin (SE Pacific) was ploughed to bury nodules and mix the surface sediment. This area was revisited 0.1, 0.5, 3, 7, and 26 years after the disturbance to assess macrofauna, megafauna and fish density and diversity. We used this unique abyssal faunal time series to develop carbon-based food web models for disturbed (sediment inside the plough tracks) and undisturbed (sediment inside the experimental area, but outside the plough tracks) sites. We developed a linear inverse model (LIM) to resolve carbon flows between 7 different feeding types within macrofauna, megafauna and fish. The total faunal biomass was always higher at the undisturbed sites compared to the disturbed sites and 26 years post-disturbance the biomass at the disturbed sites was only 54% of the biomass at undisturbed sites. Fish and subsurface deposit feeders experienced a particularly large temporal variability in biomass and model-reconstructed respiration rates making it difficult to determine disturbance impacts. Deposit feeders were least affected by the disturbance, with respiration, external predation and excretion levels only reduced by 2.6% in the sediments disturbed 26 years ago compared to the undisturbed sediments.
Introduction
Abyssal plains cover approximately 50% of the world's surface and 75% of the seafloor (Ramirez-Llodra et al., 2010). The abyssal seafloor is primarily composed of soft sediments consisting of fine-grained erosional detritus and biogenic particles (Smith et al., 2008). Occasionally, hard substrate occurs in the form of clinker from steam ships, glacial drop stones, outcrops of basaltic rock, whale carcasses, and marine litter (Amon et al., 2017; Kidd and Huggett, 1981; Radziejewska, 2014; Ramirez-Llodra et al., 2011; Ruhl et al., 2008). In some soft sediment regions, islands of hard substrate are provided by polymetallic nodules, authigenically formed deposits of metals, which grow at approximate rates of 2 to 20 mm per million years (Guichard et al., 1978; Kuhn et al., 2017). These nodules have the shape and size of cauliflower, cannon balls or potatoes, and are found on the sediment surface and in the sediment at depths between 4000 and 6000 m in areas of the Pacific, Atlantic and Indian Ocean (Devey et al., 2018; Kuhn et al., 2017).
Polymetallic nodules are rich in metals, such as nickel, copper, cobalt, molybdenum, zirconium, lithium, yttrium and rare earth elements (Hein et al., 2013), and occur in sufficient densities for potential exploitation by the commercial mining industry in the Clarion-Clipperton Fracture Zone (CCFZ; equatorial Pacific), around the Cook Islands (equatorial Pacific), in the Peru Basin (E Pacific) and in the Central Indian Ocean Basin (Kuhn et al., 2017).Extracting these polymetallic nodules during deep-sea mining operations will have severe impacts on the benthic ecosystem, such as the removal of hard substrate (i.e.nodules) and the food-rich surface sediments from the seafloor, physically causing the mortality of organisms within the mining tracks and re-settlement of resuspended particles (Levin et al., 2016;Thiel and Tiefsee-Umweltschutz, 2001).Defining regulations on deep-sea mining requires knowledge on ecosystem recovery from these activities, but to date information on these rates is not extensive (Gollner et al., 2017;Jones et al., 2017;Stratmann et al., 2018;Stratmann et al., in review;Vanreusel et al., 2016).Especially the recovery of ecosystem functions, such as food web structure and carbon (C) cycling, from deepsea mining is understudied.
Following the original definition by Bluhm (2001), we denote sites within the DEA (DISCOL Experimental Area), but not directly disturbed by the plough harrow as 'undisturbed sites' and sites that were directly impacted by the plough harrow as 'disturbed sites' (Bluhm, 2001).During subsequent visits, densities of macrofauna and megafauna were assessed, but data on meiofauna and microbial communities were only sparsely collected.Therefore, the food web models presented in this work cover a period of 1989 to 2015 and contain macrofauna, megafauna and fish.
Linear inverse modelling (LIM) is an approach that has been developed to disentangle carbon flows between food web compartments for data-sparse systems (Klepper and Van de Kamer, 1987;Vézina and Platt, 1988).It has been applied to assess differences in C and nitrogen (N) cycling in various ecosystems, including the abyssal plain food web at Station M (NE Pacific) under various particulate organic carbon (POC) flux regimes (Dunlop et al., 2016), and a comparison of food web flows between abyssal hills and plains at the Porcupine Abyssal Plain (PAP) in the north-eastern Atlantic (Durden et al., 2017).LIM is based on the principle of mass balancing various data sources (Vézina and Platt, 1988), i.e. faunal biomasses and physiological constraints, that are implemented in the model, either as equality or inequality equations, and these are solved simultaneously (van Oevelen et al., 2010).A food web model almost always includes more inequalities than equalities, i.e. it is mathematically under-determined, which implies that an infinite number of solutions will solve the models.In this case, a likelihood approach can be used to generate a large dataset of possible solutions for the model (van Oevelen et al., 2010), from which the mean and standard deviations for each flow is calculated.Food web models from different sites and/or points in time can be compared quantitatively by calculating network indices, such as the 'total system throughput' (T..) that sums all carbon flows in the food web (Kones et al., 2009).Hence, a decrease in the difference of T.. between the food webs from undisturbed and corresponding disturbed sites (ΔT..) over time is taken as a sign of ecosystem recovery following disturbance.
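To make the LIM principle above concrete, the following minimal Python sketch sets up a toy mass balance (one equality), a few made-up data ranges (inequalities), and samples the under-determined solution space. It is only an illustration of the idea: the flow names, bounds and the simple rejection sampler are not those of the DISCOL models, which were solved with the dedicated R software ('LIM' package) mentioned later in the Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flows x = [ingestion, respiration, faeces, predation]; all values are invented.
A = np.array([[1.0, -1.0, -1.0, -1.0]])   # mass balance: ingestion - respiration - faeces - predation = 0
b = np.array([0.0])

G = np.vstack([np.eye(4),                 # all flows >= 0
               [[1.0, 0.0, 0.0, 0.0]],    # ingestion >= 0.5 (made-up data constraint)
               [[-1.0, 0.0, 0.0, 0.0]]])  # ingestion <= 2.0
h = np.array([0.0, 0.0, 0.0, 0.0, 0.5, -2.0])

# Solutions have the form x = x_p + N @ t, with N spanning the null space of A.
x_p = np.linalg.lstsq(A, b, rcond=None)[0]
N = np.linalg.svd(A)[2][1:].T             # orthonormal null-space basis (4 x 3)

# Simple rejection sampling (the real LIM uses a dedicated MCMC sampling algorithm).
samples = []
while len(samples) < 1000:
    T = rng.uniform(-3.0, 3.0, size=(20000, N.shape[1]))
    X = x_p + T @ N.T
    ok = np.all(X @ G.T >= h - 1e-9, axis=1)
    samples.extend(X[ok])
samples = np.array(samples)[:1000]

# Each flow is then summarised as mean +/- standard deviation, as for the LIM output.
for name, m, s in zip(["ingestion", "respiration", "faeces", "predation"],
                      samples.mean(axis=0), samples.std(axis=0)):
    print(f"{name:12s} {m:5.2f} +/- {s:4.2f}")
```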
In this study, benthic food-web models were developed for undisturbed sites and disturbed sites at DISCOL to assess whether faunal biomass and trophic composition of the food webs varied and/or converged between the two sites over time. The model outcomes were compared with conceptual and qualitative predictions on benthic community recovery from polymetallic nodule mining published by Jumars (1981). Additionally, it was investigated how ΔT.. developed over time to infer the recovery rate of C flows from experimental deep-sea disturbance in the Peru Basin.
Data availability
Macrofauna, megafauna and fish density data (mean±std; ind.m -2 ) for the first four cruises (PD0.1 to PD7) were extracted from the original papers (Bluhm, 2001 annex 2.8;Borowski, 2001;Borowski and Thiel, 1998) and methodological details can be found in those papers.In brief, macrofauna samples (>500 μm size fraction) were collected with a 0.25 m -2 box-corer and densities of megafauna and fish were assessed on still photos and videos taken with a towed "Ocean Floor Observation System" (OFOS) underwater camera system.During the PD26 cruise (RV Sonne cruise SO242-2; Boetius, 2015), macrofauna were collected with a square 50 × 50 × 60 cm box-corer (disturbed sites: n = 3; undisturbed sites: n = 7) and the upper 5 cm of sediment was sieved on a 500 μm sieve (Greinert, 2015).All organisms retained on the sieve were preserved in 96% undenaturated ethanol on board (Greinert, 2015) and were sorted and identified ashore to the same taxonomic level as the previous cruises under a stereomicroscope.Megafauna and fish density during the PD26 cruise was acquired by deploying the OFOS (Boetius, 2015).Every 20 s, the OFOS automatically took a picture of the seafloor at an approximate altitude of 1.5 m above the seafloor (Boetius, 2015;Stratmann et al., in review) resulting in 1,740 images of plough marks (disturbed sites) and 6,624 images from undisturbed sites (Boetius, 2015).A subset of 300 pictures from the disturbed sites (surface area: 1,440.6 m 2 ) and 300 pictures from the undisturbed sites (surface area: 1,420.4m 2 ) were randomly selected from the original set of pictures and annotated using the open-source annotation software PAPARA(ZZ)I (Marcon and Purser, 2017).Megafauna were identified to the same taxonomic levels as for the previous megafauna studies conducted within the DEA (Bluhm, 2001), whereas fish were identified to genus level using the CCZ-species atlas (www.ccfzatlas.com).
The above-mentioned density data collected for macrofauna, megafauna and fish were used to build food web models to resolve carbon fluxes; hence, all faunal density data needed conversion into carbon units before they can be used in the food web model.Converting density data to carbon biomass values was challenging in the current study, as few to no conversion factors for deep-sea fauna are available in the literature.Below, we describe the approach we used to tackle this hurdle for macrofauna, megafauna and fish.
In the case of a macrofaunal specimen, measuring the carbon content requires its complete combustion, which means that the specimen cannot be kept as a voucher specimen in scientific collections. The macrofauna samples collected for this study are part of the Biological Research Collection of Marine Invertebrates (Department of Biology & Centre for Environmental and Marine Studies, University of Aveiro, Portugal) and were therefore not sacrificed. Instead, we used the C conversion factors of macrofauna specimens previously collected within the framework of a pulse-chase experiment in the Clarion-Clipperton Zone (CCZ, NE Pacific), in which a deep-sea benthic lander (3 incubation chambers à 20 × 20 × 20 cm) was deployed at water depths between 4050 and 4200 m (Sweetman et al., in review). The upper 5 cm of the sediment of the incubation chambers was sieved on a 300 μm sieve and preserved in 4% buffered formaldehyde solution. Ashore, the samples were sorted and identified under a dissecting microscope and the biomass of individual freeze-dried, acidified specimens was determined with a Thermo Flash EA 1112 elemental analyser (EA; Thermo Fisher Scientific, USA) to give the individual carbon content in mmol C ind-1. The macrofauna density data (ind. m-2) from all cruises were converted to macrofauna biomass (mmol C m-2) by multiplying each taxon-specific density (ind. m-2) with the mean taxon-specific individual biomass value for macrofauna (mmol C ind-1; Table 1). Subsequently, the biomass data of all taxa with the same feeding type (Table 1) were summed to calculate the biomass of each macrofaunal compartment (mmol C m-2; Supplement 1, Figure 2).
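A minimal sketch of the density-to-biomass conversion and feeding-type aggregation described above; taxon names, feeding-type assignments and all numbers are invented for illustration and do not reproduce Table 1.

```python
# Convert taxon-specific densities (ind. m-2) to carbon biomass (mmol C m-2) and
# aggregate into feeding-type compartments. All values below are invented.
density = {"Polychaeta_SDF": 12.0, "Tanaidacea": 4.0, "Isopoda": 2.5}              # ind. m-2
ind_biomass = {"Polychaeta_SDF": 0.002, "Tanaidacea": 0.0008, "Isopoda": 0.0015}   # mmol C ind-1
feeding_type = {"Polychaeta_SDF": "PolSDF", "Tanaidacea": "DF", "Isopoda": "DF"}

compartment_biomass = {}
for taxon, dens in density.items():
    biomass = dens * ind_biomass[taxon]              # mmol C m-2 for this taxon
    ft = feeding_type[taxon]
    compartment_biomass[ft] = compartment_biomass.get(ft, 0.0) + biomass

print(compartment_biomass)   # feeding-type totals, e.g. PolSDF ≈ 0.024, DF ≈ 0.007 mmol C m-2
```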
The megafauna density data (ind. m-2) of the time series were converted to biomass (mmol C m-2) by multiplying the taxon-specific density with a taxon-specific mean biomass per megafauna specimen (mmol C ind-1; Table 1). To determine this taxon-specific biomass per megafauna specimen, size measurements were used as follows. The 'AUV Abyss' (Geomar Kiel), equipped with a Canon EOS 6D camera system with 8-15 mm f4 fisheye zoom lens and 24 LED arrays for lighting (Kwasnitschka et al., 2016), flew approximately 4.5 m above the seafloor at a speed of 1.5 m s-1 and took one picture every second (Greinert, 2015). Machine vision processing was used to generate a photo-mosaic (Kwasnitschka et al., 2016). A subsample covering an area of 16,206 m2 of the mosaic was annotated using the web-based annotation software 'BIIGLE 2.0' (Langenkämper et al., 2017). The length of all megafauna taxa for which data were available from previous cruises was measured using the approach presented in Durden et al. (2016). Briefly, depending on the taxon, either body length, the diameter of the disk, or the length of an arm were measured on the photo-mosaic and converted into biomass per individual (g ind-1) using the relationship between measured body dimensions (mm) and preserved wet weight (g ind-1) (Durden et al., 2016). Subsequently, the preserved wet weight (g ind-1) was converted to fresh wet weight (g ind-1) using conversion factors from Durden et al. (2016) and to organic carbon (g C ind-1 and mmol C ind-1) using the taxon-specific conversion factors presented in Rowe (1983). For the taxa Cnidaria and Porifera no conversion factors were available. Therefore, taxon-specific individual biomass values were extracted from a study from the CCZ (Tilot, 1992). The individual biomass of Bryozoa and Hemichordata was calculated as the average biomass of an individual deep-sea megafauna organism (B, mmol C ind-1) at 4100 m depth, following from the ratio of the regressions for total biomass and abundance by Rex et al. (2006). Following the approach applied to the macrofauna dataset, individual biomasses of taxa with similar feeding types (Table 1) were summed to determine the biomass of the megafauna food-web compartments (mmol C m-2; Supplement 1; Figure 1).
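The megafauna conversion chain (measured body dimension → preserved wet weight → fresh wet weight → carbon) can be sketched as below. The power-law coefficients and conversion factors are placeholders, not the taxon-specific values of Durden et al. (2016) and Rowe (1983).

```python
# Illustrative conversion chain for a single megafauna specimen; all coefficients
# are invented placeholders and should not be read as the published factors.

def preserved_wet_weight(length_mm, a=1e-4, b=2.5):
    """Hypothetical power-law relation between body dimension (mm) and preserved wet weight (g)."""
    return a * length_mm ** b

PRESERVED_TO_FRESH = 1.2       # assumed correction for preservation losses
G_C_PER_G_FRESH = 0.03         # assumed organic carbon fraction of fresh wet weight
MMOL_PER_G_C = 1000.0 / 12.0   # g C -> mmol C

def specimen_mmol_c(length_mm):
    fresh = preserved_wet_weight(length_mm) * PRESERVED_TO_FRESH
    return fresh * G_C_PER_G_FRESH * MMOL_PER_G_C

print(round(specimen_mmol_c(80.0), 2))   # illustrative carbon content of one 80 mm specimen
```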
Individual biomass of fish was calculated using the allometric length-weight relationship for Ipnops agassizii, W = a × L^b, where W is the wet weight (g), L is the length, a = 0.0049 and b = 3.03 (Froese and Pauly, 2017; Froese et al., 2014), as Ipnops sp. was the most abundant deep-sea fish observed at the DEA (60% of total fish density at undisturbed and 40% of total fish density at disturbed sites). The length (mm) of all Ipnops sp. specimens was measured on the annotated 600 pictures (300 pictures from the undisturbed sites, 300 pictures from the disturbed sites) in PAPARA(ZZ)I (Marcon and Purser, 2017) using three laser points captured in each image (distance between laser points: 0.5 m (Boetius, 2015)). The wet weight (g) was converted to dry weight and subsequently to carbon content (mmol C ind-1) using the taxon-specific conversion factors presented in Brey et al. (2010).
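A small sketch of the fish biomass calculation, assuming the standard FishBase length-weight form W = a·L^b with the a and b values quoted above (length in cm and weight in g under that convention, which is an assumption here); the dry-weight and carbon conversion factors are placeholders rather than the Brey et al. (2010) values.

```python
def ipnops_wet_weight(length_cm, a=0.0049, b=3.03):
    """Wet weight (g) from length (cm) via the allometric relation W = a * L**b."""
    return a * length_cm ** b

# Placeholder conversion factors; the study uses taxon-specific factors from Brey et al. (2010).
DRY_PER_WET = 0.2                 # assumed dry weight : wet weight ratio
MMOL_C_PER_G_DRY = 0.4 / 0.012    # assumed 40% carbon by dry mass; 12 g C per mol

def ipnops_biomass_mmolC(length_cm):
    wet = ipnops_wet_weight(length_cm)
    return wet * DRY_PER_WET * MMOL_C_PER_G_DRY

print(round(ipnops_wet_weight(12.0), 2))      # ≈ 9.1 g wet weight for a 12 cm fish
print(round(ipnops_biomass_mmolC(12.0), 1))   # illustrative carbon content (mmol C)
```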
Food web structure
The faunal biomass was further divided into feeding guilds in order to define the food web compartments of the model.Fish (Osteichthyes) were classified as scavenger/ predator and invertebrate macrofauna and megafauna were divided into filter/suspension feeders (FSF), deposit feeders (DF), carnivores (C) and omnivores (OF) (Figure 2).Since feeding types are well described for polychaetes (Jumars et al., 2015), we made a further detailed classification of the macrofaunal polychaetes into suspension feeders (PolSF), surface deposit feeders (PolSDF), subsurface deposit feeders (PolSSDF), carnivores (PolC), and omnivores (PolOF).
External carbon sources that were considered in the model included suspended detritus in the water column (Det_w), labile (lDet_s) and semi-labile detritus (sDet_s) in the sediment.Suspended detritus was considered a food source for polychaete, macrofaunal and megafaunal suspension feeders.Labile and semi-labile sedimentary detritus was a source for deposit-feeding (macrofauna, megafauna and fish) that died in the food web and was also the food source of omnivores.
Carbon losses from the food web were respiration to dissolved inorganic carbon (DIC), predation on macrofauna, megafauna and fish by pelagic/ benthopelagic fish, scavenging on carcasses by pelagic/ benthopelagic scavengers and faeces production by all faunal compartments.
Literature constraints
The carbon flows between faunal compartments are constrained by the implementation of various minimum and maximum process rates and conversion efficiencies as inequalities in all models, which are described here. Assimilation efficiency (AE) is calculated as AE = (I − F) / I, where I is the ingested food and F are the faeces (Crisp, 1971). The min-max range was set from 0.62 to 0.87 for macrofauna and polychaetes (Stratmann et al., in prep.), from 0.48 to 0.80 for megafauna (Stratmann et al., in prep.) and from 0.84 to 0.87 for fish (Drazen et al., 2007).
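As an illustration, the assimilation-efficiency ranges above translate into linear inequality constraints linking the ingestion and faeces flows of a compartment; the sketch below only restates that algebra and is not taken from the model code.

```python
# Assimilation efficiency AE = (I - F) / I (Crisp, 1971).
# A min-max AE range therefore bounds the faeces flow linearly in the ingestion flow:
#   AE_min <= (I - F)/I <= AE_max   <=>   (1 - AE_max) * I <= F <= (1 - AE_min) * I

AE_BOUNDS = {"macrofauna": (0.62, 0.87), "megafauna": (0.48, 0.80), "fish": (0.84, 0.87)}

def faeces_bounds(ingestion, group):
    ae_min, ae_max = AE_BOUNDS[group]
    return (1.0 - ae_max) * ingestion, (1.0 - ae_min) * ingestion

low, high = faeces_bounds(1.0, "megafauna")   # for 1 mmol C m-2 d-1 ingested
print(round(low, 2), round(high, 2))          # 0.2 ... 0.52 mmol C m-2 d-1 as faeces
```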
Linear inverse model solution and network index
A food web model with all compartments present in the food web, like e.g. the PD26 food web model for the undisturbed site, consists of 147 carbon flows with 14 mass balances, i.e. food-web compartments, and 76 data inequalities leading to a mathematically under-determined model (14 equalities vs. 147 unknown flows).Therefore, the LIMs were solved with the R package 'LIM' (van Oevelen et al., 2010) in R (R-Core-Team, 2016) following the likelihood approach (van Oevelen et al., 2010) to quantify the mean and standard deviations of each of the carbon flows from a set of 100,000 solutions.This set was sufficient to guarantee the convergence of mean and standard deviation within a 2.5% deviation.
The network index 'total system throughput' (T..) was calculated with the R-package 'NetIndices' (Kones et al., 2009) for each of the 100,000 model solutions and subsequently summarized as mean ± standard deviation.
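Computing T.. amounts to summing every flow in the network, as in this minimal sketch over an invented flow matrix (the 'NetIndices' package computes this, plus many other indices, for the full model output).

```python
import numpy as np

# Flow matrix F[i, j] = carbon flow from compartment i to compartment j (mmol C m-2 d-1).
# Values are illustrative only; external sources/sinks are included as extra rows/columns.
compartments = ["lDet_s", "DF", "C", "DIC"]
F = np.array([
    [0.00, 0.30, 0.00, 0.00],   # labile detritus eaten by deposit feeders
    [0.00, 0.00, 0.05, 0.10],   # deposit feeders: preyed upon, respiration to DIC
    [0.00, 0.00, 0.00, 0.02],   # carnivores: respiration to DIC
    [0.00, 0.00, 0.00, 0.00],
])

total_system_throughput = F.sum()   # 'T..': sum of all carbon flows in the food web
print(total_system_throughput)      # 0.47
```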
Statistical analysis
Statistical differences between compartment biomasses of the undisturbed vs. disturbed sites for the same sampling event (PD0.1, PD0.5, PD3, and PD7; PD26 was omitted due to a lack of megafauna replicates) were assessed by calculating Hedge's d (Hedges and Olkin, 1985a), which is especially suitable for small sample sizes (Koricheva et al., 2013). In this calculation, x̄E is the mean of the experimental group (i.e. the biomass at disturbed sites of a particular year), x̄C is the mean of the control group (i.e. the biomass at undisturbed sites of the respective year), sE and sC are the standard deviations of the corresponding groups, and nE and nC are the sample sizes of the corresponding groups. The variance of Hedge's d, σd², was estimated following Koricheva et al. (2013). The weighted Hedge's d and the estimated variance (Hedges and Olkin, 1985b) of the total biomass of all compartments of the same sampling event were then calculated, with σd+² = 1/Σ(1/σdi²).
Following Cohen (1988)'s rule of thumb for effect sizes, Hedge's d=|0.2|signifies a small experimental effect, implying that the biomass of the food-web compartments is similar between the disturbed and undisturbed sites.When Hedge's d=|0.5|, the effect size is medium, hence there is moderate difference, and when Hedge's d=|0.8|, the effect size is large, i.e. there is a large difference between the biomass of the compartments between sites.
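The effect-size calculation can be sketched as follows, using the standard small-sample Hedges' d formulation (pooled standard deviation with a correction factor) and the inverse-variance weighted mean consistent with σd+² = 1/Σ(1/σdi²) given above; the authors' exact expressions may differ in detail, and the input numbers are invented.

```python
import numpy as np

def hedges_d(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Standard small-sample Hedges' d (pooled SD with correction factor) and its variance."""
    df = n_e + n_c - 2
    s_pooled = np.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)            # small-sample correction
    d = (mean_e - mean_c) / s_pooled * j
    var_d = (n_e + n_c) / (n_e * n_c) + d**2 / (2.0 * (n_e + n_c))
    return d, var_d

def weighted_hedges_d(ds, var_ds):
    """Inverse-variance weighted effect size, with var(d+) = 1 / sum(1 / var_i)."""
    ds, var_ds = np.asarray(ds), np.asarray(var_ds)
    var_dplus = 1.0 / np.sum(1.0 / var_ds)
    d_plus = var_dplus * np.sum(ds / var_ds)
    return d_plus, var_dplus

# Made-up biomass summaries for two compartments (disturbed vs. undisturbed)
d1, v1 = hedges_d(2.1, 0.5, 3, 3.4, 0.6, 7)
d2, v2 = hedges_d(0.8, 0.2, 3, 1.1, 0.3, 7)
print(weighted_hedges_d([d1, d2], [v1, v2]))
```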
The network index T.. was compared between the undisturbed and disturbed sites of the same sampling event by assessing the fraction of the T.. values of the 100,000 model solutions of the undisturbed food web that were larger than the T.. values of the 100,000 model solutions of the disturbed food web.When this fraction is >0.95, the difference in 'total system throughput' between the two food-webs from the same sampling event is considered significantly different (van Oevelen et al., 2011), indicating that the carbon flows in the food web from that specific sampling event have not recovered from the experimental disturbance.
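A minimal sketch of this comparison; here the two solution sets are compared element-wise, which is one possible reading of the criterion above, and the T.. values are simulated rather than taken from the models.

```python
import numpy as np

def fraction_larger(t_undisturbed, t_disturbed):
    """Fraction of model solutions in which T.. at the undisturbed site exceeds
    T.. at the disturbed site; > 0.95 is read as a significant difference."""
    return np.mean(np.asarray(t_undisturbed) > np.asarray(t_disturbed))

rng = np.random.default_rng(1)
t_u = rng.normal(0.50, 0.02, 100_000)   # illustrative T.. solutions, undisturbed
t_d = rng.normal(0.42, 0.02, 100_000)   # illustrative T.. solutions, disturbed
frac = fraction_larger(t_u, t_d)
print(frac, frac > 0.95)                # significant difference -> not recovered
```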
Food-web structure and trophic composition
Total faunal biomass was always higher at the undisturbed sites as compared to the disturbed sites from the same sampling year (Figure 1, Supplement 1), and ranged from a minimum of 5.45±1.27 mmol C m-2 (PD0.1) to a maximum of 22.33±3.40 mmol C m-2 (PD3) at the undisturbed sites and from a minimum of 1.36±1.24 mmol C m-2 (PD0.1) to a maximum of 15.82±1.99 mmol C m-2 (PD3) at the disturbed sites. At PD0.1 the total faunal biomass at the disturbed sites was only 25% of the total faunal biomass at the undisturbed sites, whereas at PD3 the total faunal biomass at the disturbed sites was 71% of the total faunal biomass at the undisturbed sites. At PD26, the faunal biomass at the disturbed sites was 54% of the biomass at the undisturbed sites. The absolute weighted Hedge's d |d+| of all faunal compartment biomasses for PD0.1 to PD7 ranged from 0.053±0.019 at PD0.5 to 0.075±0.019 (Supplement 2), indicating a strong experimental effect and therefore that biomasses of all faunal compartments did not recover over the period analysed (PD0.1 to PD7).
The faunal biomass at both the undisturbed and disturbed sites from PD0.1 to PD7 was dominated by deposit feeders (from 63% at undisturbed PD0.1 to 83% at disturbed PD0.5 and disturbed PD3) (Figure 3).In contrast, at the undisturbed sites of PD26, the largest contribution to total faunal biomass was from filter-and suspension feeders (44%), whereas deposit feeders only contributed 35%.At the disturbed sites of PD26, deposit feeders had the highest biomass (61%), followed by carnivores (19%) and filter-and suspension feeders (14%).
Faunal respiration (mmol C m -2 d -1 ) ranged from 6.02×10 -3 ±6.75×10 -5 (disturbed sites, PD0.5) to 3.92×10 -2 ±3.69×10 -4 (undisturbed sites, PD3).During the twenty-six years after the DISCOL experiment, modelled faunal respiration was always higher at undisturbed sites as compared to disturbed sites (Table 2, Figure 4).Over time, non-polychaete macrofauna contributed least to total faunal respiration (Table 2), except at the disturbed sites of PD0.5 and at both sites of PD3.During this PD3 sampling campaign, macrofauna contributed 49.97% at the undisturbed sites and 58.35% at the disturbed sites to the total faunal respiration.Polychaetes respired between 18.59% of the total fauna respiration at the undisturbed sites at PD26 and 77.61% of the total fauna respiration at the disturbed sites at PD0.5.The megafauna respiration contribution was highest at PD26, where they respired 64.95% of the total faunal respiration at the disturbed sites and 78.67% of the total faunal respiration at the undisturbed sites.The contribution of fish to total faunal respiration was always <2%.Besides respiration, faeces production contributed between 20.07% at disturbed PD3 and 34.65% at disturbed PD0.1 to total carbon outflow from the food web (Figure 4).The contribution of the combined outflow of predation by external predators and scavengers on carcasses to the total C loss from the food web ranged from 50.48% at disturbed PD7 to 65.33% at disturbed PD0.1.
Discussion
This study assessed the evolution of the food web structure and the ecosystem function 'faunal C cycling' in an abyssal nodule-rich soft-sediment ecosystem following an experimental sediment disturbance. By comparing a time series over 26 years with food web models (undisturbed vs. disturbed sites), we show that the total faunal biomass at the disturbed sites was still only about half of the total faunal biomass at the undisturbed sites 26 years after the disturbance. Furthermore, the role of the various feeding types in the carbon cycling differs and the 'total system throughput' T.., i.e. the sum of all carbon flows in the food web, was still significantly lower at the disturbed sediment compared to the undisturbed sediment after 26 years.
Model limitations
Our results are unique as they allowed, for the first time, an assessment of the recovery of C cycling in benthic deep-sea food webs from a small-scale sediment disturbance in polymetallic nodule-rich areas. However, the models come with limitations. The standard procedures to assess megafauna densities have evolved during the 26 years of post-disturbance monitoring. The OFOS system used 26 years after the initial DISCOL experiment took pictures automatically every 20 s from a distance of 1.5 m above the seafloor (Boetius, 2015; Stratmann et al., in review). By contrast, the OFOS system used in former cruises was towed approximately 3 m above the seafloor and pictures were taken selectively by the operating scientists (Bluhm and Gebruk, 1999). Therefore, the procedure used in the former cruises very likely led to an overestimation of rare and charismatic megafauna, and probably to an underestimation of dominant fauna and organisms of small size (<3 cm) for PD0.1 to PD7 as compared to PD26.
Previous cruises to the DEA focused on monitoring changes in faunal density and diversity, but not on changes in biomass.
Hence, a major task in this study was to find appropriate conversion factors to convert density into biomass.However, no individual biomass data for macrofauna taxa were available for the Peru Basin, so we used data from sampling stations of similar water depths in the eastern Clarion-Clipperton Zone (CCZ, NE Pacific; Sweetman et al., in review).As organisms in deep-sea regions with higher organic carbon input are larger than their counterparts from areas with lower organic carbon input (McClain et al., 2012), using individual biomass data from the CCZ, a more oligotrophic region than the Peru Basin (Haeckel et al., 2001;Vanreusel et al., 2016) might have led to an underestimation of the biomass for macrofauna.However, this has likely limited impact on the interpretation of the comparative results within the time series, because the same methodology was applied throughout the time series dataset.Moreover, the determination of megafauna biomass was also difficult as no size measurements were taken from megafauna individuals during the PD0.1 to PD7 cruises.Consequently, it was not possible to detect differences in size classes between disturbed and undisturbed sediments or recruitment events in e.g.echinoderms (Ruhl, 2007) following the DISCOL experiment.Instead, we used fixed conversion factors for the different taxa for the entire time series.
Feeding-type specific differences in recovery
Eight years before the experimental disturbance experiment was conducted at the DISCOL area, Jumars (1981) qualitatively predicted the response of different feeding types in the benthic community to polymetallic nodule removal.Although several seabed test mining or mining simulations were performed since then (Jones et al., 2017), no study compared or verified these conceptual predictions on feeding-type specific differences in recovery from deep-sea mining.As few comparative studies are available, we compare here our food-web model results with those of the conceptual model predictions for scavengers, surface and subsurface deposit feeders and suspension feeders by Jumars (1981).Jumars (1981) predicted that organisms inside the mining tracks would be killed either by the fluid shear of the dredge/ plough or by abrasion and increased temperatures inside the rising pipe with a mortality rate of >95%.In contrast, the impact on mobile and sessile organisms in the vicinity of the tracks would depend on their feeding type (Jumars, 1981).
The author also predicted that the density of mobile scavengers, such as fish and lysianassid amphipods would rise shortly after the disturbance in response to the increased abundance of dying or dead organisms within the mining tracks.Indeed, when plotting the respiration of fish (in mmol C m -1 d -1 ) normalized to the fish respiration at the undisturbed sediment at PD0.1 over time, the respiration for the undisturbed sediment increased steeply until PD3 and dropped subsequently (Figure 6).
However, experiments with baits at PAP and the Porcupine Seabight (NE Atlantic) showed that the scavenging deep-sea fish Coryphaenoides armatus intercept bait within 30 min (Collins et al., 1999) and stayed at the food fall for 114±55 min (Collins et al., 1998).Hence, it is very likely that this rise in fish respiration at the undisturbed sediment 0.5 years after the DISCOL is a result of natural variability as opposed to the predicted rise in scavenger density and/ or biomass caused by the mining activity.At the disturbed sediment, no fish were detected at PD0.1 or PD0.5, which could be related to lack of prey in a potential predator-prey relationship (Bailey et al., 2006).However, because of the relatively small area of disturbed sediment (only 22% of the 10.8 km 2 of sediment were ploughed (Thiel and Schriever, 1989)), the low density of deep-sea fish (e.g. between 7.5 and 32 ind.ha -1 of the dominant fish genus Coryphaenoides sp. at Station M (Bailey et al., 2006)) and the high motility of fish, this observation may be coincidental.Jumars (1981) predicted that, on a short term, subsurface deposit feeders outside the mining tracks would be the least impacted feeding type, because of their relative isolation from the re-settled sediment, and their relative independence of organic matter on the sediment surface, whereas subsurface deposit feeders inside the mining tracks would experience high mortality.For the long-term recovery, the author pointed to the dependence of subsurface deposit feeders on bacterial production in the sediment covered with re-resettled sediment.In our food web model, sub-surface and surface deposit feeders were grouped into the deposit feeder category, except for polychaetes, for which we kept the surface-subsurface distinction.The biomass of PolSSDF fluctuated by one order of magnitude over the 26-year time series and had high biomass values at the undisturbed PD0.1 site, the disturbed PD3 sites and at both sites at PD7.The normalized respiration of PolSSDF also showed strong fluctuations at the undisturbed and disturbed sites over time (Figure 6) indicating a large natural variability or variable sampling results.Such temporal dynamics in deep-sea macrofauna were detected at Station M, where the density of several dominating metazoan macrofauna increased eight months after a peak in POC flux was measured at 50 and 600 m above the seafloor (Drazen et al., 1998).Hence, Jumars (1981) predictions for sub-surface deposit feeders could not be tested, provided the natural fluctuations in PolSSDF densities that were used to calculate biomass.Jumars (1981) anticipated that surface deposit feeders would suffer more strongly from deep-sea mining activities compared to sub-surface deposit feeders because the rate of sediment deposition would increase inside and beyond mining tracks, with this newly settling sediment altering the sediment composition and food concentration in the sediment.Indeed, the recovery of holothurian densities at the DEA was probably delayed owing to unfavourable food conditions (Stratmann et al., in review).
Nevertheless, deposit feeders seem to have advantages during the recovery from the DISCOL disturbance experiment.When comparing the contribution of deposit feeders from all size classes (macrofauna, polychaetes, megafauna) to respiration, predation by external predators and faeces production to the contribution of omnivores, filter-and suspension feeders and carnivores, their contribution was always higher at the disturbed site compared to the undisturbed site of the same sampling event.However, owing to the overall lower biomass inside the disturbed area compared to the undisturbed area, the absolute carbon respiration (in mmol C m -2 d -1 ) remained lower for deposit feeders at the disturbed site compared to the corresponding undisturbed site, even after 26 years when this difference was 2.6%.Jumars (1981) expected that the suspension feeders outside the mining tracks would be negatively affected during the presence of the sediment plumes and/ or as long as their filtration apparatus was clogged by sediment.This "clogging" hypothesis could not be tested here, because the models did not resolve these unknown changes in faunal physiology, but could only assess carbon cycling differences associated with differences in biomass.Furthermore, Jumars (1981) anticipated that the recovery of nodule-associated organisms, such as filter and suspension feeding Porifera, Antipatharia or Ascidiacea (Vanreusel et al., 2016) would require more than 10,000 years, owing to the slow growth rate of polymetallic nodules (Guichard et al., 1978;Kuhn et al., 2017) and the removal and/ or burial of the nodules.Directly after the initial DISCOL disturbance event, the respiration rate of filter and suspension feeders at the disturbed sediment was only 1% of the respiration rate of this feeding type at the undisturbed sediment.After 26 years, the relative difference in the filter and suspension feeding respiration rate was still 80%.Part of this difference at PD26 resulted from the presence of a single specimen of Alcyonacea with a biomass of 4.71 mmol C m -2 at the undisturbed site.However, even if we ignore this Alcyonacea specimen in the model, the respiration of suspension and filter feeding in the disturbed site would still be 71% lower compared to the undisturbed site, indicating a slow recovery of this feeding group.
To summarize the comparison of modelled potential recovery of the different feeding types with the predictions by Jumars (1981), scavenging and predatory fish at the undisturbed sediment followed first the predicted density pattern, though this might also have been related to natural variability.After three years, however, the fish contribution to carbon cycling was lower than expected from the predictions.Owing to an apparently strong natural variability in polychaete subsurface deposit feeder biomass, the recovery prognosis for subsurface deposit feeders could not be tested.Furthermore, it could not be assessed whether surface deposit feeders were more strongly affected by the mining activity than subsurface deposit feeders.In general, the time series analysis showed that deposit feeders likely benefited from the disturbance experiment in comparison to other feeding types.Confirming Jumars (1981) prediction, the activity of filter and suspension feeders in the food web did not recover within 26 years.
Conclusion
Deep-sea mining will negatively impact the benthic ecosystem of abyssal plains. It is therefore important to be able to estimate how long the recovery of the ecosystem after a deep-sea mining operation will take. This study used the linear inverse modelling technique to compare the carbon flows between different food web compartments at undisturbed and disturbed sites. Twenty-six years after the disturbance, the disturbed sites still reached only about half (54%) of the total faunal biomass at the undisturbed sites, and food-web activity remained significantly lower. Deposit feeders were the least impacted by the sediment disturbance, with less than 3% relative difference in total carbon loss (i.e. respiration, external predation and faeces production) between undisturbed and disturbed sites after 26 years. In contrast, filter and suspension feeders did not recover at all and the relative difference in respiration rate was 79%. Overall, it can be concluded that ecosystem functioning (as measured by total carbon cycling) within the macrofauna, megafauna and fish has not recovered 26 years after the experimental disturbance.
Table 1. Taxon-specific biomass per individual (mmol C ind-1) for macrofauna and megafauna including the specific feeding types. Macrofauna biomass data are based on macrofauna specimens collected in the abyssal plains of the Clarion-Clipperton Zone (NE Pacific) (Sweetman et al., in review). In contrast, megafauna biomass was estimated by converting size measurements of specific body parts of organisms from the DEA that were acquired using photo-annotation into preserved wet weight per organism using the relationships presented in Durden et al. (2016). Subsequently the preserved wet weight was converted into fresh wet weight and biomass following the conversions presented in Durden et al. (2016) and Rowe (1983).
Whenever no conversion factors for a specific taxon were reported in Durden et al. (2016), mean taxon-specific biomass data per individual were extracted from Tilot (1992) for the CCZ.
Figure 2. Simplified schematic representation of the food web structure that forms the basis of the linear inverse model (LIM). All compartments inside the box were part of the food web model, whereas compartments outside the black box were only considered as carbon influx or efflux, but were not directly modelled. In order to simplify the graph, for macrofauna, polychaetes and megafauna, only feeding types were presented and no size classes. Solid black arrows represent the carbon flux between food-web compartments and black dashed arrows represent the influx of carbon to the model. Blue-dotted arrows show the loss of carbon from the food web via respiration to DIC. The red dashed arrows indicate the loss of carbon from the food web as faeces and as predation by pelagic/benthopelagic fish, and the yellow-dashed arrow indicates the reduction of the carcass pool due to scavenging by pelagic/benthopelagic fish.
Figure 6. Feeding-type related differences in the recovery of faunal respiration (mmol C m-2 d-1) over time following the DISCOL disturbance experiment. Due to a lack of pre-disturbance respiration rates (T0), the respiration rate for each feeding type (filter and suspension feeders = FSF, surface deposit feeders = SDF, subsurface deposit feeders = SSDF, fish) is standardized to the respective feeding-type-specific respiration rate at the undisturbed sediment 0.1 years post-disturbance. The respiration rate for filter and suspension feeders includes the respiration of macrofaunal, polychaete and megafaunal filter and suspension feeders. | 2018-12-12T15:49:19.747Z | 2018-04-09T00:00:00.000 | {
"year": 2018,
"sha1": "20c15d1a523867d28611f2d83c43812e3e752609",
"oa_license": "CCBY",
"oa_url": "https://www.biogeosciences.net/15/4131/2018/bg-15-4131-2018.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "20c15d1a523867d28611f2d83c43812e3e752609",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
258971945 | pes2o/s2orc | v3-fos-license | Relevance of hazards in exoskeleton applications: a survey-based enquiry
Exoskeletons are becoming the reference technology for assistance and augmentation of human motor functions in a wide range of application domains. Unfortunately, the exponential growth of this sector has not been accompanied by a rigorous risk assessment (RA) process, which is necessary to identify the major aspects concerning the safety and impact of this new technology on humans. This situation may seriously hamper the market uptake of new products. This paper presents the results of a survey that was circulated to understand how hazards are considered by exoskeleton users, from research and industry perspectives. Our analysis aimed to identify the perceived occurrence and the impact of a sample of generic hazards, as well as to collect suggestions and general opinions from the respondents that can serve as a reference for more targeted RA. Our results identified a list of relevant hazards for exoskeletons. Among them, misalignments and unintended device motion were perceived as key aspects for exoskeletons’ safety. This survey aims to represent a first attempt in recording overall feedback from the community and contribute to future RAs and the identification of better mitigation strategies in the field.
Introduction
In the last two decades, exoskeletons have become a promising technology for the assistance, augmentation, and rehabilitation of healthy individuals and patients [1]. Research on exoskeletons has increased dramatically in recent years, as testified by the rapid growth in the number of manufacturers in the global market [2]. However, this rapid inclusion of new solutions in the market has left behind safety-related standards and procedures, which are advancing at a slower pace. Performance, ergonomics, healthcare impacts, long-term safety, and other effects on humans have yet to be studied and understood, as testified by the lack of normative requirements on data collection and analysis in the current standards [3]. Risk assessment (RA) procedures must be carried out regardless of whether exoskeletons are used as medical or non-medical devices. A standard RA normally starts by determining which hazards in principle apply to a specific device. A hazard is a potential source of harm that in a given scenario creates a so-called 'hazardous situation', i.e., a circumstance in which people are exposed to one or more hazards. Working with a device in a hazardous situation can lead to the occurrence of a 'hazardous event' (AE), resulting in harm for the user. Harm is a physical injury or damage to the health of people. The risk is generally calculated as a combination of the harm's occurrence probability and its outcome's severity. The purpose of a RA procedure is to identify each hazard (including hazardous situations and AEs) arising in all stages of the device's life cycle, classify the severity and occurrence of the harms, and estimate the risk of each identified hazard. The final step of this process is to judge whether the risk can be considered tolerable or not and, when not, to reduce the risk until it becomes tolerable [4]. Hazard analysis can start with brainstorming involving stakeholders from different fields, to provide a more comprehensive and multifaceted contribution. A RA for exoskeletons should be performed with the contribution of diverse actors to forecast the maximum number of possible AEs before properly assessing the risk level associated with each finding. The frequency rate of an event can be evaluated in pilot trials or stress tests that can vary from one device to another [5], whereas the occurrence of a known event per session can be assessed more rigorously [6,7]. Severity remains a partially subjective metric that is not always clearly classified [5]. However, both occurrence and severity shall be classified according to the specific device evaluated.
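As a purely illustrative example of how occurrence and severity can be combined into a risk estimate, the sketch below scores a hazard on the three-level scales used later in this survey and applies an invented acceptability threshold; neither the scoring nor the threshold is prescribed by the standards or by this study.

```python
# Minimal risk-matrix sketch: risk = occurrence score x severity score, with an
# invented acceptability threshold. Scales and threshold are illustrative only.
OCCURRENCE = {"rare": 1, "occasional": 2, "recurrent": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}
TOLERABLE_MAX = 4   # assumed acceptability threshold for this example

def risk_level(occurrence, severity):
    score = OCCURRENCE[occurrence] * SEVERITY[severity]
    return score, "tolerable" if score <= TOLERABLE_MAX else "risk reduction required"

print(risk_level("occasional", "moderate"))   # (4, 'tolerable')
print(risk_level("recurrent", "severe"))      # (9, 'risk reduction required')
```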
We designed this survey aiming to address the most relevant events that can be part of an exoskeleton RA and asked the participants to rate their occurrence and severity according to their experience. The survey included a limited selection of AEs extracted from existing international standards and scientific literature that can be generally applied to a wide range of exoskeleton categories. Additionally, we asked the participant to select from a list of possible causes of the proposed events, according to their experience. This work is not meant to be a RA, as the participants are distributed over a wide range of devices, whereas a real RA shall be detailed for each device and its specific characteristics. This survey aims to collect impressions and opinions from experts in the wearable robotics field and analyze which aspects should be considered when performing a RA. The items composing the presented survey are extracted from a review of the main standards applicable to exoskeletons and related publications, as reported in the following section. The method section shows the construction and composition of the survey submitted to the community. The result section summarizes the responses collected. The discussion section includes the author's comment on the results and the limitations of this work.
Literature review
Standards and legislation are not meant to provide concrete case hazards and technical safety measures. Several existing European directives apply to exoskeletons depending on their application field. The Machinery Directive [8] and the Medical Device Regulation (MDR) [9] are the principal European directives applicable to non-medical and medical applications, respectively. They cover a wide range of devices and uses; as a result, their content to address specific exoskeleton hazards is limited. They are not meant to guide users through a technical safety evaluation. However, compliance with harmonized standards automatically ensures compliance with EU directives. ISO EN 13482:2014 [10] covers the safety requirements for personal care robots. As a type C standard, it specifically addresses a type of machine and application, providing a more appropriate indication of possible hazards and safety issues. Exoskeletons used as physical assistant robots are part of the personal care robot family covered by this standard, where they are specifically classified as restraint-type physical assistant robots. ISO EN 13482:2014 lists many hazards related to the personal care domain, some of them being not (or only partially) applicable to exoskeletons. From this standard we selected the most appropriate ones for this investigation:
-Hazards related to battery charging (clause 5.2): accidental contact with the charging connections present on the robots.
-Hazards due to energy storage and supply (clause 5.3): electrical parts other than the battery connections, uncontrolled release of stored energy when switching the device off/on, and any power failure of the device that could lead to an unexpected shutdown.
-Hazards due to robot shape (clause 5.6): contacts with robot physical parts such as sharp edges, corners, and surfaces that could lead to cuts, rubbing, and other related injuries.
-Hazards due to emissions (vibrations, clause 5.7.2): vibration emissions that could create discomfort and other effects on a user's health (the standard also provides a range in Hz where vibrations shall be avoided).
-Hazards due to stress, posture, and usage (clause 5.9.2): physical stress and posture hazards due to a robot's shape, weight, inertias, and other physical factors that constrain the user.
-Hazards due to robot motion (clause 5.10): mechanical instability (clause 5.10.2) that may produce any kind of intended or unintended motion. These hazards are related to the breaking or loosening of mechanical parts but also to the instability of the procedure of attaching or removing the device from the user (clause 5.10.6). Clause 5.10 includes hazards concerning physical contact during human-robot interaction (clause 5.10.9), underlining how the robot and its components shall be designed to reduce any interaction "as far as reasonably possible". Interactions are composed of and influenced by a great variety of factors such as shears, frictions, forces and pressures, dynamic loads, and weights.
-Hazards due to incorrect autonomous decisions and actions (clause 5.12): wrong decisions and incorrect actions that might cause an unacceptable risk of harm from any personal care robot designed to make autonomous decisions and actions.
Recently, the technical report ISO/TR 23482:2020 [11] was published to support ISO 13482:2014. The document provides further guidance on the RA and risk reduction process to be conducted for a personal care robot. It contains examples of RAs for different types of personal care robots that can serve as an example for those users approaching ISO 13482:2014 to develop a RA. Clause 7.4 presents a partial RA example for a restraint-type physical assistant robot. The example includes five mechanical hazards and hazardous events related to unintended motion and unexpected control signals to the actuators, two electrical hazards related to the battery and touching live connections, one thermal hazard concerning maintenance users, one ergonomic hazard related to discomfort, and one material hazard for the emission of dust. Although ISO 13482:2014 covers many uses of the exoskeleton as a personal care robot, it does not cover their application as medical devices. When considering medical devices, ISO EN 14971:2019 [12] is the reference document for the regulation of the RA procedure. This standard presents a list of hazards for generic medical devices, where exoskeletons can fit in electrical/mechanical associated hazards. However, the requirements for exoskeletons are different from other medical electrical equipment and medical electrical systems, since exoskeletons operate with a particular degree of autonomy and exchange energy with the patient in close contact and cooperation. ISO IEC 80601-2-78:2019 [13] includes particular requirements for basic safety and essential performance of medical robots for rehabilitation, assessment, compensation, or alleviation of lost body functions. It more specifically targets rehabilitation robots in medical applications, can be read as a hazard list for the addressed devices, and presents a set of hazards applicable to exoskeletons. Another document we consider is a Federal register from the Food and Drug Administration (FDA), which regulates medical devices in the United States having general applicability and legal effect. The register vol. 80 n. 36 [14] identifies nine risks associated with exoskeleton use, each of them combined with related special controls to mitigate the risk and provide assurance of safety and effectiveness. The identified risks include instability, falls, and associated injuries. Clinical evaluation was also considered. Clinical studies often cover specific conditions of use but are not meant to analyze all the safety aspects of the device's lifecycle. Some studies evaluated occurrences of AEs starting from the FDA's list previously presented [5,7,15]. Based on this literature analysis, we selected and merged the most relevant items into a non-comprehensive list of general hazardous events that, in our opinion, are likely to be shared between exoskeleton devices. We excluded AEs like falls and collisions, normally investigated during in-field trials [16][17][18], as they are usually consequences of other primary AEs. Our list includes the following seven items:
-Unintended/unexpected motion: Either a human or device fault leading to an undesired or unexpected motion. An unintended motion is considered as a motion triggered by the user in an unintended way (i.e., a mistake in using the interface), while an unexpected motion also includes any device motions the user did not mean to trigger (i.e., a device controller fault in the motion planning).
We merged "unintended motion" and "unexpected motion" into a single item since the line between the two is not always clear and easy to define. The item also includes excessive torques applied by the actuators and trajectory faults exceeding limits [16,19,20].
-Unintended shutdown: Either a human or device fault leading to an unwanted or unexpected device shutdown. Although unintended shutdowns can be thought of as part of unintended motions (previous item), they have been selected to form a single item. This allows us to specifically address all the situations where the device shuts down without causing unwanted motions.
-Skin and soft tissue injury: Bruising, skin abrasion, pressure sores, soft tissue injury, primarily from the attachment points between the user's body and the exoskeleton device. Skin and soft tissue injuries are one of the most investigated AEs in the literature, with a high rate of occurrence during exoskeleton use [6,16,[20][21][22][23][24][25]. They can be consequences of many factors and cover several types of skin injury.
-Misalignments: Offset between the exoskeleton and the human joints. Misalignments can be considered a source of hazards since they produce higher forces at the interface, thus contributing to skin injuries, discomfort, and pain [16,17,26]. Misalignments are a very important factor to negotiate when using or designing exoskeletons [1,[27][28][29][30]; thus, the item has been included to highlight the experience users might have with them. The relation between misalignment and injury/discomfort is still largely unknown and requires attention.
-Electrical fault: Battery failure, faulty cabling and connectors, power shutdown, discharges. Electrical hazards are often divided into battery-related hazards and other types of accidental contacts, such as electrical malfunctioning [17,19]. This single item encompasses all aspects of this family and reduces dispersion.
-Hazardous vibrations: Vibrations making the motion difficult to control and/or uncomfortable. Hazardous vibrations have been scarcely investigated in previous studies. Conversely, they have been mentioned as a source of hazards in the standards [10]. The item is therefore included to collect its relevance according to exoskeleton experts/users.
- User error: User action or lack of user activity while using the device that leads to a different result than intended by the manufacturer. User errors comprise a wide family of possible hazards and events. For the sake of simplicity, they have been gathered into this single item.
Methods
The survey was publicly announced and disseminated through the major mailing lists of the exoskeleton, human biomechanics and robotics communities. The survey was completely anonymous. Informed consent was presented at the beginning of the survey, where participants allowed responses to be recorded, analyzed and published. The survey was composed of 16 questions divided into three sections, plus an introductory section.
In the introduction, participants were asked to select their professional/academic background and to provide a brief description of the exoskeleton that they used or operated (lower/upper limb, active/passive, rehabilitation/assistance robot, number of degrees of freedom of the device, and the commercial name, if applicable).
Section 1: frequency evaluation
In this section, the users were asked to evaluate how often they had to deal with each of the seven items presented. The following statement was proposed with a frequency scale composed of three levels [5]: "Which of the following events have you experienced (or observed) during the use of exoskeletons? For each of them, please select how often you had to deal with it."
- Recurrent: It happens regularly, from once per day to several times per session.
- Occasional: It happens occasionally, from several times per year to once per week.
- Rare: It happens rarely, from never to once per year.
At the end of the section, the users could add other remarkable events not mentioned in the list (and specify the corresponding frequency level) and any further comments they might have.
Section 2: severity evaluation
The second section was designed to evaluate the severity level of each item proposed, following the same structure as the previous section. The following statement was proposed with a severity scale composed of three levels [5]: "For each of the aforementioned events, select their experienced severity (see definition below). The focus is on the consequences on user health, e.g., potential injuries or adverse reactions. In case of more severe outcomes, select the most severe."
- Severe: The event is incapacitating. It requires medical attention/treatment, and the use of the exoskeleton cannot be continued (e.g., bone fractures, skin lesions with complications).
- Moderate: The event interferes with the use of the exoskeleton but can be managed with simple measures. No prolonged effects (e.g., skin lesions without complications).
- Minor: The event is noticeable but easily tolerable. No medical intervention is needed and the use of the exoskeleton does not have to be interrupted, or only for a short rest (e.g., minor discomfort, reddening).
Once again, users could add other remarkable events not mentioned in the list (and specify the corresponding severity level) and any further comments they might have had.
Section 3: Evaluation of causes
The last section investigates the potential causes related to each item and the possible dependencies between them. Possible causes, drawn from the literature analysis and from experience, are presented for each item. The users could select more than one option and add other non-listed causes they had experienced. The survey concludes with the option to add further AEs and/or causes, along with any further feedback or input on this survey.
Hazard relevance score
Hazards are evaluated in terms of the probability of occurrence and the severity of harm. Standards do not specify metrics to evaluate the probability and severity of harm, allowing organizations to select the method that is most suitable for them, either qualitative or quantitative [4,10,31]. Typical RA approaches use a risk matrix to indicate the level of acceptable risk for different combinations of severity and frequency levels [17,19,32,33]. Developing a RA is beyond the scope of this work, and it could not be conducted for the wide range of devices and applications covered by the respondents of this survey. However, inspired by RA procedures, we propose a composite score for each presented item, to create a priority list of which hazards are most considered/relevant for the community. As shown in Fig. 1, a score is given to the severity and frequency options (minor/rare = 1, moderate/occasional = 2, severe/recurrent = 3). Each frequency score is matched with the corresponding severity given by the same participant. The combination results in six possible values divided into three categories: (1-2) Low Relevance (LR), (3-4) Moderate Relevance (MR), and (6-9) High Relevance (HR). Although the proposed score is inspired by risk level evaluation in RAs, the score levels in Fig. 1 were chosen arbitrarily, based on the authors' experience and judgment. The proposed scores are not validated. They are used solely to get a rough estimation for comparisons between hazards and are not meant to represent any risk level.
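As a concrete reading of the scheme above, the short Python sketch below (not part of the survey tooling; the function and variable names are our own) maps one respondent's frequency and severity answers to the composed score and its LR/MR/HR category.

```python
# Illustrative sketch of the composed relevance score described above.
# The level-to-score mapping and the LR/MR/HR bins follow the text;
# the function and data layout are assumptions made for this example.

FREQ_SCORE = {"rare": 1, "occasional": 2, "recurrent": 3}
SEV_SCORE = {"minor": 1, "moderate": 2, "severe": 3}


def relevance(frequency, severity):
    """Multiply the frequency and severity scores and bin the product."""
    score = FREQ_SCORE[frequency.lower()] * SEV_SCORE[severity.lower()]
    if score <= 2:
        label = "LR"   # low relevance (products 1-2)
    elif score <= 4:
        label = "MR"   # moderate relevance (products 3-4)
    else:
        label = "HR"   # high relevance (products 6 or 9)
    return score, label


# Example: a respondent rating misalignments as "recurrent" with "minor" outcomes.
print(relevance("recurrent", "minor"))   # -> (3, 'MR')
```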
Results
The survey received 65 answers. 71% of the respondents (46 participants) worked in research fields, including 9 Ph.D. students (14%). 15 respondents (23%) were engineers from companies. The remaining 6% (4 respondents) were physiotherapists. Concerning the type of exoskeleton, the majority of respondents had experience with lower limb exoskeletons (59%), whereas 16 (25%) dealt with upper limb exoskeletons. 10 participants dealt with both upper limb and lower limb devices. One participant claimed not to work directly with exoskeletons. Nearly all participants (91%) dealt with active devices, 16 (25%) worked with passive devices, and 11 with both active and passive devices. Concerning the field of use, 28 participants declared that they deal with exoskeletons for rehabilitation (43%), 14 with assistive exoskeletons (22%), and 19 with assistive-rehabilitative devices (29%). The industrial field was represented by 7 participants (11%). One participant did not complete the frequency evaluation.

Fig. 1 Scores are crossed and multiplied to get the relevance score of each item. Three combinations are considered: resulting score from 1 to 2: low relevance (LR); resulting score from 3 to 4: moderate relevance (MR); resulting score from 6 to 9: high relevance (HR)
Table 1 Frequency responses
For each item the number of responses collected in the frequency section for "Recurrent", "Occasional" and "Rare". Column "Tot" is the total number of responses for each item
Table 2 Severity responses
For each item the number of responses collected in the severity section for "Severe", "Moderate", and "Minor". Column "Tot" is the total number of responses for each item

Results from Table 2 are presented in a bar plot with the % of each severity class for each AE. Red bars represent the % of respondents who experienced severe outcomes from the event, yellow bars the % who experienced moderate outcomes, and green bars the % who experienced minor outcomes. Numbers on the bars refer to the exact % value recorded

Fig. 4 Resulting scores for each AE according to the results in Table 3, presented in a bar plot with the % of each resulting score associated with the answers from the severity and frequency feedback. A resulting score from 1 to 2 corresponds to low relevance (LR), a score from 3 to 4 to moderate relevance (MR), and a score from 6 to 9 to high relevance (HR). Numbers on the bars refer to the exact % of answers associated with each relevance score
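For readers who want to reproduce this style of plot from their own tallies, a minimal matplotlib sketch is given below; the event names follow the hazard list above, but the counts are invented placeholders, not the survey data in Table 2.

```python
# Minimal sketch of a stacked percentage bar plot like the one described in
# the captions above. Counts are invented placeholders, not the survey data.
import matplotlib.pyplot as plt
import numpy as np

events = ["Unint. motion", "Unint. shutdown", "Skin injury", "Misalignment",
          "Electrical fault", "Vibrations", "User error"]
# hypothetical (severe, moderate, minor) counts for each event
counts = np.array([[8, 15, 40], [2, 10, 45], [5, 20, 38], [4, 18, 40],
                   [1, 8, 50], [1, 6, 48], [7, 14, 42]], dtype=float)
perc = 100 * counts / counts.sum(axis=1, keepdims=True)

fig, ax = plt.subplots(figsize=(8, 4))
bottom = np.zeros(len(events))
for i, (label, colour) in enumerate([("Severe", "tab:red"),
                                     ("Moderate", "gold"),
                                     ("Minor", "tab:green")]):
    ax.bar(events, perc[:, i], bottom=bottom, label=label, color=colour)
    bottom += perc[:, i]
ax.set_ylabel("% of responses")
ax.legend()
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.show()
```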
Discussion
The different backgrounds of the respondents hamper the achievement of specific conclusions and observations for a single device category. Respondents may have referred to one specific device but also to exoskeletons in general.
We can still suppose that the result is an average, general estimate for the field, which cannot be applied to any specific device, and we shall avoid misunderstandings that might lead readers to interpret the results as a general RA. However, the result of this survey also represents a picture collected from an audience of real users operating outside laboratory or clinical conditions. Such a result can favor a more concrete view of how exoskeletons are perceived, not limited to research and the scientific literature. The items presented as a list of hazards that usually/typically apply to exoskeletons can only be taken as a reference for RAs. Frequency and severity feedback shall be read with the knowledge of the variety of devices and conditions considered by the respondents, and cannot be related to an accepted frequency/severity classification of a specific RA.
The frequency analysis presented low variability among the responses. All the items, except misalignments, received a "rare" occurrence score from at least 50% of the participants, while "recurrent" accounted for less than 10% of the answers. The severity evaluation also did not show significant trends, with the consequences of the events on average rated as "minor". Misalignments were rated as "recurrent" by 25% of the respondents. A recurrent event was proposed as something happening from once a day to several times per session, meaning that misalignments are often a daily problem for the users. Our definition of misalignment was "an offset between the human and the device joint". This event is impossible to avoid, considering that the robot kinematics is only an approximation of the human body. Conversely, misalignments also received a 24% "rare" occurrence rate. Misalignments might be considered negligible for some applications. One comment underlined how they were observed for paraplegic users (specifically taller users) and not for healthy users, suggesting that misalignments were noticed only for remarkable offsets. The controlled conditions of clinical environments might influence this result, since improper fitting and imprecise positioning are more easily avoided there. None of the remaining events were frequently experienced, with electrical malfunctioning collecting the lowest frequency score. This is understandable for devices in a commercial or pre-commercial stage, since requirements for electrical safety are far clearer than for all the other event types. IEC 60601 is indeed very detailed on specific rules and safe limit values. This can perhaps be considered a simple field to comply with, in terms of knowing what can and needs to be done to make the system as (electrically) robust and safe as possible.
Table 3 Relevance score calculated responses
For each item the number of responses matching scores 1-2 (Low Relevance), 3-4 (Moderate Relevance), and 6-9 (High Relevance). The score is given by the product of the frequency and severity scores. Column "Tot" is the total number of responses for each item

While the frequency evaluation might be based on a quantitative scale (one can record the occurrence rate of an event over a determined time unit), severity mostly relies on qualitative evaluations. The absence of clear and measurable criteria able to define severity makes the results more dependent on the specific device and application considered. Events leading to a severe injury are relatively rare in exoskeletons; for this reason, respondents might have less experience in rating severe AEs. In a recent review on exoskeleton risk management [16], two bone fractures were reported due to the occurrence of misalignments, while the remaining listed events led to no injuries or to skin damage that resolved in a few days. Another review on AEs in stationary robots (e.g., Lokomat) collected 3 severe AEs out of 169 reported events, although 43 remained unrated. However, the reported number of AEs could also be an underestimation, since reviews are limited to published reports. From the results, more than 10% of the participants experienced severe outcomes after unintended/unexpected motion (19%) and use error (12.7%). Unintended and unexpected motions can indeed lead to falls, and thus have a higher hazardous potential compared to other hazards. The 19% of consequences classified as severe can in any case raise a flag, since "severe" was associated with incapacitating outcomes. From some collected comments, the participants pointed out how their severity score was relatively low because they were testing the device under the supervision of clinical staff, with prevention measures, or under very limited and constrained conditions, which reduced the overall risk level. As we stated for misalignment, frequency scores are normally influenced by the presence of clinical staff supervision and controlled conditions. The fact that occurrence and severity are based on tests performed in a clinical environment may indeed affect the perception of the users in evaluating the events. Both severity and frequency can be strongly affected by studies performed in the presence of staff members monitoring the user and the system. What the community can communicate about their perceived experience of exoskeleton safety can be far from reality. This point is stressed by the poor knowledge that technical personnel may have about safety aspects. Additionally, safety tests without intervention are difficult to perform when they include humans. Part of the respondents might also have provided a more technical experience rather than knowledge of medical or physical outcomes. A lack of awareness from users and technicians might also explain the low severity score of the experienced outcomes. For example, skin injuries can appear even days after exoskeleton use, and the user may not perceive the injury when doffing the device. In some cases, the supervisor might not pay attention to the harm the user is experiencing. Clinical studies normally focus on gait or task-related parameters without considering AEs, which can be difficult to detect. The combination of severity and frequency scores as indicated in Table 3 (and Fig. 4) produced a generally low relevance score for the proposed events.
This can be interpreted as a matured confidence of the users towards these devices, but also as a lack of experience regarding the possible hazards and risks in these applications. The existing literature on exoskeleton safety is indeed much more limited than the literature on electromechanical features, design, or control strategies. AEs are also poorly reported in publications, although investigators are obliged to list those that occur and to take action in case of serious AEs. Further analysis of the events' causes showed some more recognizable trends. 60% of the participants identified electrical faults and loss of communication as causes of unintended shutdowns. However, electrical faults also received the lowest risk score in the evaluation, in contrast with the unintended shutdown risk results. Electrical faults, in turn, were mainly attributed to issues with cables and precarious connections, although the frequency of these events was one of the lowest recorded. As a consequence, unintended shutdowns also received a low frequency score.
Skin damage was associated with mechanical contacts (73.8%) and misalignments (60%). This result shows how the design of the device plays a key role in defining safe shapes, surfaces, and attachment designs, which can otherwise lead to harmful and uncomfortable use. Misalignments can be related to the design phase of the device. Attachment strategies and materials directly influence the capability of the device to be well aligned and to maintain the desired configuration. Misalignments were linked to three main aspects, namely cuff design, incorrect cuff positioning, and oversimplified kinematics. As previously said, the simplicity of the device could reduce the overall complexity but increase deviations from real human kinematics. Cuffs and interfaces shall be a priority for developers to ensure safe human-machine contact and communication. However, incorrect cuff positioning (too high or low, rotated, etc.) could also be considered a use error rather than a design error. More than 50% of the participants identified unintended triggers (the exoskeleton incorrectly reacting to body movements) and sensor reading failures as potential causes of unintended and unexpected motions, whereas half of the participants also identified use errors as an important cause. If we analyze use errors, respondents highly agree on insufficient training as the major cause (67.7%). Training is one of the mitigation measures suggested by the FDA to decrease the level of risk. The other two major items for use errors are "wrong use of exoskeleton interface" and "wrong settings", which might also be related to a lack of training and practice with the device, equally contributing to the occurrence of an AE. Together with training, use errors can be mitigated by improving usability.
Limitations
The relatively limited number of respondents, together with the imbalance of the results in device type and participant background (especially between industry and academia), did not allow us to extract separate conclusions for the different domains. Thus, no targeted analysis of the proposed events was possible.
Hazards are not all applicable in the same way to all devices. A generalization had to be applied, considering that the same hazard can lead to one outcome in one device and situation, and to a different outcome in another device and situation. Due to this variability, this work can only claim to collect feedback on exoskeleton hazards and hazardous events in terms of occurrence, severity, and causes that the respondents have experienced. The background and experience of each participant can represent strong confounders for the analysis of the results. Attention shall be paid since the provided feedback can be driven by the participant's perceived risk, which can change noticeably depending on multiple factors, such as the different experience of AEs when dealing with exoskeleton prototypes vs commercial devices, or events involving healthy users vs scenarios with patients.
The vast majority of respondents were researchers, presumably working with exoskeletons in a very controlled environment. For this reason, the analysis could only be performed based on the authors' expertise and best practice, without knowing whether the respondents would have rated the items in the same way in an actual RA. Further improvements would be to expand the survey to differentiate between real-world applications and clinical trials in a laboratory, as well as between commercial devices and prototypes.
Conclusions
This article is one of the first attempts to collect feedback from different fields and applications in the exoskeleton community. It represents an interesting point of view on how safety factors can be perceived by real users and experts in the sector.
The participants could answer about the relevance of exoskeleton hazards in terms of occurrence and severity of outcomes, as well as potential causes. The conducted survey collected user experiences and general considerations on the safety of these devices, highlighting relevant connections among the presented events and pointing out important characteristics that researchers and developers shall focus on. Misalignments were the most recurrent adverse event (AE) and were mainly linked to design issues. Nevertheless, a consolidated agreement on the definition of misalignment is still missing, which may have introduced data dispersion.
Unintended motion was on average rated as the most dangerous event and was found to be due to sensor faults and human errors, such as insufficient training and a poor understanding of the device.
Overall, and somewhat unexpectedly, the majority of AEs did not reach high severity and frequency ratings. However, these results cannot be taken as a real risk assessment (RA). Each manufacturer shall decide what combination of frequency and severity is acceptable for each specific device and its intended use. The items presented to the respondents and the corresponding results can only be taken as a reference for future RAs.
The use of exoskeletons outside clinical environments and without expert personnel is still limited. These controlled conditions can influence the perception of how the device can produce AEs. For this reason, developers shall also stress testing in scenarios as close as possible to real-world conditions. | 2023-05-31T14:12:59.299Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "e329cc531e5139b25e830f931823a21605c025a4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "e329cc531e5139b25e830f931823a21605c025a4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
80186048 | pes2o/s2orc | v3-fos-license | Follicular Fluid Cortisol Releasing Hormone (CRH) Levels and Assisted Reproductive Treatment (ART) Outcomes
Purpose: Can Cortisol Releasing Hormone (CRH) levels in follicular fluid predict outcomes following assisted reproductive treatment (ART) cycles? Methods: Prospective cohort study of 50 women undergoing in vitro fertilisation (IVF)/intra-cytoplasmic sperm injection (ICSI) cycles over a two-month study period. All patients were treated on the long stimulation protocol; follicular fluid was aspirated and pooled for each patient. The samples were processed appropriately and assayed using a CRH radioimmunoassay (RIA). Results: This study confirmed that CRH was present in follicular fluid. The average level detected was 173 ± 9 pg/mL (mean ± standard error of the mean [SEM]). The data suggest a positive correlation of CRH follicular fluid levels greater than 145 pg/mL with successful ART outcomes. Conclusion: The data indicate a positive correlation between ART outcomes and the presence of follicular fluid CRH levels greater than 145 pg/mL. The results should be interpreted with caution due to the small sample size and the pooling of follicular fluid per patient. Furthermore, the pooling of follicular fluid is not representative of CRH levels in an individual follicle, and thus of an individual mature oocyte. This study serves as a reminder of what has previously been hypothesised.
Introduction
The current quoted live birth rate in the United Kingdom (UK) following assisted reproductive treatment (ART) is 25% [1]. Reproductive clinicians aim to individualise treatment cycles to optimise cycle outcomes for patients. One such method is the transfer of more than one embryo per cycle attempt, subsequently increasing the multiple pregnancy risk [2].
In order to increase the current ART pregnancy rate and conform to the national drive to reduce the multiple pregnancy rate, fertility clinics are working towards improving embryo selection. Whilst most human preimplantation embryos are morphologically variable, containing both healthy and abnormal cells [3] [4], it is thought that successful implantation is associated with embryos containing only a limited proportion of cells of abnormal appearance. Good quality embryos are, in turn, thought to be associated with oocytes in which nuclear and cytoplasmic maturation during oogenesis has been appropriate [5], and this is likely to depend, at least in part, on adequate provision of nutrients and growth factors to the developing oocyte. The microenvironment of human follicles is critical for normal oocyte development, folliculogenesis and timely ovulation.
Follicular fluid provides the environment in which oocyte maturation occurs, thus affecting the oocyte's potential for fertilisation and embryonic development.
Studies have shown that a variety of growth factors and cytokines could play a role in oocyte development. These include members of the transforming growth factor, EGF, IGF, activin and inhibin families, which can affect ovarian function.
Many of these factors have been implicated as regulators of gonadal steroid secretion, corpus luteum function, embryonic development and implantation [6] [7].
Cortisol Releasing Hormone (CRH) is present in the ovary. CRH gene expression has been found to be predominantly abundant in primary antral follicles and mature human ovarian follicles, implicating an autocrine role for ovarian CRH in follicular maturation [8], ovulation, luteolysis and oocyte maturation processes [9] [10]. CRH peptide has also been detected in the cytoplasm of theca and stromal cells of ovarian follicles at all stages of development, as well as in the corpora lutea of both human and rat ovaries. Kerdelhue and his colleagues have reported that human follicular fluid from women undergoing in vitro fertilisation (IVF) contained CRH [11]. Theory would suggest that low levels of CRH would be expected in mature follicles, as the hormone inhibits thecal steroid production [12], which is required for oocyte maturation. However, it is unknown whether follicular fluid CRH levels reflect the peptide's inhibitory function in thecal cells, or whether CRH has additional positive roles, modulating oocyte growth and/or inflammatory events leading to ovulation.
We aim to investigate the role of CRH in oocyte quality through the measurement of peptide levels in ovarian follicular fluid and relate them to pregnancy rates. Furthermore, this study looks at the objective predictive value of follicular CRH for IVF outcomes. This may provide evidence that the peptide can either promote or hinder quality oocyte production.
Patients
Women undergoing ART [conventional IVF or intracytoplasmic sperm injection (ICSI)] treatment at Oxford Fertility, UK, were prospectively recruited during the study period. A total of 50 women were recruited into the study and gave valid written consent. Ethical approval for the study was obtained from the Oxfordshire Clinical Research Ethics Committee, reference number CO2.211. The patients were treated with the long stimulation protocol, with pituitary desensitisation commencing in the luteal phase of the menstrual cycle using a gonadotrophin releasing hormone agonist (GnRHa) (Nafarelin; Synarel, Searle, High Wycombe, UK) intranasally, 400 µg twice daily. Controlled ovarian stimulation was performed with recombinant follicle stimulating hormone (r-FSH) (Gonal F, East Sussex, UK; Puregon, Organon, Cambridge, UK), which was commenced once pituitary desensitisation was confirmed by oestradiol levels less than 73 pmol/L.
The initial dosage of r-FSH ranged between 150 and 375 IU, dependent on the age of the female patient, her early follicular phase FSH level and her body mass index. Transvaginal ultrasound scans and serum oestradiol levels were performed on three or four occasions to monitor follicular growth. When at least 3 follicles greater than 18 mm were present, 5000 IU or 10 000 IU of human chorionic gonadotrophin (hCG) (Profasi; Serono, Welwyn Garden City, UK) was administered as a single injection to induce final oocyte maturation. Transvaginal ultrasound-guided oocyte retrieval was performed under sedation 35 hrs after hCG administration. Embryo transfers (ETs) were performed 48 hrs following oocyte retrieval. Progestogen pessaries (Cyclogest, Hoechst, Hounslow, UK) were given for luteal phase support, commencing the day after oocyte retrieval. The pessary was continued for 2 weeks, when a urinary pregnancy test was undertaken to confirm pregnancy.
Sample Collection
The first follicular aspirate from each follicle (prior to flushing of the follicle with media) was collected for each patient. The average number of follicles recruited per patient was 12.44 ± 1.04 (mean ± standard error of the mean [SEM]). The aspirates from each patient were pooled and the pooled sample was then centrifuged immediately at 300 × g for 10 mins at 4˚C. The supernatant was collected and stored at −20˚C, awaiting peptide extraction.
Follicular Fluid Extraction
The follicular fluid CRH was first extracted with methanol prior to measurement by CRH radioimmunoassay (RIA), owing to the presence of Corticotrophin Releasing Hormone Binding Protein (CRHBP) in human follicular fluid [8], which interferes with CRH estimation [13]. The samples were treated with 3 volumes of ice-cold methanol, vortexed and left to stand for 10 mins on ice. The tubes were then centrifuged for 15 mins at 2000 × g; the supernatants were transferred to fresh tubes and dried under a stream of nitrogen at 40˚C. Dried samples were then stored at −20˚C until the assay. Follicular fluid extracts were reconstituted to their original volume with assay buffer and assayed by CRH RIA.
CRH1−41 Antibody
The CRH 1−41 antibody used had been developed within the same laboratory as part of an earlier study. Rabbit anti-CRH 1−41 (code M2) was raised against synthetic human CRH 1−41 coupled to bovine thyroglobulin using glutaraldehyde [13].
Iodination of CRH1−41
Synthetic CRH 1−41 peptide was iodinated using a modification of the iodogen method of [14]: the mixture containing 5 µg of synthetic CRH
CRH1−41 RIA Protocol
A 10-point standard curve, covering the range 10 pg/mL to 10 ng/mL, was prepared. The reaction was held at 4˚C overnight, followed by separation by the addition of 200 µL of a 1/10 dilution of sheep antiserum against rabbit Fc region (Igi Ltd, Sunderland) containing 1% normal rabbit serum (NRS: Gibco BRL, Life Technologies, Paisley, UK) and 4% PEG 6000 (Fison, Manchester, UK) in assay buffer. The tubes were incubated at room temperature for 30 mins, after which 1 mL PBS/0.01% Triton X-100 was added and the tubes were centrifuged at 4000 × g for 30 mins at 4˚C. The supernatants were aspirated and radioactivity in the pellets was measured in an LKB CompuGamma counter. Dilution of standards, samples, antiserum and labelled peptide was carried out in the assay buffer.
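The paper does not state how unknown concentrations were read from the standard curve; purely as an illustration of common practice for competitive RIAs, the sketch below fits a four-parameter logistic (4PL) model to a log-spaced 10-point standard series spanning 10 pg/mL to 10 ng/mL and inverts it for an unknown sample. The spacing of the standards, the 4PL choice and all counts are assumptions made for the example.

```python
# Illustrative sketch, not the authors' analysis: fit a 4PL curve to a 10-point
# RIA standard series (10 pg/mL - 10 ng/mL, log-spaced here by assumption) and
# interpolate an unknown sample. The counts are invented placeholder data.
import numpy as np
from scipy.optimize import curve_fit

standards = np.logspace(1, 4, 10)          # 10 pg/mL ... 10,000 pg/mL (10 ng/mL)
cpm = np.array([9500, 9100, 8400, 7200, 5800, 4300, 3000, 2100, 1500, 1200.0])

def four_pl(x, top, bottom, ec50, hill):
    """Counts bound as a function of unlabelled CRH concentration."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

params, _ = curve_fit(four_pl, standards, cpm,
                      p0=[cpm.max(), cpm.min(), 300.0, 1.0], maxfev=10000)

def concentration(sample_cpm, top, bottom, ec50, hill):
    """Invert the fitted 4PL to estimate concentration from measured counts."""
    ratio = (top - bottom) / (sample_cpm - bottom) - 1.0
    return ec50 * ratio ** (1.0 / hill)

print(concentration(5000.0, *params))       # estimated pg/mL for an unknown sample
```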
Embryo Grading
The embryo quality was assessed immediately prior to embryo transfer (ET), 42 hrs after insemination. The embryo grading system ranged from A to E, with grade A being the best in terms of nucleoli, fragmentation and cell division (Figure 1): Grade A: regular or only slightly irregular blastomeres, with or without minor fragments; Grade B: irregular blastomeres, with or without minor fragments; Grade C: irregular or regular blastomeres, up to 30% fragments; Grade D: irregular or regular blastomeres, more than 30% fragments; and Grade E: fragmented (usually unsuitable for transfer).
Results
The mean age of the recruited female patients was 34.10 (range 22 -43) years.
The aetiology of their infertility included male factor infertility, tubal disease, unexplained infertility, endometriosis and polycystic ovarian syndrome (48%, 24%, 16%, 10% and 2%, respectively). Eighty percent of the patients had an early follicular phase (defined as day 2 - 5 of the menstrual cycle) FSH level that was ≤ 10 IU/L. Sixty-two percent of the patients had no previously reported pregnancy. The demographics are shown in Table 2.
The mean ± SEM oestradiol level on the day of hCG administration was 4238.30 ± 310.56 pmol/L. The mean ± SEM number of follicles recruited, oocytes retrieved and oocytes fertilised were 12.44 ± 1.04, 8.28 ± 0.64 and 5.32 ± 0.45, respectively (Table 3).
Three patients did not have an ET. Of these, two patients had failed fertilisation, whilst one patient became acutely unwell after oocyte retrieval, leading to all of her embryos being frozen. With the exception of 2 patients who underwent a blastocyst stage ET 5 days after oocyte collection, the remaining 45 patients underwent a day 2 or 3 ET. The mean number of embryos transferred per patient was 1.93 (87/45). The grading of these is shown in Table 4.
In total, 12 of the 50 patients (24%) had a positive pregnancy test. Three of the 12 (25%) pregnancies ended as first trimester losses, whilst the other 9 patients had an uneventful singleton live birth. Thus, the live birth rate per stimulated cycle was 18% (9/50).
This study confirmed the presence of CRH in follicular fluid. The average level detected was 173 ± 9 pg/mL (mean ± SEM). Figure 2 demonstrates the distribution of the CRH levels in the pregnant and non-pregnant groups. In this study, Pearson correlation (with significance taken at the 0.05 level) indicated no link between the CRH concentration and the aetiology of infertility or other recognised favourable factors for IVF treatment, such as the age of the female patient, FSH level, parity, oestradiol level, number of follicles recruited, number of oocytes retrieved/fertilised and ET grading (Table 5).
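As an illustration of the kind of correlation screen reported in Table 5, the sketch below runs Pearson correlations of CRH concentration against a few continuous covariates on invented placeholder data; categorical factors such as aetiology of infertility would need a different treatment (e.g., a group comparison), and none of the numbers below are the study's data.

```python
# Illustrative sketch only: a Pearson correlation screen like the one described
# above, run on invented placeholder data (not the study's patient records).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 50
crh = rng.normal(173, 60, n)               # pooled follicular-fluid CRH, pg/mL
covariates = {
    "age": rng.uniform(22, 43, n),
    "fsh": rng.normal(7, 2, n),
    "oestradiol": rng.normal(4238, 2200, n),
    "follicles": rng.poisson(12, n).astype(float),
}

for name, values in covariates.items():
    r, p = pearsonr(crh, values)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"CRH vs {name}: r = {r:+.2f}, p = {p:.3f} ({flag})")
```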
Discussion
This study has demonstrated the presence of immunoreactive CRH in methanol extracted, stimulated ovarian follicular fluid, with a range of 20 -285 pg/mL.
These levels are almost 20-fold higher than those reported by Kerdelhue and his colleagues (5 - 15 pg/mL), in which the peptide was extracted from follicular fluid using Sep-Pak C18 [11]. Mastorakos and his colleagues have also reported immunoreactive CRH in ovarian follicular fluid, obtaining CRH levels between 3.2 - 6.7 pmol/L (equivalent to 16 - 33.5 pg/mL) using acid extraction followed by high pressure liquid chromatography after reconstitution [10]. The differences in the CRH levels found could be secondary to patient variation, to the heterogeneity between the methods employed for sample extraction in each study, and to the particular RIA used for measurement. It is possible, for example, that the extraction methods used in the previous two studies did not separate all of the CRH from any binding protein present in the sample; in these circumstances, the CRH measurements obtained would underestimate the true levels present in follicular fluid. Methanol extraction, on the other hand, is known to extract CRH from body fluids with efficiency approaching 100% [15]. It is also possible that the CRH antibodies used in the three different RIAs have differing cross-reactivities to the various molecular species of CRH, for example the CRH precursor, proCRH, and its intermediate metabolite, CRH 125−194. The CRH antibody used in the present study has been shown to detect both of these CRH species in addition to the 1−41 peptide [16], but the cross-reactivity of the other CRH antibodies with these larger molecular forms is not reported. Additional chromatographic studies would have to be performed to determine the molecular size of the immunoreactive CRH detected in follicular fluid samples.

The human follicular fluid CRH could be derived from several sources. Firstly, it might originate from the theca cells by diffusing through the granulosa layer into the follicular antrum. Secondly, it may be derived from the stromal cells of the mature follicle, as described by Asakura and colleagues [8].

No correlation was found between follicular fluid CRH and serum oestradiol levels measured in this study. This differs from previous findings, where CRH suppressed the release of oestrogen (and insulin-like growth factor-I) from primary cultures of rat granulosa cells [17] and was able to inhibit the IL-1 mediated production of oestrogen and progesterone in granulosa-theca cells from women undergoing IVF treatment [12]. This study suggests a positive correlation between successful IVF outcomes and follicular fluid levels of CRH greater than 145 pg/mL. This finding is unexpected given the existing knowledge that the hormone inhibits thecal steroid production [12], which is required for oocyte maturation; this would imply that low CRH concentrations would be expected in mature follicles. However, no direct relationship was found between the level of CRH and the pregnancy outcome following IVF treatment, which may be expected if follicular fluid CRH is beneficial rather than inhibitory (through its effect on steroid production). Furthermore, the CRH measurement could not be used as an additional objective biochemical marker for embryo selection/grading.

The conclusions may be limited by the small sample size of the study, or by the experimental design adopted, with all follicular fluid aspirates from each patient being pooled to provide a mean value per mL per patient. CRH concentrations from individual follicular fluid aspirates would provide an absolute value per follicle, with a greater correlation to the oocyte from each follicle in relation to the fertilisation rate and embryo grading over pregnancy outcome. This method would account for co-variables known to impact pregnancy outcome, such as female age, parity, number of years of infertility and aetiology of infertility.

Conclusion

In summary, the study indicates a positive ART outcome with follicular fluid CRH levels greater than 145 pg/mL. However, the limited study design precludes the use of follicular CRH as an objective predictor of IVF outcomes. This positive effect may be secondary to CRH's potential roles in follicular [8] and oocyte maturation [18], leading to improved oocyte quality and, consequently, a higher pregnancy rate.
Table 1. The recovery of CRH in methanol, C18 Sep Pak and non-extracted follicular fluid samples.

50 µL of M2 antibody at a 1/3000 dilution was added to duplicate 200 µL aliquots of standard or sample and allowed to equilibrate for 24 hrs prior to the addition of 50 µL of 125I-labelled CRH 1−41 [15,000 - 22,000 counts per minute (c.p.m.)/tube].
Table 5 .
Correlation of concentration of CRH with age, parity, FSH level, aetiology of infertility, level of oestradiol, number of follicles recruited, number of oocytes retrieved, number of fertilised embryos and grading of transferred embryos. | 2019-03-17T13:10:35.583Z | 2017-12-07T00:00:00.000 | {
"year": 2017,
"sha1": "3246257a427af90261981f7a41e1d99a4479af51",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=81192",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c9c26f2d527e6ffe01f47d837b4f6ca97d5c9259",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251845990 | pes2o/s2orc | v3-fos-license | An approach in medical diagnosis based on Z-numbers soft set
Background: In the process of medical diagnosis, a large amount of uncertain and inconsistent information is inevitably involved. Many fruitful results have been obtained for medical diagnosis by utilizing different traditional mathematical tools for handling uncertainty. However, studies on measuring the reliability of the information involved are rare, and the existing methods cannot give the reliability of every judgment on all symptoms in detail. Objectives: It is quite essential to recognize the impact on the reliability of the fuzzy information provided under inadequate experience, lack of knowledge and so on. In this paper, the notion of the Z-numbers soft set is proposed to handle the reliability of every judgment on all symptoms in detail. The study in this paper is an interdisciplinary approach towards rapid and efficient medical diagnosis. Methods: An approach to medical diagnosis based on the Z-numbers soft set (ZnSS) has been developed and is used to estimate whether two patterns or images are identical or approximately so. The notion of the Z-numbers soft set is proposed by combining the theory of soft sets and the theory of Z-numbers. The basic properties of the subset, equality, intersection, union and complement operations on Z-numbers soft sets are defined, and the similarity measure of two Z-numbers soft sets is also discussed in this paper. Results: An illustrative example similar to existing studies is presented to verify the effectiveness and feasibility of the proposed method and to demonstrate the solution characteristics. Conclusion: Diagnosing diseases from uncertain symptoms is not a direct and simple task at all. The approach based on ZnSS presented in this paper can not only measure the reliability of the information involved, but also give the reliability of every judgment on all symptoms in detail.
This manuscript proposes a new concept of Z-number soft sets and defines some operations on Z-number soft sets, as well as a similarity measure for two Z-number soft sets. From a general perspective, the paper is interesting; however, some issues should be addressed to improve it. There are some suggestions and comments.
Thank you very much for the suggestions and comments. We will answer them carefully one by one. 1. The presentation quality of the article needs to be improved urgently. Some paragraphs and formats do not read well in this manuscript. There are many sentences in the article with obvious grammatical errors or unclear expressions. For example, there are three grammatical errors in the last sentence of the abstract; in subsection 2.2, the bibliography is cluttered, etc. The author should carefully check and modify.
Answer: Thank you for the comments! Based on this comment, we have revised three grammatical errors in the last sentence of the abstract and improved the language of the whole paper carefully. For example, the tense in the abstract is changed with present. The words "presented" "gave "and "introduced" are changed into "present", "give" and "introduce", respectively. And also, for consistence, the tense of other part is changed. Our own work is mainly in the present tense. We use simple past tense when we cited the other related work. Meanwhile, we also change the other grammatical errors and unclear expressions in the other part of the paper. The details can be seen from the file named "Revised Manuscript with Track Changes".
We have checked and modified the bibliography of the whole paper carefully. The details can also be seen in the file named "Revised Manuscript with Track Changes".
The article lacks a comparison with other existing studies.
Answer: Thank you for the comments! Based on this comment, we have added a comparison with other existing studies (Line 463 to Line 503 of Page 14-15). The details can also be seen from the file named "Revised Manuscript with Track Changes". 6. The case in Section 5 is just a simple example, not a medical diagnosis in a realworld scenario, and provides little useful information in the Discussion section.
Answer: Thank you for the comments! Based on this comment, we have explained at the beginning of section 5. (Line 411 to Line 418 of Page 12-13).
We first explain that there is only a simple example, with just two diseases under consideration, to show the possibility of using this approach based on the Z-numbers soft set, not a medical diagnosis in a real-world scenario, which is similar to the existing studies [32,41]. We also explain the idea of applying this method in practice. On the one hand, in reality there are not only two choices to consider when estimating a preliminary diagnosis of disease, which could be improved by clinical results. In that case, the ranking results of the evaluation can help patients make the choice of registration ranking. On the other hand, the work in this paper can also help develop a platform for primary diagnosis.
In addition, we also have added a comparison with other existing studies in the discussion (Line 463 to Line 503 of Page 14-15). The details can also be seen from the file named "Revised Manuscript with Track Changes". 7. Please explain the similarities and differences between the "ZnSS" in the manuscript and the "Z-set" proposed in the research "Z-set Based Approach to Control System Design".
Answer: Thank you for the comments! Firstly, based on this comment, we have researched the article "Z-set Based Approach to Control System Design" and added it to the references ([67]) (Line 80 of Page 3).
Secondly, we have also added the description of similarities and differences (Line 90 to Line 96 of Page 3) between the "ZnSS" in the manuscript and the "Z-set" proposed in the research "Z-set Based Approach to Control System Design" as follows.
Both the Z-set and the ZnSS have significant potential for describing the uncertainty of human knowledge, because each consists of a restraint and a reliability of the measured value, given in detail. However, the ZnSS deals with nonparametric uncertainty and the reliability of information by combining the advantages of Molodtsov's soft set theory and Zadeh's Z-set. In other words, the ZnSS can not only measure the reliability of the information effectively, but is also free from the inadequacy of the parameterization tools of traditional mathematical tools.
Finally, we also explain the similarities and differences in other relevant parts of the paper, such as in the conclusion (Line 505 to Line 510 of Page 16). In our paper, we focus on combining the theory of soft sets and the theory of Z-numbers to handle the reliability of every judgment on all symptoms in detail. Z-numbers were proposed by Zadeh as a new way to deal with uncertainty and the reliability of information. Z-numbers can describe levels of human judgment in detail, and they have significant potential for describing the uncertainty of human knowledge because they consist of a restraint and a reliability of the measured value. In this paper, therefore, we try to construct the Z-numbers soft set to deal with nonparametric uncertainty and the reliability of information by combining the advantages of Molodtsov's soft set theory and Zadeh's concept of Z-numbers, which is the core purpose of this paper.
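The formal definitions of the ZnSS operations and the similarity measure are given in the revised paper and are not reproduced in this letter; purely as an illustrative assumption of the general idea, the toy Python sketch below models a Z-number as a (restraint, reliability) pair in [0, 1], a Z-numbers soft set as a mapping from symptoms to such pairs for one object (disease), and ranks candidate diseases with a naive component-wise similarity chosen only for demonstration.

```python
# Purely illustrative toy sketch; the paper's actual definitions and similarity
# measure are not reproduced here. A Z-number is modelled as a pair
# (restraint, reliability), both in [0, 1]; a ZnSS maps each symptom to such a
# pair for a given disease. The similarity below is a simple component-wise
# comparison chosen only for demonstration.

SYMPTOMS = ["temperature", "headache", "cough"]

# Hypothetical patient description and two candidate diseases.
patient = {"temperature": (0.8, 0.9), "headache": (0.6, 0.7), "cough": (0.2, 0.8)}
disease_a = {"temperature": (0.9, 0.8), "headache": (0.7, 0.9), "cough": (0.1, 0.9)}
disease_b = {"temperature": (0.3, 0.9), "headache": (0.2, 0.8), "cough": (0.9, 0.7)}


def toy_similarity(znss1, znss2):
    """Average closeness of restraint and reliability over all symptoms."""
    total = 0.0
    for s in SYMPTOMS:
        a_restr, a_rel = znss1[s]
        b_restr, b_rel = znss2[s]
        total += 1.0 - 0.5 * (abs(a_restr - b_restr) + abs(a_rel - b_rel))
    return total / len(SYMPTOMS)


for name, disease in [("disease A", disease_a), ("disease B", disease_b)]:
    print(name, round(toy_similarity(patient, disease), 3))
# The candidate disease with the higher similarity would be ranked first
# in a preliminary diagnosis.
```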
Reviewer #2: The authors use the Zadeh fuzzy number in diagnosis in medicine. The paper is solid and uses the soft set in its application. The authors should mention in the paper the extension of soft sets (for example, to plithogenic hypersoft sets).
Therefore, we will try to investigate these types of plithogenic hypersoft sets into the fields of medical diagnosis, decision making and so on in the future. Thank you for the suggestion again.
Reviewer #3: The paper proposes an approach to medical diagnosis that is used to estimate whether two patterns or images are identical or approximately so. In my opinion, this paper contains some interesting results which make a significant and technically sound contribution to the field. However, there are some issues that should be considered in the revised version.
Thank you very much for the suggestions and comments. We will answer them carefully one by one.
1.Abstract should be restated by adding the importance and contribution of the work.
Answer: Thank you for the comments! This comment is similar to comment 2 given by Reviewer #1. We have added more precise descriptions of the importance and contribution of the paper in the abstract. The details can be seen in the revised paper and in the answer to comment 2 given by Reviewer #1.
2. The Introduction section should be completely updated by adding the motivation, organization and novelty of the work. Answer: Thank you for the comments! This comment is similar to comment 3 given by Reviewer #1. We have added more descriptions of the motivation (Line 14 to Line 30 of Page 3, Line 47 to Line 50 of Page 3), novelty (Line 97 to Line 111 of Page 3) and organization (Line 112 to Line 119 of Page 3-4) of the work in the introduction section.
The details can also be seen from the file named "Revised Manuscript with Track Changes".
3.The author should add the last two-year reference related the proposed work and link the proposed work. Please update the reference and citation.
Answer: Thank you for the comments! Based on this comment, we have added recent references from the last two years (references [5][6][7][8], [10][11][12][13][14][15][16][17][18], [20], [25], [28][29][30]) related to the proposed work and linked them to it (Line 11 to Line 39 of Page 1). Meanwhile, we have updated the references and citations. The details can also be seen in the file named "Revised Manuscript with Track Changes".
4.Authors should use the common symbols which can be easily understandable and readable
Answer: Thank you for the comments! Based on this comment, common symbols are used in the revised version. The details can be seen from the file named "Revised Manuscript with Track Changes".
5. What effect does the use of the proposed model have in achieving the objectives of the research? It is suggested that the results of the proposed model be stated in the conclusion in full detail.
Answer: Thank you for the comments! Based on this comment, we have expressed the superiority of the proposed method in the conclusion with full detail (Line 518 to Line 543 of Page 16).
The advantages of the proposed method are summarized below.
(1) The traditional mathematical theories of uncertainty have developed greatly and their achievements have been widely applied in medical diagnosis. However, these traditional theories have their intrinsic difficulties, as pointed out by Molodtsov [19]. Soft set theory, proposed by Molodtsov, has been regarded as an effective mathematical tool to deal with uncertainty. The method presented in this paper, based on fuzzy extensions of soft set theory, can express the different kinds of fuzziness of medical diagnosis parameters effectively.
(2) Fuzzy numbers have been widely applied in medical diagnosis decision making. However, we found that the reliability of uncertain symptoms in medical diagnosis environments is also important. To address this, Z-numbers are used to model and describe decision-makers' diagnoses of uncertain symptoms; a Z-number combines a constraint with its reliability.
(3) When applying Z-numbers, we need an appropriate method to express the different kinds of fuzziness of medical diagnosis parameters and handle the reliability effectively. To address these problems, we combine soft sets and Z-numbers: we propose the notion of the Z-numbers soft set and treat the Z-numbers soft set as a whole, rather than converting the second component, to avoid the loss of symptom information.
(4) Similarity measures have extensive application in the area of disease recognition. Considering the reliability of the information involved in the process, a measure of similarity between two ZnSS is given in this paper to compare two Z-numbers soft sets, which supports the calculation of similarities used to reach the final diagnosis.
(5) In real diagnosis decision-making problems, the proposed method can obtain reasonable and effective results, as demonstrated by comparing the obtained results with those from existing methods. This method can also be applied to other multi-attribute decision-making problems.
6.In addition to expressing the superiority of the proposed method, its challenges need to be addressed.
Answer: Thank you for the comments! Based on this comment, we have added the challenges of the proposed method in the conclusion. (Line 547 to Line 560 of Page 16-17).
"Although the proposed approach has been demonstrated to be effective through illustrative examples and in-depth discussion, there are still some aspects and potential areas that can be improved in future studies. The difference in importance between two components of a Z-number in Z-number soft set, namely the assessment value and the reliability measure need to be studied further. In this paper, determine the weight of the two components remains an unresolved issue, though the importance of these two components should be different. Second, in the proposed method for compare two Znumbers soft set using similarity measure without considering the association between parameters. Third, the feasibility and effectiveness of the method are just verified by numerical examples rather than real professional medical knowledge in this paper. Therefore, the future study directions will include the parameterization reduction of Znumbers soft sets. It is also desirable to further explore the applications of using the Znumbers soft sets approach to solve real world specific problems in the process of decision making in medical diagnosis."
7.In the research methodology section, explain why this idea was proposed and what is its superiority over other methods?
Answer: Thank you for the comments! Based on this comment, we have added an explanation of why this idea was proposed and of its superiority over other methods in the research methodology section (Line 299 to Line 310 of Page 9).
"The majority of the decision-making process assumes that the decision maker's cognition for all aspects of a problem is the same. However, this is not quite true because of decision maker's inadequate experience, lack of knowledge, different risk preferences and so on. Therefore, it is quite essential to recognize the impact of the decision maker's cognition on the reliability of the information provided. We now develop an approach to solve this kind problem considering the reliability of fuzziness of problem parameters. For given objects(diseases) with certain attributes(symptoms), the Z-numbers soft set as a novel type of soft set can describe the uncertainty that not all the objects satisfy all attributes, but can show details cognitive information of one object satisfying the attribute. Because of the complexity of the real medical diagnosis, using only one aspect of information to describe uncertain events is fully difficult. A new decision-making method with new perspective is presented based on Z-numbers soft set to solve such practical decision-making problems in this section." 8.In the related works section, new resources will be used and implicitly referred to the challenges of previous methods.
Answer: Thank you for the comments! Based on this comment, we have improved the relevant analysis of why new resources are used, and implicitly referred to the challenges of previous methods, in the related works section.
"One of the challenges in the process of this kind medical diagnosis is about how to handle the uncertainty effectively to achieve more accurate decision-making…However, in the process of medical diagnosis, there may be many critical diseases, where experts do not have sufficient knowledge to handle those problems. For these cases, experts may provide their opinion only about certain aspects of the disease based on the symptoms they focused and remain silent for those unknown symptoms. For this kind of situation, the traditional uncertainty mathematical theories mentioned above have their intrinsic difficulties. Therefore, one purpose of this paper is to apply the advantage of soft set to deal with uncertainty to solve this kind of special situation in medical diagnosis. For one aspect, it is free from the inadequacy of the parameterization tools of traditional mathematical tools [20]. For another aspect, theory of soft sets can be used by combining other traditional uncertainty mathematical tools." (Line 14 to Line 30 of Page 2) "Another challenge in the process of this kind medical diagnosis is about how to measure reliability of the information effectively to achieve more accurate decision-making…However, these methods only can give whole reliability to all attributes, without giving the measuring reliability of every judgment or information in details. Znumbers were proposed by Zadeh as a new way to deal with uncertainty and reliability of information." (Line 47to Line 49 of Page 2) 9.It is necessary to state in the related works section, why the previous methods are not responsible for solving the research problem accurately and you need to use to a new method? | 2022-08-26T06:17:09.556Z | 2022-08-25T00:00:00.000 | {
"year": 2022,
"sha1": "f32906e5d72dcd1527e8f1897a7f62ca5600ee74",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2d2e85ae3304f6f88600dda161f07d8300d41ee9",
"s2fieldsofstudy": [
"Medicine",
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246351915 | pes2o/s2orc | v3-fos-license | The Common Complications of Chemotherapeutic Agents and the Effect of Xylitol Chewing Gum on Oral Dental Hygiene in Patients Having Malignant
Aims: The aims of this study were to estimate the percentage of general and oral complications among patients receiving chemotherapeutic agents, to correlate the oral complications with the age of the patients and with the drugs used either singly or in combination with other chemotherapeutic agents, and to assess the effect of xylitol chewing gum on oral dental hygiene in patients having malignant diseases. Materials and Method: In this clinical trial, 70 patients with ages ranging between 7 and 65 years were treated with different cancer chemotherapy regimens for durations from 3 months to 3 years. The patients were selected from those treated at the Hazim Al–Hafith center for cancer treatment in Mosul City. General and oral complications of the chemotherapeutic agents were recorded, together with whether the agent was used singly or in combination with other chemotherapeutic agents. Twenty patients from those who had oral complications were examined, and the plaque and gingival indices were measured according to Silness and Loe (1963) at baseline. Those patients were then instructed to take xylitol chewing gum (4 grams/day), in four doses taken immediately after eating. The plaque and gingival indices were measured again after 3 weeks of using the chewing gum. Results: The results of this study revealed that approximately half of the patients had both general and oral complications, while the others either had only general complications or had no complications (48.57%, 27.14% and 24.29%, respectively). The incidence of oral complications correlated with increasing patient age (P<0.01). The distribution of general and oral complications was correlated with the agent used either singly or in combination: 100% of patients medicated with single-agent therapy had general and oral complications, while among patients medicated with multiple-agent therapy, 70.69% had general and oral complications and 29.13% had no significant complications. The patients who had taken xylitol chewing gum had a significant reduction in the plaque index, while there was no significant reduction in the gingival index (p<0.01). Conclusions: The study concluded that the general and oral complications arising in cancer patients can be attributed to the various modalities of cancer chemotherapy. Routine oral hygiene and the elimination of preexisting dental disease and sources of mucosal irritation, together with the provision of salivary substitutes such as xylitol, reduce the incidence and severity of a number of oral complications of chemotherapy.
INTRODUCTION
Cancer chemotherapeutic agents are used clinically to destroy and suppress the growth and spread of malignant cells. (1,2) Most chemotherapeutic agents have common adverse effects including nausea, vomiting, diarrhea, alopecia, fever and allergic reactions. (3,5,7) Oral complications frequently encountered in patients receiving chemotherapy include mucositis, infection, hemorrhage and xerostomia. (4) Oral complications always occur as a consequence of treatment for head and neck cancer and may be unpleasant and even life-threatening. (6,7) Several factors play a role in the development of these problems, including the type of malignancy, patient age, type and dosage of chemotherapy and the level of oral hygiene before and during therapy. (8,9) Oral mucositis is a universal oral complication in patients receiving high-dose systemic cancer chemotherapy. (8,10) In addition, oral mucositis represents a significant risk factor for systemic infection, particularly in neutropenic patients; consequent protraction or termination of chemotherapy may lead to treatment failure and result in increased therapeutic expenses. (11) Xerostomia is a common complication in patients receiving cancer chemotherapy and radiation therapy. (12) Cancer chemotherapy can cause a decrease in salivary secretion, which is usually much less severe and transient. (13) Furthermore, dry mouth results in tissue with reduced barrier function, which contributes to increased mucosal irritation and infections. (14) Oral infections associated with cancer therapy can be caused by fungal, viral and bacterial organisms. (4) These infections can cause tissue damage directly or increase the damage due to secondary infection of oral mucositis. (15) The effects of chemotherapy on the bone marrow and oral flora, coupled with the patient's immunosuppressed state and altered oral microbial flora, predispose these patients to oral mucositis, infection and hemorrhage. (16) In addition, chemotherapeutic agents may secondarily induce thrombocytopenia, which is the usual cause of intra-oral hemorrhage. (17) Oral hygiene is an important factor in the therapy of the oral mucosa. (18) The care of oral health plays a role in preventing and reducing the severity of oral complications. (19) The effectiveness of oral hygiene can be improved by cleaning the oral cavity with different forms of lotions and mouth washes and by using a variety of products, together with some form of pain relief, anti-inflammatory treatment as required and aggressive antimicrobial treatment for any new mouth infection. (2,10) Sugar-free xylitol gum is a chewing gum made with xylitol, a popular sweetener that looks and tastes like sugar. (20) The beneficial effects of xylitol on oral health include reducing the quantity of plaque and the number of bacteria that cause tooth decay. (21)(22)(23)(24) Furthermore, sugar-free gum is a beneficial means of saliva stimulation for people suffering from xerostomia. (25) The aims of this study were to estimate the percentage of general and oral complications among patients receiving chemotherapy, to correlate the oral complications with patient age and with the agent used either singly or in combination, and to compare plaque formation and gingival inflammation before and after taking xylitol chewing gum.
MATERIAL AND METHODS
Seventy patients participated in this study; their ages ranged between 7 and 65 years. All patients were treated with different chemotherapeutic agents, either as single or multiple therapy, including methotrexate, cyclophosphamide, 5-fluorouracil, bleomycin, doxorubicin, cisplatin and paclitaxel.
A special case sheet was designed for each patient containing the following data: patient age, type of malignancy, type of chemotherapy, dosage, duration, general and oral complications, and measurements of plaque and gingival indices before and after treatment with xylitol chewing gum. The study was conducted at the Hazim Al-Hafith center for cancer treatment in Mosul City during the period from February 2006 to July 2006.
Oral examination was performed to look for signs of complications, and patients were asked about any symptoms of oral and general complications of therapy. 48.57% of patients had general and oral complications of the therapy, and 27.14% of patients had only general complications, including nausea, vomiting, diarrhea, alopecia and allergy. Oral complications included xerostomia, mucositis, infection and hemorrhage. The remaining 24.29% of patients had no significant complications according to the intra-oral examination and patient questionnaire.
About 58.82% of the patients who had oral complications were examined, and plaque and gingival indices were measured according to Loe and Silness (1963). The results for the plaque index were recorded as the occurrence of plaque: grade zero (no plaque in the gingival area); grade 1 (a film of plaque adherent to the gingival margin and the adjacent area of the tooth; the plaque may only be recognized by running a probe across the tooth surface); grade 2 (a moderate accumulation of soft deposit within the gingival pocket or on the tooth and gingival margin, which can be seen with the naked eye); grade 3 (a heavy accumulation of soft deposit within the gingival pocket and/or on the tooth and gingival margin). The estimation of the gingival index is the same as for the plaque index. The criteria for the gingival index include grade zero (normal gingiva); grade 1 (mild inflammation, slight change in colour, slight oedema, no bleeding on probing); grade 2 (moderate inflammation, redness, oedema, bleeding of the gum on probing); grade 3 (severe inflammation, marked redness, oedema, ulceration and a tendency for spontaneous bleeding).
Those 58.82% of patients were asked to take xylitol chewing gum (4 grams/day), four times immediately after eating, and were instructed to brush their teeth during treatment. Plaque and gingival indices were taken at the baseline and after three weeks of using the gum.
Statistical analysis in this study included descriptive statistics, that is, calculation of frequencies and percentages. The Chi-square test was used, with the level of significance set at p<0.05. A paired t-test was used to compare the effect of xylitol chewing gum on plaque and gingival indices before and after treatment (p<0.05).
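As a rough illustration of the analysis described above, the following Python sketch (using SciPy, with placeholder numbers rather than the study data) shows how a chi-square test on complication frequencies and a paired t-test on pre/post plaque indices could be run; the variable names and values are ours, not the authors'.

```python
# Minimal sketch of the statistical tests described above (SciPy).
# The numbers below are placeholders, not the study data.
import numpy as np
from scipy import stats

# Chi-square test on observed frequencies of complication categories.
observed = np.array([34, 19, 17])          # e.g., counts per category
chi2, p_chi2 = stats.chisquare(observed)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

# Paired t-test comparing the plaque index before and after the chewing-gum period.
plaque_before = np.array([1.8, 2.1, 1.6, 2.4, 1.9])
plaque_after = np.array([1.2, 1.5, 1.1, 1.8, 1.3])
t, p_t = stats.ttest_rel(plaque_before, plaque_after)
print(f"paired t = {t:.2f}, p = {p_t:.3f}, significant at 0.05: {p_t < 0.05}")
```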
RESULTS
The patients in this study either had oral and general complications or had only general complications or had no complications.
The percentage distribution of complications of chemotherapy is shown in Figure (1). The percentage distribution of some common general complications among the studied group is illustrated in Figure (2), where 60.38% had nausea and vomiting, 9.43% had diarrhea, 20.76% had alopecia and 9.43% had allergy (p< 0.001).
The percentage distribution of oral complications among the studied group is represented in Figure (3). The differences in plaque and gingival indices before and after taking xylitol chewing gum, in the 35% of patients who came for follow-up out of the 58.82% of patients instructed to take the chewing gum, are illustrated in Table (1), where there was a significant reduction in the plaque index after taking the gum (p<
DISCUSSION
Patients receiving cancer chemotherapy often suffer from oral and general complications as a result of their disease and its treatment. (16) Oral complications remain the dose-limiting toxicity of a variety of chemotherapeutic regimens and may result in significant morbidity, impaired nutrition, treatment delays and dose reductions. (27,28) In this study the common general complications were gastrointestinal problems. This may be explained by the fact that the gastrointestinal tract is a tissue that is rapidly turning over, and the sloughing of the gastrointestinal mucosa can produce many disturbances such as nausea, vomiting, and hemorrhagic diarrhea. (1) In addition, alopecia occurs to a lesser or greater extent during therapy with antineoplastic agents, but the hair usually regrows when therapy is discontinued. (3) Moreover, some antineoplastic agents such as cisplatin, paclitaxel and interferons cause hypersensitivity ranging from skin rash to anaphylaxis, because these drugs include foreign particles that cause allergic reactions. Therefore, these patients require premedication with dexamethasone to overcome the allergic reaction. (2) Acute and chronic complications of oral tissues and changes in physiologic processes frequently accompany cancer therapies. (8) The initial effect of chemotherapy is on the rapidly proliferating cells of the oral epithelium. As a consequence the epithelium may show atrophy and ulceration. (14) In this study xerostomia was the most common oral problem associated with chemotherapy; this is explained by the fact that cytotoxic agents affect salivation by different mechanisms, including alterations in flow rate, electrolyte balance and sometimes salivary function, which are generally reversible and transient. (12,13) Therefore, the patients were instructed to rinse their mouths frequently and were given sugarless or xylitol chewing gum to stimulate salivation, which acts as a protective layer to minimize dental plaque and caries and to minimize bacterial and fungal infections. (21,22) In this study 11.77% of patients suffered from mucositis, inflammation of the oral cavity and ulceration. This may be due to many factors that affect the oral cavity, such as the high dose of chemotherapy given to the patients, the age of the patients, the type of chemotherapy, changes in the oral environment and xerostomia caused by chemotherapy. (6) These results were in agreement with other studies that described mucositis as a dose-limiting complication in patients receiving chemotherapy, bone marrow transplantation and local irradiation for tumours in the head and neck area. (11) In addition, the oral mucosa is comprised of membranes with a high mitotic index and rapid epithelial turnover and maturation rates; this causes the mucosa to be vulnerable to the adverse effects of chemotherapy. (4) Fungal infection was encountered in 8.83% of patients at the palate and buccal mucosa. This may be explained by the fact that indirect effects of chemotherapy may include granulocytopenia with reduced salivary secretion, so that the protective mucin coating of the epithelium is compromised; these changes result in tissue with reduced barrier function and impaired ability to heal and to resist entry of pathogens, thus increasing the risk of systemic infection. (14,16) Moreover, oral bleeding was manifested in 2.94% of patients with malignant disease due to thrombocytopenia resulting from chemotherapy-induced marrow suppression. (16,17) Oral complications in this study showed an incremental increase with age.
This result was in agreement with several studies which demonstrated that the potential for developing oral complications increases with age, type of malignancy, nutritional state and the level of oral health before and during therapy. (4,6,9) In this study all patients medicated with single therapy developed general and oral complications, and more than half of the patients who received multiple chemotherapy also developed general and oral complications. These results were supported by another study (28) which showed that the frequency and severity of complications are related to such factors as whether an agent is used singly or in combination, the dose and schedule of drug administration and the degree of myelosuppression, and that the administration of many chemotherapy regimens may be complicated by toxicities that limit the clinician's ability to deliver the most effective doses of active agents. (29) However, the 24.29% of patients who had no complications suffered from non-head and neck malignancies and received low doses of multiple chemotherapeutic agents. This result was in agreement with other studies (6,9) which reported that approximately 40% of patients with non-head and neck malignancies developed oral problems following exposure to chemotherapy. Of the 48.57% of patients who had oral complications, 58.82% were educated and were receiving chemotherapy continuously; they were examined, plaque and gingival indices were measured as a baseline, and xylitol chewing gum was given. Only 35% of patients came for follow-up after three weeks of taking the gum and had their plaque and gingival indices measured, because some of them stopped the drugs and others changed the time of receiving the drugs. The results showed that there were significant differences in the plaque index after taking the gum. These results were consistent with other studies which reported that xylitol reduces the quantity of plaque and insoluble carbohydrates. Hence, the resulting plaque is less adhesive on the teeth, enabling easier removal through the stimulation of saliva by chewing gum. (21,30) Moreover, the unique beneficial effects of xylitol on oral health are largely due to its 5-carbon chemical structure, which is not recognized by mutans streptococci, the oral bacteria that cause tooth decay. (21,22,24)
CONCLUSIONS
Individuals undergoing cancer therapy may be at risk for a wide variety of complications that can affect morbidity and mortality. Acute oral complications associated with cancer chemotherapy are a frequent and potentially serious problem. Therefore, pretreatment oral assessment of these patients is an opportunity to identify and eliminate potential sources of sepsis and irritation. Routine oral hygiene has been accepted as an important component of oral care protocols for patients receiving cancer therapy, and patients should be motivated toward good oral hygiene both in their hospital and out-of-hospital surroundings.
"year": 2007,
"sha1": "70003bcf20a89787ca457ee25b56c8eabc47bc30",
"oa_license": "CCBY",
"oa_url": "https://rden.mosuljournals.com/article_164414_175adc328e67803743d07877d2ad3dfc.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cd601475d202f5d42671ef8ba9d504b68d5675a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
271526223 | pes2o/s2orc | v3-fos-license | Delayed Aspiration of Cerebrospinal Fluid From a Thoracic Epidural Catheter After Difficult Placement: A Case Report
A 69-year-old female with Crohn’s disease was admitted for open ileocecectomy with lysis of adhesions. The plan was to proceed with general endotracheal anesthesia and a thoracic epidural catheter for perioperative analgesia. Epidural access was attempted at the T10-11 and T11-12 interspaces, both of which resulted in accidental dural punctures. On the third attempt, the epidural catheter was inserted at the T9-10 interspace. Both the aspiration and test dose were negative. Thirty minutes later, after induction of general anesthesia, the catheter was again aspirated before the epidural pump was connected. Freely flowing, glucose-positive fluid was obtained, and the catheter was removed for the patient’s safety. This case suggests that accidental dural puncture may be a risk factor for inappropriate communication with the subarachnoid space. This can be assumed to increase the risk of unanticipated high or total spinal block and its life-threatening sequelae.
Introduction
Aspiration of epidural catheters after placement is a common method used to rule out intrathecal and intravascular placement [1]. Aspiration is considered negative when fluid, whether it be blood or cerebrospinal fluid (CSF), cannot be aspirated freely from the catheter. In an awake, cooperative patient, negative aspiration is typically followed by a test dose of lidocaine with epinephrine [2]. A test dose is considered negative when the patient denies symptoms of intravascular local anesthetic injection (perioral paresthesia, tinnitus, metallic taste), there are no signs of intravascular epinephrine injection (increases in blood pressure and heart rate), and there are no signs of intrathecal local anesthetic injection (sudden weakness, slurred speech, unexpectedly high sensory block) [2,3].
Epidural catheters carry the possibility of a rare but serious complication: migration [4][5][6]. Catheter migration can lead to intravascular, subdural, or subarachnoid cannulation. A catheter that has migrated to the subarachnoid space can be diagnosed by aspiration of CSF from the catheter. However, migration of the catheter can occur after an initially negative aspiration and test dose and therefore can go unnoticed [4,5]. This can lead to potentially life-threatening complications due to high doses of local anesthetic being placed in the subarachnoid space, leading to high or total spinal block [7,8].
Usually, when freely flowing CSF is aspirated from the catheter, it is assumed to be intrathecal [1]. However, CSF can sometimes be aspirated from the epidural space, especially if there is a defect in the dura [1,9]. This is a poorly described phenomenon and can lead the anesthesiologist to misinterpret the position of an epidural catheter. There are no universal guidelines for troubleshooting the catheter in such a situation.
In our case, a difficult thoracic epidural catheter placement after two accidental dural punctures (ADPs) was thought to be successful due to negative aspiration and test dose. However, repeat aspiration half an hour later was positive for freely flowing, glucose-positive fluid. The anesthesia team had no way to safely delineate the position of the catheter without delaying the case, and the catheter was removed for the patient's safety. This case highlights the diagnostic ambiguity of a delayed positive epidural catheter aspiration test, as well as the need for guidelines in such circumstances.
Case Presentation
A 69-year-old female, with American Society of Anesthesiologists (ASA) class II, body mass index (BMI) of 21.47 kg/m2, and Crohn's disease, was admitted for open ileocecectomy with lysis of adhesions.She had a remote history of ileocecal resection complicated by anastomotic fibrotic stricture and chronic abdominal pain.Other medical history included gastroesophageal reflux disease (GERD), hiatal hernia, and glaucoma.The plan was to proceed with general endotracheal anesthesia with a thoracic epidural catheter placed preoperatively to help manage perioperative pain.
In the preoperative holding area, consent was obtained for thoracic epidural placement, and the preprocedure timeout was completed.The patient was connected to standard ASA monitors and received 25 mcg of fentanyl and 1 mg of midazolam.With the patient in a sitting position, the skin was anesthetized with 3 cc 1% lidocaine.Using an 18G, 9 cm Tuohy needle (Braun Medical, Bethlehem, PA) and the loss of resistance technique with normal saline, the epidural was attempted using a midline approach.The sterile technique was used throughout the procedure.Loss of resistance occurred at a needle insertion depth of 4 cm.The first two attempts at the T10-11 and T11-12 interspaces were unsuccessful and resulted in a freely flowing CSF return through the needle hub.No blood loss or paresthesia were noted at any time.
On the third attempt, a 20G closed-tip epidural catheter (Braun Medical, Bethlehem, PA) was inserted at the T9-10 interspace to a length of 9 cm at the skin.Aspiration was negative at this time.A test dose injection of 3 cc lidocaine 1.5% with 1:200,000 epinephrine was administered through the catheter, which did not elicit a subjective response from the patient.She denied tinnitus, perioral paresthesia, metallic taste, and weakness of the lower extremities.Objectively, no changes in her vital signs were noted, including blood pressure and heart rate.The catheter was then secured at a length of 9 cm at the skin.Mastisol (Eloquest Healthcare, Ferndale, MI) was applied to the skin around the insertion site and catheter, followed by a sterile transparent dressing, thus preventing any manipulation of the catheter's position by traction outside of the body.The edges of the dressing were also reinforced with silk tape.The patient was moved back into a supine position with the head of the bed at 30 degrees.
Thirty minutes later, the patient was transferred from the preoperative area to the operating room (OR).The anesthesia team that would be present for the operation was made aware that there had been difficulty with placing the epidural but that the initial test dose had been negative.General anesthesia was achieved without any issues.The epidural catheter was then reassessed before attaching the epidural pump, which revealed catheter position was still 9 cm at the skin.However, aspiration of the catheter yielded a clear liquid, which flowed freely through the catheter.Aspiration was halted after 2 cc's were easily obtained.Due to concern for possible unintentional high or total spinal block, the decision was made to remove the catheter.The tip was confirmed to be intact upon removal.The patient underwent surgery without complications.This fluid was sent to the lab and was later confirmed to be CSF by an elevated glucose level of 64 mg/dL.Following extubation, she was moved to the post-anesthesia care unit (PACU) and recovered from anesthesia without any issues.Her pain was managed by IV opioids as needed, together with bilateral transversus abdominis plane (TAP) blocks, using 20 mL of bupivacaine 0.25% and 4 mg dexamethasone injected on each side.She tolerated the blocks well.Postoperatively, her pain was managed with oral opioids as needed and adjuncts.Of note, the patient developed a mild diffuse headache and subjective bilateral hearing loss postoperatively, which we attributed to the dural punctures.The headache and hearing loss resolved within 48 hours, and there were no residual neurologic symptoms.
Discussion
ADP is a common complication of epidural catheter placement, occurring in 0.4-6.0% of patients [10].Dural puncture is often diagnosed by the appearance of clear fluid flowing from the needle hub when advancing the needle.Factors that place patients at higher risk of ADPs in the thoracic spine are not well studied but are thought to include conditions that distort the normal curvature of the spine, such as scoliosis.It has also been shown that the incidence of ADPs is highest during repeated attempts for epidural placement [10].
Once an ADP occurs, there are three standard options: (1) place the catheter intrathecally; (2) attempt epidural access at a different intervertebral space; or (3) elect to forego neuraxial anesthesia entirely [11].Placing an epidural catheter into the intrathecal space after ADP has several purported benefits: rapid initiation of analgesia; the avoidance of need for further attempts to achieve epidural analgesia and possible repeat accidental dural puncture; and the potential reduction of post-dural puncture headache (PDPH) [7,10,12].The bulk of the literature supporting these claims comes from studies on lumbar epidurals for obstetric patients.There is limited literature describing the placement of intrathecal catheters following ADP in the thoracic spine [13].In our case, the risk of high or total spinal block and its associated complications outweighed the benefits of thoracic spinal anesthesia.Moreover, our institution does not have protocols in place to manage thoracic spinal catheters.Therefore, we elected to re-site the epidural catheter at another intervertebral level.
Following ADP, the incidence of PDPH could be as high as 76-85% [14].Symptoms of PDPH include headaches typically focused in the frontal-occipital distribution, worsened by sitting or standing and alleviated by lying down.Additional symptoms include neck stiffness, tinnitus, hypoacusia, photophobia, and nausea.The mechanism of the condition is suspected to be due to reduced CSF pressure due to leakage of CSF from the intrathecal space [14].As in our patient, these symptoms typically resolve within a week.Our patient's development of PDPH supports the notion that there was a significant leak of CSF into the epidural space.
In our case, initial aspiration and test dose were negative, but freely flowing CSF was aspirated in the OR only half an hour later.There are several distinct possibilities that may explain this phenomenon.These possibilities include the following: (1) catheter tip migration from the epidural or subdural space into the subarachnoid space; (2) intrathecal placement with initial false negative aspiration and a false negative test dose; (3) filling of the epidural space with CSF, which surrounded the catheter; or (4) the catheter tip migrating to lie adjacent to a dural defect within the epidural space.
Epidural catheter migration into intravascular, subdural, or subarachnoid space is a rare complication of epidural anesthesia.Epidural catheter migration is difficult to detect because of a wide range of potential symptoms and a lack of diagnostic guidelines.If our epidural catheter did, in fact, migrate into the subarachnoid space, intrathecal injection of epidural-dose local anesthetic and narcotic could have resulted in respiratory arrest and cardiovascular collapse due to a high or total spinal block [6][7][8].
If the catheter was placed into the intrathecal space initially, it is possible that the lack of response to the test dose was a false negative.However, with our standardized follow-up questions after administering the test dose and the patient's ability to raise her legs to get back into bed, the sensitivity approaches 100%.Therefore, the test dose is excellent for ruling out intrathecal placement [3].On the other hand, the test dose has been criticized for its poor specificity, resulting in many false positives [2].This is a topic for another discussion.In our case, the aspiration test preceding the test dose was also negative; this may have been due to tissue blocking flow through the catheter, or the catheter was not intrathecal.To our knowledge, there is no data regarding the sensitivity of aspiration for detecting intrathecal cannulation.However, in obstetric patients, the rate of accidental intrathecal injection after negative aspiration is rare, estimated to occur in 0.0008-0.06% of patients [2].Both the sensitivity of the test dose and the rarity of intrathecal cannulation after negative aspiration led us to believe that our catheter was not initially placed into the intrathecal space.
Another possibility is that the catheter was seated properly in the epidural space, hence the initial negative aspiration.Due to the multiple ADPs, the epidural space filled with CSF as it leaked from the subarachnoid space, opening the epidural space around the catheter.This could theoretically lead to a positive subsequent aspiration test [15].Similarly, if the catheter tip migrated to overlie the dural defects, aspiration may pull fluid directly from the intrathecal space through these defects.The minimal resistance to flow observed would suggest a significant leak, and administration of anesthetic through this catheter could theoretically result in spinal anesthesia by diffusion of epidural anesthetic into the intrathecal space through dural defects.
There are only a few articles describing the aspiration of CSF from the epidural space [1,9,15].One case series describes positive aspiration from epidural catheters after dural puncture for combined spinal-epidural (CSE) anesthesia, where 0.75 cc of freely flowing CSF was aspirated in two cases.Postoperatively, the anesthesia providers cautiously administered spinal-dose anesthetic into the catheters and titrated to effect.To achieve adequate analgesia, they required epidural doses of anesthetic.In short, the catheters were successfully used for epidural anesthesia despite the positive aspiration of CSF.Later, the catheter positions were radiographically confirmed to be in the epidural space, demonstrating that CSF can be aspirated from a properly seated epidural catheter after a dural puncture [9].The CSE cases differ from ours because their aspiration attempt was positive on initial evaluation of the catheter, and the dural puncture was intentional and with a smaller gauge needle.Additionally, their epidural catheters were placed in the lumbar spine.
Another relevant case report described failed spinal anesthesia after multiple attempts to place an epidural catheter [15].In their case, epidural catheter placement was aborted after two ADPs.Spinal anesthesia was then attempted as an alternative but failed despite the robust return of CSF through the spinal needle prior to administration of anesthetic.The authors hypothesized that their spinal anesthetic may have failed for two reasons: either the anesthetic was administered intrathecally and leaked out into the epidural space through the dural defects from ADPs, or the spinal anesthetic was administered into the epidural space, which they mistook for the subarachnoid space due to return of CSF [15].The latter hypothesis supports the notion that ADPs can result in significant leakage of CSF into the epidural space.
A CT myelogram with administration of contrast through the catheter can be utilized to delineate the location of the catheter tip [16].This was not a viable option for our case as the patient's surgery would have needed to be canceled due to time limitations.A case report with a similar clinical scenario as our case described the use of ultrasound pulsed-wave Doppler to localize an epidural catheter tip [17].This is a promising technique, but more research is required to determine its utility.Without radiographic evidence, an epidural catheter with positive aspiration of glucose-positive fluid should be treated as intrathecal until proven otherwise.
Theoretically, if our patient were awake, we could have attempted to administer spinal-dose anesthetic through our epidural catheter and monitor for a response, titrating to effect [8,9].However, our patient was already under general anesthesia when positive aspiration was discovered.As mentioned above, our institution does not have protocols in place for managing thoracic spinal catheters.Because continuous spinal anesthesia is not commonplace at our institution, we felt that the risk of a drug dose error was too great if we left the catheter intrathecally.Had epidural-dose anesthetic been delivered intrathecally in our frail patient, cardio-respiratory arrest would likely have occurred among other catastrophic sequelae [7].
Conclusions
In our case, we postulate that multiple dural defects from ADPs may have caused significant leakage of CSF into the epidural space, resulting in positive aspiration of CSF from the catheter.Alternatively, epidural catheter migration may have occurred due to a change in patient position, diverting the epidural catheter over or into a defect in the dura from the previous insertion attempts.Finally, the catheter may have been initially inserted into the intrathecal space, and both aspiration and test dose were negative for an unknown reason.Regardless of the cause, we elected to remove the catheter due to uncertainty of its positioning, a lack of institutional protocols for managing thoracic spinal catheters, and concern that a drug dose error may occur if the catheter was left in place.Our case suggests that ADPs may be a risk factor for inappropriate communication with the subarachnoid space, which may not be apparent in the initial evaluation of the epidural catheter.This can be assumed to increase the risk of unanticipated high or total spinal block and its life-threatening sequelae. | 2024-07-29T15:09:31.822Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "71f2f83e10d295503861683c58b481e0e5a4f5e7",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/249283/20240727-1030311-xxq9jt.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f9981c3e7d0e3342698541931887479330ecc2f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39542382 | pes2o/s2orc | v3-fos-license | Adaptive Handover Decision Algorithm Based on Multi-Influence Factors through Carrier Aggregation Implementation in LTE-Advanced System
Although the Long Term Evolution Advanced (LTE-Advanced) system has benefited from Carrier Aggregation (CA) technology, the advent of CA has increased the handover probability during user mobility, which increases the user's throughput degradation and outage probability. Therefore, a handover decision algorithm must be designed properly in order to contribute effectively to reducing this phenomenon. In this paper, a Multi-Influence Factors Adaptive Handover Decision Algorithm (MIF-AHODA) is proposed for CA implementation in the LTE-Advanced system. MIF-AHODA adaptively makes handover decisions based on different decision algorithms, which are selected according to the handover scenario type and resource availability. Simulation results show that MIF-AHODA outperforms the other considered algorithms from the literature by average gains of 8.3 dB, 46%, and 51% in terms of SINR, cell-edge spectral efficiency, and outage probability reduction, respectively.
Introduction
In mobile wireless systems, there are several handover decision algorithms (HODAs) which have been proposed based on different parameters such as (i) Received Signal Strength (RSS), (ii) RSS with a threshold, (iii) RSS with hysteresis, (iv) RSS with hysteresis and threshold (parameters (i) to (iv) are discussed in detail from Pollini) [1], (v) RSS with hysteresis and distance [2], (vi) Signal-to-Interference-plus-Noise-Ratio (SINR) [3], and (vii) Interference-to-Interference-plus-Noise-Ratio (IINR) [4].All of these HODAs have been proposed for the purpose of taking an intact handover decision in order to enhance system performance through the user's mobility.However, in [1,3,4], all the HODAs are taken based on a single parameter, while there are other influencing factors which have not been considered.That leads to taking nonintact handover decisions, which in turn degrades a user's throughput and increases its outage probability.Thus, the communication efficiency between the user and serving network is negatively affected.In [2], HODA is taken based on multiple factors, but there are other influencing factors that have not been considered such as the interferences, noise, and resource availability.These effectively impact system performance.Furthermore, the advent of CA technology has added a new handover scenario, which can be performed between the serving component carriers (CCs) under the same sector and the same evolved node B (eNB) to change the primary component carriers (PCCs).This leads to increased handover probability, which in turn leads to increased throughput degradation and user outage probability.This type of handover scenario can be reduced as long as the serving PCC provides acceptable RSS to the served user equipment (UE).Therefore, more efficient HODA is needed, which should contribute for reducing user throughput degradation and high outage probability.
In this paper, MIF-AHODA is proposed in order to provide a seamless handover process through CA implementation in the LTE-Advanced system. MIF-AHODA automatically adapts the handover decision algorithm according to the handover scenario type and the resource availability.
Related Work
HODA is an essential step of the handover procedure in cellular wireless networks.It should be designed carefully in order to take an intact and a proper handover decision to the suitable target cell.That provides a seamless connection between the UE and serving eNB through its roaming within the cells.Anyway, handover decision is taken by the serving eNB based on the measurement report (MR) that is received from the served UE.MR contains the signals levels list of specific neighbor cells, and it can contain other information based on the implemented HODA.However, there are several HODAs that have been proposed [1][2][3][4] based on different parameters, such as HODA based on RSS [1], RSS and distance [2], SINR [3], and IINR [4] with considering the hysteresis level.All these HODAs aim to enhance system performance through the user's mobility within the cells.
In [1], a handover decision algorithm based on the Received Signal Strength (HODA-RSS) is proposed. The algorithm triggers handover once the target RSS (RSS_T) level becomes sufficiently stronger than the serving RSS (RSS_S) by a handover margin level (HOM_RSS) in dB. That algorithm can be simplified as RSS_T > RSS_S + HOM_RSS. In [2], a handover decision algorithm based on distance and relative Received Signal Strength (HODA-D-RSS) has been proposed for a log-normal fading environment. The handover decision output becomes true and the handover procedure is initiated once the two following conditions are met: (i) the measured distance between the user and the target eNB becomes less than that between the user and the serving eNB by a certain threshold distance, and (ii) the average target RSS becomes stronger than that received from the serving eNB by a given hysteresis level. That HODA can be simplified as Dis_T < Dis_S − D_HOM and RSS_T > RSS_S + HOM_RSS, where Dis_T and Dis_S represent the distance from the user to the target and serving eNBs, respectively, while D_HOM is the distance margin level.
In [3], a handover decision algorithm has been designed utilizing the SINR (HODA-SINR) as the control parameter for taking the handover decision. The algorithm allows the served user to trigger the handover once the target SINR quality (SINR_T) becomes sufficiently better than the serving SINR quality (SINR_S) by a certain hysteresis margin level (HOM_SINR). For simplicity, this algorithm can be represented as SINR_T > SINR_S + HOM_SINR, where SINR_T and SINR_S represent the SINR of the target and serving cells, respectively, while HOM_SINR represents the hysteresis SINR margin level in dB.
In [4], an optimal handover decision algorithm is proposed based on the Interference-to-other-Interferences-plus-Noise Ratio (IINR) parameter (HODA-IINR). It is designed from the perspective of throughput enhancement by considering two handover schemes, Fast Cell Selection (FCS) and Soft Handover (SHO). When FCS is considered, the proposed HODA is represented by SINR_S − IINR_T < −1, where SINR_S represents the SINR from the serving eNB, while IINR_T represents the IINR from the target eNB. In the other case, when SHO is considered, the proposed HODA is represented by SINR_S − IINR_T < 0. However, that HODA decides to perform handover only when a throughput gain exists.
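For illustration, the four baseline decision rules summarized above can be sketched as simple predicates. This is a hedged reading of the cited algorithms rather than their exact formulations; the variable names (hom_rss, hom_sinr, d_margin, fcs) are ours, and all quantities are assumed to be in dB (or meters for the distances).

```python
# Sketch of the four baseline handover decision rules summarized above.
# Variable names are illustrative, not taken from the cited papers.

def hoda_rss(rss_t, rss_s, hom_rss):
    """HODA-RSS [1]: hand over when the target RSS exceeds the serving RSS by the margin."""
    return rss_t > rss_s + hom_rss

def hoda_d_rss(rss_t, rss_s, hom_rss, dis_t, dis_s, d_margin):
    """HODA-D-RSS [2]: both the distance and the relative-RSS conditions must hold."""
    return (dis_t < dis_s - d_margin) and (rss_t > rss_s + hom_rss)

def hoda_sinr(sinr_t, sinr_s, hom_sinr):
    """HODA-SINR [3]: hand over when the target SINR exceeds the serving SINR by the hysteresis."""
    return sinr_t > sinr_s + hom_sinr

def hoda_iinr(sinr_s, iinr_t, fcs=True):
    """HODA-IINR [4]: threshold of -1 dB for FCS, 0 dB for soft handover (SHO)."""
    return (sinr_s - iinr_t) < (-1.0 if fcs else 0.0)
```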
These four HODAs take the handover decision based on single parameters (i.e., RSS, distance, SINR, and IINR). So, they cannot always give a proper handover decision, because several influencing factors are not considered, such as the channel condition, Rayleigh fading, interference, noise, and traffic loads. Also, the handover scenario should be considered due to the additional scenario added by the CA technique, which is explained in the following section. Therefore, a new handover decision algorithm is needed when CA is considered in the LTE-Advanced system.
Handover with CA Technique
The advent of CA technique in LTE-Advanced system increases the number of aggregated CCs that can be deployed at one eNB and assigned to one UE simultaneously.These CCs are classified into two different types.The first one is known as a PCC, while the second type of CCs is called a SCC [5,6].
The PCC is the carrier that always remains active during the active-mode operation of the UE. It should provide full cell coverage among the active adjacent CCs or provide the best signal quality over all the active CCs [6,7]. The PCC is normally used for exchanging control signaling messages and traffic data between a UE and an eNB. It is also used for the random access procedure and the allocation of the SCC. In addition, Radio Link Failure (RLF) is recorded when the radio link connection over the PCC fails, and then the Radio Resource Control (RRC) reestablishment procedure is triggered over the PCC as well. Also, the Non-access Stratum (NAS) recovery procedure is triggered if the RRC reestablishment procedure over the PCC fails within the T310 period of time (T310 is the maximum allowed time for recovering the connection through the RRC reestablishment procedure) [5,8].
The UE in LTE-Advanced system release 10 and release 11 (rel.10 and rel.11) can be configured with only one CC among the plurality of assigned CCs as a PCC.At the beginning, when the UE sets up the connection to the serving network the PCC is automatically selected by the serving eNB.If only one CC is assigned to the UE, it is configured as a PCC.Otherwise, when several CCs are paired to one UE, one CC among the plural active carriers must be configured as a PCC, while the rest of active CCs should be configured as SCCs [9].In addition, the configured PCC may be selected from fully configured CCs, rather than being fixed to a particular CC [5].The selected PCC can differ between UEs which are served by the same eNB.In other words, one CC (i.e., CC1) can be configured as a PCC for UE1 and configured as a SCC for UE2 as illustrated in Figure 1 [8].
The SCC is an additional component carrier that can be configured and activated by the eNB when the UE requests a wider bandwidth, in order to provide a higher data rate to the served UE. In other words, the SCC is used for providing additional resources to the served UE, while it cannot be used for exchanging control signaling messages between a UE and an eNB. However, the SCC can be activated or deactivated according to special conditions, which can be specified according to the UE's request or according to the instructions of the eNB [5].
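As a minimal sketch of the PCC/SCC configuration rule described above (the CC with the best measured quality becomes the PCC and the remaining active CCs become SCCs), the following Python snippet illustrates the idea; the function and field names are ours and not part of the 3GPP specification.

```python
# Illustrative per-UE PCC/SCC configuration, following the description above:
# the CC with the best measured signal quality is configured as the PCC and the
# remaining active CCs as SCCs. Names and structures are ours.

def configure_carriers(cc_quality):
    """cc_quality: dict mapping CC id -> measured quality (e.g., SINR in dB) for one UE."""
    if not cc_quality:
        raise ValueError("UE must be assigned at least one component carrier")
    pcc = max(cc_quality, key=cc_quality.get)      # best-quality CC becomes the PCC
    sccs = [cc for cc in cc_quality if cc != pcc]  # the rest are SCCs
    return pcc, sccs

# The same CC can end up as PCC for one UE and SCC for another (cf. Figure 1).
print(configure_carriers({"CC1": 14.2, "CC2": 9.8}))   # -> ('CC1', ['CC2'])
print(configure_carriers({"CC1": 6.1, "CC2": 11.5}))   # -> ('CC2', ['CC1'])
```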
Implementing the CA technique in the LTE-Advanced system adds an additional handover scenario, which can occur between component carriers in the same sector, from the PCC (CC1) to the SCC (CC2) or from the PCC (CC2) to the SCC (CC1). In other words, the PCC may be switched from CC1 to CC2 or from CC2 to CC1 in order to change the PCC. Thus the LTE-Advanced system differs from LTE (rel.8 and rel.9), where handover occurs only between eNBs in different cells or between different sectors under the same eNB. However, changing the PCC is subject to several considerations, such as looking for the best signal quality or balancing loads between adjacent cells. Switching a CC from PCC to SCC and vice versa is achieved by performing a handover procedure from the PCC (i.e., CC1) to the SCC (i.e., CC2). The handover procedure is performed by the UE from the serving PCC to the target PCC (which was the SCC) under the same eNB [8].
Consequently, the number of handover scenarios can be increased by implementing CA technique.Thus, there are five handover scenarios that can occur in LTE-Advanced system when CA technology is implemented, which are described in Figure 2 and can be introduced by (i) interfrequency intrasector and intra-eNB handover, (ii) intrafrequency intersector and intra-eNB handover, (iii) interfrequency intersector and intra-eNB handover, (iv) intrafrequency inter-eNB handover, and (v) interfrequency inter-eNB handover [6].All these handover scenarios are considered in this paper.
Intrafrequency means that the target and the serving carrier frequencies are the same, while interfrequency means that the target and serving carrier frequencies are differentiated from each other.Intrasector means that the target and serving sectors are the same and intersector means that the target and serving sectors are differentiated from each other.Intra-eNB means that the target and serving eNBs are the same, and inter-eNB means that the target and serving eNBs are differentiated from each other.
Increasing handover scenarios leads to increasing the handover probability, which is undesired to users since it leads to increasing the throughput degradation and outage probability.Therefore, an optimal handover decision is requested to reduce the handover probability in order to decrease throughput degradation and outage probability.
Proposed Algorithm
In this paper, MIF-AHODA, based on the SINR with handover hysteresis, threshold, and resource availability, has been proposed. MIF-AHODA adaptively makes handover decisions based on different decision algorithms, which are selected based on the handover scenario type and resource availability, as illustrated in Figure 3. If the handover scenario type is targeting changing the PCC, the handover decision can be taken based on the SINR with handover hysteresis (HOM_SINR) and threshold (γ) levels, as illustrated in Figure 4(a). Thus, the handover decision algorithm can be expressed as expression (4), which triggers handover once the SINR over the serving PCC (SINR_PCC_S) falls below γ + HOM_SINR while the SINR over the target CC (SINR_T) remains above this level. On the other hand, if the handover scenario type is targeting changing the serving sector or serving eNB, the handover decision can be adaptively taken based on two different decision algorithms, which are selected based on the resource availability. In the first decision algorithm, if the serving cell has more resources available than the target cell, the handover decision is taken based on the average SINR over both aggregated CCs (PCC and SCC) with the handover hysteresis level, as illustrated in Figure 4(b). Also, the SINR over the target PCC (SINR_PCC_T) should be greater than the threshold (γ) level (SINR_PCC_T > γ). In the second decision algorithm, if the target cell has more resources available than the serving cell by the resource Loads Margin level (LM), the handover decision is taken based on the SINR quality over the target PCC with hysteresis and threshold levels only, as explained in Figure 4(c). Consequently, the handover decision algorithm can be represented by expression (5): handover is triggered once SINR_PCC_T > γ + HOM_SINR when RL_T > RL_S + LM, or once AS_T > AS_S + HOM_SINR with SINR_PCC_T > γ otherwise, where AS_S and AS_T represent the average SINR over all the aggregated CCs of the serving and target eNBs, respectively, and RL_S and RL_T represent the resource load availability of the serving and target eNBs, respectively. LM is assumed to be 10% of the average resource load availability of the serving and target eNBs.
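The following Python sketch shows one possible reading of the adaptive decision logic described above and in Figures 3 and 4. The exact inequalities, in particular those of the PCC-switching branch, are our interpretation rather than the paper's verbatim expressions, and all names are illustrative.

```python
# Sketch of the adaptive MIF-AHODA decision logic as we read the description above.
# All names are ours; the PCC-switch inequalities follow our interpretation of Figure 4(a).

def mif_ahoda(scenario, gamma, hom_sinr, lm,
              sinr_pcc_s=None, sinr_t=None,   # serving-PCC / target-CC SINR (dB)
              as_s=None, as_t=None,           # average SINR over PCC+SCC (dB)
              sinr_pcc_target=None,           # SINR over the target PCC (dB)
              rl_s=None, rl_t=None):          # resource-load availability
    if scenario == "switch_pcc":
        # Figure 4(a): switch the PCC only when the serving PCC drops below
        # threshold + hysteresis while the target CC is above that level.
        return sinr_pcc_s < gamma + hom_sinr and sinr_t > gamma + hom_sinr
    # Inter-sector / inter-eNB handover: branch on resource availability.
    if rl_t > rl_s + lm:
        # Target cell clearly less loaded: decide on the target PCC SINR alone (Figure 4(c)).
        return sinr_pcc_target > gamma + hom_sinr
    # Otherwise: compare the average SINR over both CCs and require the
    # target PCC to be above the threshold (Figure 4(b)).
    return as_t > as_s + hom_sinr and sinr_pcc_target > gamma
```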
System Model
The LTE-Advanced system is modeled based on 3GPP specifications that were introduced in [10].The network consists of 61 macrohexagonal cell layout models with 500 meter inter-site-distance.One eNB located at the centre of each cell with considering three sectors in each cell and each sector configured with two contiguous CCs.20 MHz is considered as carrier bandwidth for each CC.Operating frequencies of CC1 and CC2 are assumed to be 2 and 2.0203 GHz, respectively.The antenna of each CC is pointed toward a different flat side of the hexagonal cell.The transmitted power from all the eNBs for each CC is assumed to be the same.Random numbers of UEs are generated and removed randomly at random uniform positions in the serving and target cells in every Transmission Time Interval (TTI).The UEs' directional movements are selected randomly with a fixed speed throughout the simulation, which contains five different mobile speed scenarios (30, 60, 90, 120, and 140 km/hour).The mobility movement of all users is considered to be inside the first 37 cells which are located in the close positions to the centre cell.Six eNBs are considered as the stations that cause the interference signals for each user during all the simulation time.The Frequency Reuse Factor (FRF) has been assumed to be one.Moreover, the Adaptive Modulation and Coding (AMC) scheme is considered based on the sets of Modulation Schemes (MS) and Coding Rate (CR) that were introduced in [10,11].Handover procedure for LTE-Advanced system that was introduced in [12] is followed with assuming 6 dB as a handover margin level and 600 milliseconds as time-to-trigger (TTT).In addition, the Radio Link Failure (RLF) detection, Radio Resource Control (RRC) reestablishment procedure, and Nonaccess Stratum (NAS) recovery procedure are considered through the simulation in order to achieve high accuracy in the performance evaluation.The vital essential parameters used in this paper are considered based on the LTE-Advanced system profile that were defined by 3GPP specifications in [10-13], as listed in Table 1.
Results and Discussions
In this study, a simulation was used to validate the proposed HODA. The evaluation methodology of the 3GPP LTE-Advanced system [10-13] is observed in the simulation as mentioned in Section 3. The system performance achieved by MIF-AHODA and the other considered HODAs is presented in terms of user SINR, spectral efficiency, and user outage probability, as shown in Figures 5, 6, and 7, respectively. Figure 5 shows the user SINR in dB based on the different handover decision algorithms. The presented SINR represents the average users' SINR over the serving PCC, which is evaluated as the ratio of the reference signal received power (RSRP) to the interference-plus-noise power over each subcarrier assigned to the served user [14]. The results show that MIF-AHODA enhanced the user SINR by 13.5, 13.4, 3.45, and 3 dB compared with the baseline HODAs based on RSS, RSS-D, SINR, and IINR, respectively. Figure 6 shows the cell-edge user spectral efficiency based on the different HODAs. The cell-edge user spectral efficiency is defined as the lower 5% of the evaluated throughput [bps/Hz] that can be received by the user [13,14]. The presented results show that MIF-AHODA achieves around 79.7, 80.7, 12.7, and 10.7% average enhancement gains of cell-edge user spectral efficiency over the HODAs based on RSS, RSS-D, SINR, and IINR, respectively.
Figure 7 shows the users' outage probabilities resulting from the simulation based on the different HODAs. The user's outage probability (SINR_PCC_S < γ) is recorded when the user's SINR over the serving PCC (SINR_PCC_S) falls below the threshold level (γ) [15], since the quality of service becomes unacceptable when SINR_PCC_S falls below the threshold level. Figure 7 shows that MIF-AHODA reduces the user's outage probability by around 80, 70, 30, and 25% on average compared with the HODAs based on RSS, RSS-D, SINR, and IINR, respectively.
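As a hedged illustration of how these two metrics could be computed from per-user samples, the following Python sketch uses synthetic SINR values and a Shannon-capacity stand-in for the throughput mapping (the actual simulation uses the AMC scheme described in Section 3); the threshold value is an assumption.

```python
# Sketch of the reported metrics computed from per-user samples.
# Synthetic placeholder data, not the simulation output.
import numpy as np

rng = np.random.default_rng(0)
user_sinr_db = rng.normal(loc=8.0, scale=6.0, size=10_000)   # SINR over the serving PCC
user_se = np.log2(1.0 + 10.0 ** (user_sinr_db / 10.0))       # spectral efficiency [bps/Hz] (Shannon stand-in)

gamma_db = -6.0                                               # assumed outage threshold
cell_edge_se = np.percentile(user_se, 5)                      # lower 5% of user throughput per Hz
outage_probability = np.mean(user_sinr_db < gamma_db)         # fraction of users with SINR below gamma

print(f"cell-edge spectral efficiency: {cell_edge_se:.3f} bps/Hz")
print(f"outage probability: {outage_probability:.3%}")
```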
The enhancements achieved by MIF-AHODA are due to the consideration of multiple influence factors and the optimal proposed algorithm that adaptively selects the suitable handover decision algorithm based on the handover scenario type and resource availability.
In the case of a handover scenario type targeting switching of the PCC, the handover decision is taken based on the SINR with hysteresis and threshold levels (SINR_T > γ + HOM_SINR). This algorithm takes a true handover decision when the SINR over the serving PCC falls below the threshold plus the margin level, as illustrated in Figure 4(a) and expression (4). This allows the prevention of unnecessary handover procedures between the PCC and SCC as long as the SINR over the PCC is greater than the threshold by the margin level. Furthermore, this algorithm takes the handover decision before the signal over the serving PCC falls below the threshold level. That decreases the user's throughput degradation and contributes to avoiding disconnection, which in turn reduces the user's outage probability.
In the case of a handover scenario type targeting switching the user's connection to a new sector or a new eNB, the handover decision can be adaptively taken based on two different algorithms, which are selected based on the resource availability, as illustrated in Figures 4(b) and 4(c) and expression (5). If the resource availability of the serving cell (RL_S) is more than that of the target cell (RL_T) by the resource margin level (LM), the handover decision can be taken based on the average SINR over both PCC and SCC (AS_T > AS_S + HOM_SINR). This leads to performing the handover procedure to the best target eNB that can provide better signal quality over both CCs, which in turn provides more resources to the served user during the active-mode time. That enhances user throughput and reduces outage probability. On the other hand, if the resource availability of the target cell (RL_T) becomes more than that of the serving cell (RL_S) by the resource margin level (LM), the handover decision can be taken based on the SINR over the target PCC only (SINR_PCC_T > γ + HOM_SINR). This leads to performing an early handover procedure to the target cell that has more resources. That leads to assigning more resources to the served user with acceptable signal quality, which in turn enhances user throughput and reduces outage probability.
Conclusion
It may be concluded that the proposed MIF-AHODA is a useful algorithm through the implementation of CA technology in LTE-Advanced system.It contributes to enhanced system performance from the perspective of user SINR, spectral efficiency, and reducing the user's outage probability.It is notably enhanced over the legacy RSS HODA, HODA-RSS-D, HODA-SINR, and HODA-IINR.Consequently, the
Figure 1: Configuration of CCs for different UEs served by the same eNB.
Figure 3: Flowchart of our proposed handover decision algorithm.
TTT: time-to-trigger; γ: threshold level of SINR; T1: the beginning of TTT; Tn: the end of TTT; X: greater or less than M; M: HO decision based on resource availability | 2018-04-03T04:33:33.547Z | 2014-11-26T00:00:00.000 | {
"year": 2014,
"sha1": "42a6972dec99790bf628ac286a1c7360b6091892",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jcnc/2014/739504.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "42a6972dec99790bf628ac286a1c7360b6091892",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17443142 | pes2o/s2orc | v3-fos-license | Non-small cell lung cancer: Whole-lesion histogram analysis of the apparent diffusion coefficient for assessment of tumor grade, lymphovascular invasion and pleural invasion
Purpose Investigating the diagnostic accuracy of histogram analyses of apparent diffusion coefficient (ADC) values for determining non-small cell lung cancer (NSCLC) tumor grades, lymphovascular invasion, and pleural invasion. Materials and methods We studied 60 surgically diagnosed NSCLC patients. Diffusion-weighted imaging (DWI) was performed in the axial plane using a navigator-triggered single-shot, echo-planar imaging sequence with prospective acquisition correction. The ADC maps were generated, and we placed a volume-of-interest on the tumor to construct the whole-lesion histogram. Using the histogram, we calculated the mean, 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles of ADC, skewness, and kurtosis. Histogram parameters were correlated with tumor grade, lymphovascular invasion, and pleural invasion. We performed a receiver operating characteristics (ROC) analysis to assess the diagnostic performance of histogram parameters for distinguishing different pathologic features. Results The ADC mean, 10th, 25th, 50th, 75th, 90th, and 95th percentiles showed significant differences among the tumor grades. The ADC mean, 25th, 50th, 75th, 90th, and 95th percentiles were significant histogram parameters between high- and low-grade tumors. The ROC analysis between high- and low-grade tumors showed that the 95th percentile ADC achieved the highest area under curve (AUC) at 0.74. Lymphovascular invasion was associated with the ADC mean, 50th, 75th, 90th, and 95th percentiles, skewness, and kurtosis. Kurtosis achieved the highest AUC at 0.809. Pleural invasion was only associated with skewness, with the AUC of 0.648. Conclusions ADC histogram analyses on the basis of the entire tumor volume are able to stratify NSCLCs' tumor grade, lymphovascular invasion and pleural invasion.
Introduction
Lung cancer is the most common malignant tumor and has become the main cause of cancer mortality [1]. From a clinical point of view, it is important to predict tumor aggressiveness in order to select the proper treatment strategy. Positron emission tomography using 18 F-fluorodeoxyglucose ( 18 F-FDG PET) has been used for evaluating the tumor aggressiveness of nonsmall cell lung cancer (NSCLC) [2,3], but 18 F-FDG PET is not widely used because of its high cost. The use of lung magnetic resonance (MR) imaging is gradually increasing in clinical practice, not only because of its lower cost, but also due to its easy applicability for various pathologic conditions.
Recent developments in diffusion-weighted imaging (DWI) have addressed its potential advantages and applications for the characterization of lung cancer [4,5]. The diffusion coefficient of water in living tissue calculated by an MR examination is expressed as the apparent diffusion coefficient (ADC). Some studies have suggested that the ADC could be used to demonstrate the histological characteristics of lung cancers, and that it may be useful for distinguishing the degree of cell differentiation [6][7][8][9][10][11]. However, the averaged mean ADC is calculated from the largest slice of a tumor, and thus the mean ADC may not represent the full spectrum of histology within a tumor.
Theoretically, an ADC histogram can display ADC values and their distribution within a whole tumor, and such a histogram could be used to analyze the ADC voxel by voxel, thereby providing more precise information than the mean ADC [12]. To the best of our knowledge, there has not been a study that assessed the value of ADC histograms for predicting the aggressiveness of lung cancer. On the other hand, DWI of the lung is technically challenging and not always feasible due to shortcomings such as motion artifacts associated with air-tissue interfaces [4]. Respiratory triggering by the navigator-echo method has recently been explored [13]. The use of the navigator echo reduces ghosting and other artifacts and improves the quality of DW images [14].
The purpose of the present study was to investigate the diagnostic accuracy of histogram analyses of ADC values acquired by respiratory triggering and the navigator-echo method for determining the tumor grade, lymphovascular invasion and pleural invasion of NSCLC.
Patients
The institutional ethics committee of Kanazawa Medical University approved this retrospective study and waived the requirement for informed consent (approval number 1003). Between January 2012 and December 2014, 166 patients with NSCLC underwent DWI followed by surgical resection. Of these, tumors measuring less than 10 mm in diameter (n = 21), tumors with predominant ground-glass opacity (GGO) (n = 15), tumors with large cystic change or necrosis (n = 9), and tumors with obstructive pneumonia or air-containing cavity (n = 51) based on their computed tomography (CT) appearance were excluded from the analysis (S1 Fig). Patients whose DW images showed poor image quality resulting from motion or a severe magnetic susceptibility artifact (n = 10) were also excluded from the data analysis.
As a result, 60 patients (37 males and 23 females, mean age 75.7 years, age range 56-88 years) were enrolled in this study. Table 1 shows the patient and tumor characteristics, including patient age, tumor size, histologic type, pathological grade, lymphovascular invasion, and pleural invasion.

DWI

All MR examinations were performed using a 1.5-T system (Magnetom Avanto, Siemens, Erlangen, Germany) with a 45 mT/m gradient strength, and a 32-channel body phased-array coil. DWI was acquired using a free-breathing navigator-triggered single-shot, echo-planar imaging sequence with prospective acquisition correction (PACE) in the axial plane as described in detail previously [14]. The data were acquired in the end-expiratory phase. The sequence parameters were as follows: TR 4000-6000 ms (equal to the respiratory cycle of the patient), TE 65 ms, acquisition matrix 128 × 96 (interpolated to 256 × 192), field of view 262 × 350 mm, section thickness of 6 mm with a 2-mm intersection gap, 35 slices by four concatenations, four averages, an EPI factor of 144, spectral fat-suppression, parallel imaging with an acceleration factor of 2, and tridimensional gradients with b-values of 0 and 800 s/mm². The acquisition time for PACE DWI was approx. 4-5 min depending on the respiratory cycle.
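The ADC maps themselves were produced by the scanner software, but the underlying two-point calculation is straightforward. The following minimal numpy sketch (not part of the original analysis; the array names and example signal values are illustrative) shows the mono-exponential estimate ADC = ln(S(0)/S(b))/b for b = 800 s/mm².

```python
import numpy as np

def adc_map(s_b0: np.ndarray, s_b800: np.ndarray, b: float = 800.0) -> np.ndarray:
    """Mono-exponential ADC estimate from two b-values (0 and 800 s/mm^2).

    ADC = ln(S(0) / S(b)) / b, returned in mm^2/s. Voxels with non-positive
    signal are set to zero to avoid taking the log of zero.
    """
    s_b0 = s_b0.astype(float)
    s_b800 = s_b800.astype(float)
    valid = (s_b0 > 0) & (s_b800 > 0)
    adc = np.zeros_like(s_b0)
    adc[valid] = np.log(s_b0[valid] / s_b800[valid]) / b
    return adc

# A voxel with ADC around 1.2 x 10^-3 mm^2/s: S(800) = 1000 * exp(-0.96) ~ 383
print(adc_map(np.array([1000.0]), np.array([383.0])))  # -> [~0.0012]
```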
Image analysis
The ADC maps were generated automatically from each DW image by the MR system software. Regions of interest (ROIs) were manually traced just inside the outer edge of the tumor on all slices of low-b value (b = 0 s/mm²) DW images using Ziostation 2 (Ziosoft, Tokyo) by two radiologists (N.T. and M.D., with 5 and 10 years' experience in chest MR imaging). The data acquired from each slice were summed to generate volumes of interest (VOIs). The contours of each VOI were automatically copied to the exact same location of the corresponding ADC maps. In this way, more accurate VOIs can be obtained compared to those drawn directly on ADC maps. An example of the process of extracting the VOI of a tumor is shown in Fig 1. All ADC values within the VOI were used to compute the average ADC within the tumor. The ADC values were then binned to construct the ADC histogram. The following parameters were calculated from the ADC histogram: (a) mean ADC; (b) 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles of the ADC values; (c) skewness; and (d) kurtosis. The nth percentile was the point below which n% of the voxel values in the histogram were observed. Skewness reflects the shift of the median of the distribution from the mean value, and positive skewness indicates that the right tail of the distribution is flatter or longer than the left tail [12]. Kurtosis reflects the peakedness of the histogram distribution; a distribution with high kurtosis tends to have a distinctive peak near the mean, to decline rather rapidly, and to have heavy tails [12].
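As an illustration of how the whole-lesion histogram parameters can be derived from the voxel values in a VOI, a short Python sketch using numpy and scipy is given below; it is not the Ziostation implementation used in the study, and the variable names are placeholders.

```python
import numpy as np
from scipy import stats

def adc_histogram_parameters(voi_adc: np.ndarray) -> dict:
    """Whole-lesion histogram parameters from all ADC values inside a VOI.

    `voi_adc` is a 1-D array of per-voxel ADC values (e.g., in 10^-6 mm^2/s).
    """
    voi_adc = np.asarray(voi_adc, dtype=float)
    params = {"mean": float(voi_adc.mean())}
    for p in (5, 10, 25, 50, 75, 90, 95):
        params[f"p{p}"] = float(np.percentile(voi_adc, p))
    # Moment-based skewness and kurtosis; the convention (Fisher vs. Pearson)
    # should be matched to whatever the analysis software actually reports.
    params["skewness"] = float(stats.skew(voi_adc))
    params["kurtosis"] = float(stats.kurtosis(voi_adc, fisher=False))
    return params

# Usage with simulated tumor voxels
rng = np.random.default_rng(0)
print(adc_histogram_parameters(rng.normal(1300.0, 250.0, size=5000)))
```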
Statistical analyses
All statistical analyses were done with PASW statistical software (ver. 23.0, SPSS, IBM, Chicago, IL). A p-value <0.05 was considered to indicate a significant difference. All measurements were correlated, and the concordance of the interobserver variability was tested by calculating Spearman and intraclass correlation coefficients (ICCs). Agreement was interpreted according to the ICC as: >0.8, excellent; 0.6-0.8, good; 0.4-0.6, moderate; and <0.4, poor concordance. We used the Jonckheere-Terpstra test to correlate the histogram parameters with the pathological grades of the tumors (grades 1-3). The Mann-Whitney test was used to compare the histogram parameters between high-grade (grade 3) and low-grade (grades 1 and 2) tumors. We also used the Mann-Whitney test to examine the correlations between the presence or absence of lymphovascular invasion and pleural invasion with each histogram parameter. We performed a receiver-operating characteristics (ROC) curve analysis to assess the diagnostic performance of histogram parameters for distinguishing different pathological features when appropriate.
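The group comparisons and ROC analysis were run in SPSS; an equivalent open-source sketch (with hypothetical ADC values, for illustration only) could use scipy and scikit-learn as shown below. The Jonckheere-Terpstra trend test is not included in this sketch.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient 95th-percentile ADC values (10^-6 mm^2/s)
low_grade = np.array([1820.0, 1750.0, 1905.0, 1680.0, 1770.0, 1840.0])  # grades 1-2
high_grade = np.array([1450.0, 1520.0, 1390.0, 1600.0, 1480.0])         # grade 3

# Two-sided Mann-Whitney U test between high- and low-grade tumors
u_stat, p_value = mannwhitneyu(high_grade, low_grade, alternative="two-sided")

# AUC for "lower ADC predicts high grade": score each case with the negated ADC
labels = np.r_[np.zeros(low_grade.size), np.ones(high_grade.size)]
scores = -np.r_[low_grade, high_grade]
auc = roc_auc_score(labels, scores)
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, AUC = {auc:.2f}")
```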
Reproducibility of the ADC measurements
Based on the ICC, the interobserver variability in the ADC measurements showed good to excellent agreement (0.78-0.99).
Differentiation of the pathological grade of NSCLC using ADC histogram parameters
The mean and the 10th, 25th, 50th, 75th, 90th, and 95th percentiles of ADC values showed significant differences among the three pathological NSCLC grades (p = 0.000, 0.013, 0.001, 0.000, 0.000, 0.000, and 0.000, respectively) ( Table 2). Other histogram parameters, including the 5th percentile of ADC, skewness, and kurtosis were not significantly different among the pathological grades.
Other histogram parameters, including the 5th, 10th, and 25th percentiles of ADC, skewness, and kurtosis, were not significantly different between the high- and low-grade tumors.
ROC analysis of ADC histogram parameters for predicting high-grade NSCLC
The results of our ROC analysis of the histogram parameters between high- and low-grade tumors showed that the area under the curve (AUC) of the mean, 50th, 75th, 90th, and 95th percentiles of ADC were 0.706, 0.688, 0.713, 0.730, and 0.740, respectively. The 95th percentile ADC achieved the highest AUC, with a cut-off value of 1634.1 × 10⁻⁶ mm²/s, 84.6% sensitivity, and 66.7% specificity (Fig 2).
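The paper does not state how the cut-off value was chosen; one common (assumed) choice is the threshold that maximizes Youden's J statistic on the ROC curve, sketched below with illustrative data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(labels: np.ndarray, scores: np.ndarray):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = int(np.argmax(tpr - fpr))
    return thresholds[best], tpr[best], 1.0 - fpr[best]  # cutoff, sensitivity, specificity

# labels: 1 = high grade; scores: negated 95th-percentile ADC, so that a larger
# score corresponds to a lower ADC (values are illustrative only)
labels = np.array([0, 0, 0, 0, 1, 1, 1])
scores = -np.array([1820.0, 1905.0, 1700.0, 1660.0, 1630.0, 1500.0, 1450.0])
cutoff, sens, spec = youden_cutoff(labels, scores)
print(f"ADC cutoff = {-cutoff:.1f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```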
Differentiation of the presence or absence of lymphovascular invasion and pleural invasion using ADC histogram parameters
The presence of lymphovascular invasion was associated with the mean, 50th, 75th, 90th, and 95th percentiles of ADC, skewness, and kurtosis (p = 0.046, 0.033, 0.002, 0.003, 0.007, 0.02, and 0.001, respectively) (Table 4). Other histogram parameters, including the 5th, 10th, and 25th percentiles of ADC, were not significantly different between the groups with and without lymphovascular invasion. The presence of pleural invasion was associated only with skewness (p = 0.048) (Table 4). All of the other histogram parameters were not significantly different between the two groups.
ROC analysis of ADC histogram parameters in predicting lymphovascular and pleural invasion
The ROC curve of the histogram parameters between the presence and absence of lymphovascular invasion showed that the AUC values of the mean, 50th, 75th, 90th, and 95th percentiles of ADC, skewness, and kurtosis were 0.694, 0.707, 0.796, 0.788, 0.763, 0.726, and 0.809, respectively. Kurtosis achieved the highest AUC at 0.809, with a cut-off value of 1.0815, 61.2% sensitivity, and 90.9% specificity (Fig 3). For skewness, according to the ROC curve, a cut-off value of 0.824 was associated with pleural invasion (AUC 0.648), with 60.0% sensitivity and 73.3% specificity (Fig 4).
Discussion
DWI combined with ADC mapping has been investigated for use in lung cancer cases, including lesion detection, characterization, and assessment of the patient's treatment response [6][7][8][9][10][11][15]. However, there are two problems to be addressed when considering DWI. First, the ADC values reported in these previous studies were obtained from a large ROI placed only on the largest lesion slice, and this may not reflect the characteristics of the entire tumor. Second, the mean ADC value was the only parameter that most studies adopted for lung cancer analysis. In the present study, we used the VOI encompassing the entire tumor to take into account the lesion texture and heterogeneity, using a histogram analysis. To the best of our knowledge, no study has assessed the utility of ADC histograms for predicting the aggressiveness of lung cancer. The DWI of the lung has some problems, including (1) a low signal-to-noise ratio due to the inherently low lung proton density, (2) distortion of the image due to cardiac and respiratory motion, and (3) magnetic susceptibility effects of the air-filled lung tissue subjected to large magnetic field gradients [4,5]. Two approaches have been used for DWI of the lung: breath-hold scanning and free-breathing scanning. Breath-hold scanning requires only a short examination time, but the signal-to-noise ratio is compromised, especially at higher b-values, and this approach provides limited spatial resolution. Image acquisition during free breathing may be combined with cardiac triggering and/or respiratory triggering. Cardiac triggering is useful for avoiding pulsation artifacts, but is time-consuming [4]. The use of respiratory triggering improves the quality of DW images compared to the quality obtained using breath-hold imaging [16].
Various techniques have been used for monitoring respiration, such as strain gauges, elastic breathing belts, and temperature monitoring using face masks, as well as navigator echoes [13,14,16]. The major advantage of the navigator-echo method is that no additional hardware is needed and the patient set-up is easier [13]. Taouli et al. reported that the use of a navigator echo to trigger a single-shot EPI DWI sequence for the liver improves image quality and liver lesion conspicuity with more precise ADC measurement [14]. Similar to other studies of various types of malignant tumors [17][18][19][20][21], our present findings showed that all histogram-derived percentiles of the ADC based on the entire volume decreased as the tumor grade increased. All of the ADC percentile values except the 5th percentile significantly differentiated high- from low-grade tumors. Our ROC analysis revealed that the 95th ADC percentile is the most beneficial parameter for distinguishing high- from low-grade tumors (AUC 0.740). In addition, the AUCs of the mean, 75th, and 90th percentiles of ADC were >0.700.
The results of previous studies of the value of the percentile ADC in the differentiation of high- and low-grade malignancies have been controversial. In studies of uterine cervical cancer, endometrial cancer, prostate cancer, bladder cancer, and brain glioma, low-percentile ADCs proved to be significant for differentiating high-grade from low-grade malignancies [18][19][20][22][23]. They stated that the low-percentile ADCs are correlated with highly cellular components in the tumor and are expected to decrease with higher grade, because high-grade tumors have higher cellularity and subsequently decreased extracellular space and diffusivity of water molecules.
On the contrary, in our present investigation, the high-percentile ADCs showed better diagnostic performance than the low-percentile ADCs. Although the reason for this discrepancy remains unclear, one possible explanation is that high-grade lung cancers contain more mucinous fluid, microhemorrhage, and tissue disorganization (all of which restrict the motion of water), and therefore have lower ADC values than low-grade lung cancers. As a result, these parts could be represented to a greater extent by high-percentile ADCs than by low-percentile ADCs.
A histogram-based analysis yields additional diffusion parameters regarding the distribution of ADC values, such as skewness and kurtosis [12]. Our present findings showed that the mean and the 50th, 75th, 90th, and 95th percentiles of ADC as well as skewness and kurtosis can be used to discriminate the presence of lymphovascular invasion. Among these parameters, kurtosis proved to be the most beneficial parameter in the ADC histogram (AUC 0.809). On the other hand, skewness was the only parameter that could discriminate the presence of pleural invasion (AUC 0.648). These results suggest that higher kurtosis and positive skewness are promising predictors of lymphovascular invasion and pleural invasion, respectively.
Apart from the intrinsic limitations of the retrospective nature of our study, several other limitations should be mentioned. First, the study population was relatively small. Further investigations that include larger populations are warranted to strengthen the statistical power of the results. Second, our study population did not include patients with predominant GGO, because the signal intensity of GGO was sometimes weak and could not be detected even in low-b value DWI [24,25]. Third, our study was performed using a 1.5-T MR system with two b-values (b = 0, 800 s/mm 2 ) to acquire the DWI. The possibility of differing results using a higher magnetic field and a different number and magnitude of b-values cannot be excluded.
In conclusion, ADC histogram analyses on the basis of the entire tumor volume using respiratory triggering by the navigator-echo method are able to stratify NSCLC tumor grade, lymphovascular invasion and pleural invasion. The 95th percentile ADC was the most promising parameter for the differentiation of high- from low-grade NSCLCs. The kurtosis and skewness were the predictive parameters for the presence of lymphovascular invasion and pleural invasion, respectively. | 2018-04-03T06:07:09.254Z | 2017-02-16T00:00:00.000 | {
"year": 2017,
"sha1": "86fe11a11e27eb2f7e17518f3ac8ce5ea91823ed",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0172433&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86fe11a11e27eb2f7e17518f3ac8ce5ea91823ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238744823 | pes2o/s2orc | v3-fos-license | The Reactive Sites of Methane Activation: A Comparison of IrC3+ with PtC3+
The activation reactions of methane mediated by metal carbide ions MC3+ (M = Ir and Pt) were comparatively studied at room temperature using the techniques of mass spectrometry in conjunction with theoretical calculations. MC3+ (M = Ir and Pt) ions reacted with CH4 at room temperature forming MC2H2+/C2H2 and MC4H2+/H2 as the major products for both systems. Besides that, PtC3+ could abstract a hydrogen atom from CH4 to generate PtC3H+/CH3, while IrC3+ could not. Quantum chemical calculations showed that the MC3+ (M = Ir and Pt) ions have a linear M-C-C-C structure. The first C–H activation took place on the Ir atom for IrC3+. The terminal carbon atom was the reactive site for the first C–H bond activation of PtC3+, which was beneficial to generate PtC3H+/CH3. The orbitals of the different metal influence the selection of the reactive sites for methane activation, which results in the different reaction channels. This study investigates the molecular-level mechanisms of the reactive sites of methane activation.
Introduction
Methane has attracted attention as the main component of natural gas, and the conversion of methane into value-added chemicals is very important [1,2]. However, methane is extremely stable, with a high C-H bond strength (439 kJ/mol), negligible electron affinity and low polarizability [3]. At present, most of the catalytic conversion of methane needs to be carried out under high-temperature or high-pressure conditions [4]. Metal carbides are a class of molecules with the potential to activate methane and have been studied by several groups [5]. Research on the mechanisms of the activation of methane by metal carbides is of great value and may help in finding new metal carbide catalysts [6,7]. At the same time, it is very difficult to study the activation mechanisms of methane. In recent years, gas-phase reactions of methane have proven to be an important means of studying the related reaction mechanisms [8,9].
Past studies have shown that gas-phase clusters are an ideal model for studying the reaction mechanisms in condensed-phase systems. The study of gas-phase clusters can reveal the specific reaction mechanisms, including the active sites, and provide references for condensed-phase catalytic processes [10]. At present, researchers have examined the reactions of methane with metal ions such as Os + [11], Pt + [12], Ta + [13] and Rh(0) [14], with metal oxides such as MgO + [15], and with metal carbides [7]. These studies have explored some possible mechanisms for the activation of methane by metal carbide clusters. For example, the study of AuC + reveals a special hydride-transfer mechanism (HT) [22], while the study of FeC 3 shows that methane and the atomic clusters generate C-C coupling reaction products at high temperatures and explains the possible mechanism of non-oxidative methane aromatization at the molecular level [6]. The study of FeC 4 + shows that the cluster can activate the C-H bonds of methane via the hydrogen-atom transfer (HAT) mechanism at ambient temperature; the study used frontier orbital theory to explain the root cause of the HAT reaction [23]. Although there have been some studies on the mechanisms by which metal carbides activate methane, there is still no clear investigation of the reactive sites of methane activation. Therefore, further research is needed to study the reactive sites for the activation of methane. Here, we report the reactions of MC 3 + (M = Pt and Ir) with methane.
Experimental and Computational Methods
The experiments were performed using an ion trap mass spectrometer equipped with a laser vaporization-supersonic expansion ion source that was reported previously [24,25]. The MC 3 + (M = Pt and Ir) ions were generated by pulsed laser ablation of a rotating and translating metal/carbon (metal:carbon = 1:4) target. The nascent ablated plasma was entrained by a helium carrier gas with a backing pressure of about 0.5 MPa. The ions were mass-selected by a quadrupole and then were sent into a linear ion trap, where the ions were accumulated and cooled by helium gas. The MC 3 + ions reacted with CH 4 , CD 4 and 13 CH 4 , introduced by a pulsed valve. After a 10 ms reaction, the trapped ions were ejected for mass detection.
Theoretical calculations were performed using the Gaussian 09 package [26]. All of the calculations were performed using the BMK functional with the def2-TZVP basis sets [27,28]. Vibrational frequency calculations were employed to identify the nature of reaction intermediates, transition states (TSs) and products. The Molclus program [29] was used to search for the possible stable structures of the MC 3 + , MC 2 H 2 + , MC 4 H 2 + and MC 3 H + (M = Pt and Ir) ions. The low-lying stable isomers were then re-optimized at the BMK/def2-TZVP level to confirm the relative energy sequence. Transition-state optimizations were performed with the synchronous transit-guided quasi-Newton (STQN) method and were verified through intrinsic reaction coordinate (IRC) calculations [30,31].
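As a practical illustration of this setup, the sketch below writes a minimal Gaussian 09 input for a BMK/def2-TZVP optimization and frequency job; the coordinates, charge/multiplicity, and resource lines are placeholders and do not reproduce the actual inputs used in the study.

```python
def write_gaussian_input(path: str, title: str, charge: int, multiplicity: int,
                         atoms: list[tuple[str, float, float, float]]) -> None:
    """Write a minimal Gaussian 09 input for a BMK/def2-TZVP opt + freq job."""
    lines = [
        "%nprocshared=8",
        "%mem=8GB",
        "# opt freq BMK/def2TZVP",  # route section: optimization + frequencies
        "",
        title,
        "",
        f"{charge} {multiplicity}",
    ]
    for symbol, x, y, z in atoms:
        lines.append(f"{symbol:<3s}{x:12.6f}{y:12.6f}{z:12.6f}")
    lines.append("")  # Gaussian inputs must end with a blank line
    with open(path, "w") as f:
        f.write("\n".join(lines))

# Placeholder linear Pt-C-C-C geometry; coordinates are illustrative only
write_gaussian_input(
    "ptc3_cation.gjf", "PtC3+ opt+freq at BMK/def2-TZVP", charge=1, multiplicity=2,
    atoms=[("Pt", 0.0, 0.0, 0.0), ("C", 0.0, 0.0, 1.85),
           ("C", 0.0, 0.0, 3.12), ("C", 0.0, 0.0, 4.40)],
)
```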
Results and Discussion
The mass spectra for the reactions of mass-selected MC 3 + (M = Pt and Ir) ions with He (a1 and a2), CH 4 (b1 and b2), CD 4 (c1 and c2) and 13 CH 4 (d1 and d2) in the ion trap at room temperature are shown in Figure 1. No product ion was observed in the mass spectra when using pure He as the reactant gas, while two main peaks, at m/z = 219 and 243 for the Ir system and m/z = 222 and 246 for the Pt system for the reactions with CH 4 , which can be attributed to the product ions with chemical formulas MC 2 H 2 + and MC 4 H 2 + (M = 196 Pt and 193 Ir), were observed to be the major reaction products. A weak mass peak at m/z = 233 assigned to the ion with chemical formula PtC 3 H + for PtC 3 + /CH 4 was also observed, while no IrC 3 H + was found. The mass spectra suggest that two main reaction channels for both systems and one hydrogen-atom abstraction channel for PtC 3 + were observed. The first channel is the formation of the MC 2 H 2 + cation with the release of a neutral C 2 H 2 (reaction 1). The second channel is the generation of the MC 4 H 2 + ion with concomitant elimination of a dihydrogen molecule (reaction 2). The third channel is the generation of the PtC 3 H + ion with the release of CH 3 , as shown in reaction 3. Impurity PtC 3 •H 2 O + and IrCO + ions were formed due to the small amount of contaminant in the chamber of the instrument. Isotopic-labeling experiments conducted using the CD 4 sample showed that there were peaks of IrC 2 D 2 + , IrC 4 D 2 + , PtC 2 D 2 + , PtC 4 D 2 + and PtC 3 D + , which demonstrates that the H atoms in the products were all from methane. Both the PtC 2 H 2 + and Pt 13 CCH 2 + product ions were observed with approximately the same intensity when using 13 CH 4 , indicating that one or both carbon atoms of the eliminated C 2 H 2 neutral molecule came from the PtC 3 + ion. The intensity of the peak of Ir 13 CCH 2 + is similar to that of IrC 2 H 2 + after the background peak is excluded. The peaks of Ir 13 CC 3 H 2 + and Pt 13 CC 3 H 2 + were observed, which confirms the reaction channels. In order to gain insight into the reaction mechanisms, the various possible structures of MC 3 + (M = Pt and Ir) were obtained by calculations at the BMK/def2-TZVP level and are shown in Figure S1; the low-lying isomers of the product ions are shown in Figure S2.
In order to gain insight into the reaction mechanisms, the potential energy profiles (PESs) were calculated. All three Reactions (1)-(3) leading to the stable structures of the products were exothermic. The pathways for the IrC 3 + + CH 4 reaction leading to the IrC 2 H 2 + /C 2 H 2 and IrC 4 H 2 + /H 2 products are shown in Figure 2, and details are given in Figures S3-S4. An encounter complex ( 3 I1), which is 1.51 eV lower in energy than the ground-state reactants, is formed initially. The Ir atom serves as the active site of the reaction, and intermediate 1 I2 is formed via the first C-H bond activation. Then, the CH 3 moiety is transferred to C 3 to form a C-C bond; meanwhile, one H atom is transferred to the Ir atom ( 1 I2 → 1 TS2 → 1 I3). Subsequently, two hydrogen atoms transfer from the metal Ir atom to the C atom to form two new C-H bonds ( 1 I3 → 1 TS3 → 1 I4 → 1 TS4 → 1 I5); further details are given in Figure S4. The pathways for the PtC 3 + + CH 4 reaction are shown in Figure 3. Details and other possible pathways are given in Figures S5-S7. The reaction pathway starts from the encounter complex 2 I1, followed by the formation of a stable intermediate 2 I2 via the approach of CH 4 to the terminal carbon of PtC 3 + ( 2 I1 → 2 TS1 → 2 I2), where the first C-H activation takes place. The CH 3 moiety of 2 I2 is transferred to form 2 I3, where CH 3 is loosely coordinated to PtC 3 H + . The final product P1 (−0.46 eV) is generated with the liberation of CH 3 . The energy of P1 is higher than that of the transition states from 2 I2 to P2/P3, which can explain the weak peak of PtC 3 H + . For the reactions of IrC 3 + and PtC 3 + with CH 4 , the hydrogen-abstraction product PtC 3 H + was observed, while IrC 3 H + was not. Based on our calculations, the reactive sites play a key role. There are two possible sites, the metal and the terminal carbon atom (the other two carbon atoms are fully bonded). For IrC 3 + /CH 4 , the barrier to the first C-H activation on the terminal C atom (+0.04 eV; see Figure S4) is higher in energy than 1 TS1 (−0.68 eV). Based on the lack of IrC 3 H + observed, we confirm that the reactive site of the first C-H activation is the Ir atom. For PtC 3 + /CH 4 , the energy required for the first C-H bond activation on the Pt and C atoms is +0.01 eV ( 2 TSX3) and +0.07 eV ( 2 TS1), as given in Figure 3 and Figure S6, respectively. Though the Pt atom, as a reactive site, is favorable, the transition state of the H-atom transfer as the next step (+0.19 eV, 2 TSX4; Figure S4) is higher in energy than 2 TS1. Based on the observation of PtC 3 H + in the experiment, we confirm that the terminal carbon atom can be a reactive site for PtC 3 + /CH 4 . Metals generally have a stronger adsorption capacity for methane than nonmetals. The different reactive sites can be explained by an analysis of the orbitals. In the PtC 3 + species, platinum uses two of the six valence orbitals (s and d) to form a σ- and a π-bond with the adjacent carbon atom. This leaves seven electrons occupying the four remaining non-bonding orbitals on platinum, such that there are no empty orbitals on the metal. Therefore, it is hard for the C−H bond to donate electron density to the metal, and the energy of the transition state 2 TSX3 is a little higher than that of 2 TS1 in Figure 3.
For the IrC 3 + species, Ir uses two of the six valence orbitals (s and d) to form a bond with the adjacent carbon atom, which leaves six electrons occupying the four remaining non-bonding orbitals on the Ir atom, such that there are enough empty orbitals on the metal, which is beneficial for the C-H bond activation of CH 4 . This can explain why the barrier to the first C-H activation on the terminal C atom (+0.04 eV) is higher than that of 1 TS1 (−0.68 eV). Our work demonstrates that the orbitals of the metal influence the selection of the reactive sites for the activation of methane. The different reactive sites result in different reaction channels.
In conclusion, the activation of methane mediated by MC 3 + (M = Ir and Pt) was comparatively studied at room temperature by gas-phase experiments combined with theoretical calculations. Mass spectrometric studies on the reactions of the MC 3 + (M = Ir and Pt) ions with CH 4 show that two main reaction channels were observed. The first channel is the formation of the MC 2 H 2 + cation with the release of neutral C 2 H 2 . The second channel is the generation of the MC 4 H 2 + ion with concomitant elimination of a dihydrogen molecule. Besides that, PtC 3 H + could be found in the experiments, while IrC 3 H + could not. Quantum chemical calculations suggest that the MC 3 + (M = Ir and Pt) ions have a linear M-C-C-C + structure. The Ir atom is the reactive site for the reaction of IrC 3 + /CH 4 . The generation of PtC 3 H + and the PESs confirm that the terminal carbon atom is the reactive site for PtC 3 + /CH 4 . The orbitals of the metal influence the selection of the reactive sites for methane activation, which results in the different reaction channels. Our work is helpful for understanding the reactive sites of methane activation.
Supplementary Materials:
The following are available online, Figure S1: The optimized geometries of the [IrC 3 ] + and [PtC 3 ] + isomers at the BMK/def2-TZVP level. The relative energies relative to the global minimum structure (in eV), symmetry and electronic states are shown; Figure S2: H] + (bottom) isomers at the BMK/def2-TZVP level. The relative energies relative to the global minimum structure (in eV), symmetry and electronic states are shown; Figure S3: The detailed potential energy profiles of the reaction of IrC 3 + and CH 4 in Figure 2. The energies are given in eV; Figure S4: The other possible potential energy profiles of the reaction of IrC 3 + and CH 4 . The energies are given in eV; Figure S5. The detailed potential energy profiles of the reaction of PtC 3 + and CH 4 in Figure 3. The energies are given in eV; Calculated geometries and potential energy profiles. This material is available free of charge via the Internet. | 2021-10-14T06:24:04.723Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "e3b49b3089c65df7bfedfb53f758dec0a3e1dd91",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/19/6028/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a1bcaadd61615873d52f2a6a52b0a1553f7dde8f",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49665886 | pes2o/s2orc | v3-fos-license | Food Insecurity in Older Adults: Results From the Epidemiology of Chronic Diseases Cohort Study 3
Introduction: The public health problem of food insecurity also affects the elderly population. This study aimed to estimate the prevalence of household food insecurity and its associations with chronic disease and health-related quality of life characteristics in individuals ≥65 years of age living in the community in Portugal. Methods: The data were collected from the Epidemiology of Chronic Diseases Cohort Study 3 (EpiDoC3)—Promoting Food Security Study (2015–2016), which was the third evaluation wave of the EpiDoC and represented the Portuguese adult population. Food insecurity was assessed using a psychometric scale adapted from the Brazilian Food Insecurity Scale. The data on sociodemographic variables, chronic disease, and management of chronic disease were self-reported. Health-related quality of life was assessed using the European Quality of Life Survey (version validated for the Portuguese population). Logistic regression models were used to determine crude and adjusted odds ratios (for age group, gender, region, and education). The dependent variable was the perceived level of food security. Results: Among older adults, 23% were living in a food-insecure household. The odds of living in a food-insecure household were higher for individuals in the 70–74 years age group (odds ratio (OR) = 1.405, 95% confidence interval (CI) 1.392–1.417), females (OR = 1.545, 95% CI 1.534–1.556), those with less education (OR = 3.355, 95% CI 3.306–3.404), low income (OR = 4.150, 95% CI 4.091–4.210), and those reporting it was very difficult to live with the current income (OR = 16.665, 95% CI 16.482–16.851). The odds of having a chronic disease were also greater among individuals living in food-insecure households: diabetes mellitus (OR = 1.832, 95% CI 1.818–1.846), pulmonary diseases (OR = 1.628, 95% CI 1.606–1.651), cardiac disease (OR = 1.329, 95% CI 1.319–1.340), obesity (OR = 1.493, 95% CI 1.477–1.508), those who reduced their frequency of medical visits (OR = 4.381, 95% CI 4.334–4.428), and who stopped taking medication due to economic difficulties (OR = 5.477, 95% CI 5.422–5.532). Older adults in food-insecure households had lower health-related quality of life (OR = 0.212, 95% CI 0.210–0.214). Conclusions: Our findings indicated that food insecurity was significantly associated with economic factors, higher values for prevalence of chronic diseases, poor management of chronic diseases, and decreased health-related quality of life in older adults living in the community.
INTRODUCTION
Populations worldwide are undergoing unprecedented changes in age demographics. In Portugal and other European countries, more than 19% of the individuals are ≥65 years of age (1,2). Age is the most powerful predictor of morbidity and mortality. Multifaceted and numerous mechanisms link age to health status (1). To maximize health and well-being, healthcare systems should be responsive to the diversity and heterogeneity of the health status of older adults (1).
The increasing proportions of countries' oldest populations are associated with greater vulnerability and high-risk of development of chronic diseases and disabilities. These negative health outcomes have direct effects on access to adequate food and result in food insecurity (3). Food security is the condition where "all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life" (4). This broad concept encompasses the availability of food and the accessibility and proper use of food. Food insecurity can affect the health and well-being of individuals (4)(5)(6)(7)(8). Nutrition is one of the main determinants of healthy and active aging. Consumption of nutritious food is crucial for physiological well-being and better health and quality of life (9).
The health characteristics of older adults differ from other age groups. The multidimensional phenomenon of food insecurity is also different in this population (10,11). In older populations, food insecurity results from more than financial resource constraints. Functional impairment, not owning a home, isolation, gender, financial vulnerability, and poor health have statistically significant associations with food insecurity. These associations suggest that differences in food use between older and younger populations should be considered. These important risk factors for food insecurity tend to occur together, which results in a much higher risk for food insecurity in older populations (8,10-13). As older populations increase in size, accurate assessments of the extent of food insecurity become more essential for public health (14). Previous studies of food insecurity have mostly examined populations of children and non-elderly adults. Little is known about the characteristics of food insecurity in older adults and associations with health outcomes (11,15).

Approximately 23% of the global burden of disease is attributable to conditions that affect older people; the main conditions that contribute to this excessive burden are chronic diseases (16). Chronic disease can negatively affect the ability to shop for food, carry foodstuffs home, and prepare meals, which can affect older adults' food security as much as financial vulnerability (5). A poor diet also has severe effects on quality of life and health. Older adults who adopt poor dietary patterns have higher risks of malnutrition, frailty, deteriorating health conditions, and disability compared with those that consume a healthy diet (9,12). Previous findings indicate that food insecurity is particularly prevalent among older adults (11,12,17) and is associated with poor health (5,12), increased depression, disability (18), and poor quality of life (5,19).
National and international initiatives promote active and healthy aging, but there are still opportunities for the results of this approach to be reflected in increased quality of life and health of older adults. Rethinking aging implies a true transverse commitment and reconsideration of the entire set of associated factors (20). One of the main objectives of the Healthy People 2020 Report is to "improve the health, function, and quality of life" (21), so understanding the associations between food insecurity, chronic diseases, and quality of life is fundamental for improvement of health policies and resulting successful promotion of active and healthy aging populations. The purpose of this study was to estimate the prevalence of household food insecurity and its associations with chronic diseases and healthrelated quality of life (HRQoL) factors in individuals ≥65 years of age living in the community.
MATERIALS AND METHODS
This observational study used a cross-sectional analysis of data from a national ongoing prospective cohort study, the Epidemiology of Chronic Diseases Cohort Study (EpiDoC). The data from this population-based study represent the Portuguese adult non-institutionalized (i.e., living in private homes) population in Portugal (Mainland, Madeira, and Azores Islands). The EpiDoC cohort began with the EpiDoC 1 study (EpiReumaPt); two subsequent evaluations have been performed using data from the same subjects: the EpiDoC 2 (CoReumaPt) and EpiDoC 3 (Promoting Food Security) studies. The sampling method used for the EpiDoc 1 study followed a cross-country random route procedure; sampling was stratified by region (North, Centre, Lisbon, Alentejo, Algarve, Madeira, and Azores) and size of locality. Data collection was performed using a computer-assisted telephone interview method. The number of interviews conducted for each stratum was proportional to the actual distribution of the population. Census 2001 results indicated that the eligible Portuguese population ≥18 years old was N = 7,719,986. Therefore, a sample of 10,661 participants was randomly selected (random route method) (22). All 10,661 participants in the EpiDoC 1 study signed an informed consent form for follow-up, and those who provided their telephone number were included in the subsequent EpiDoC cohort follow-up evaluations (EpiDoC 2 and EpiDoC 3). In the EpiDoC 2 and EpiDoC 3 studies, all participants were contacted by telephone (23,24). Of the 9,003 participants eligible for the EpiDoC 3 (Promoting Food Security) study, 2,366 were lost due to unsuccessful contact and 1,004 participants were lost to follow-up for other reasons; there were 5,653 participants in the third wave of the study. A total of 1,885 individuals in this Portuguese non-institutionalized population were ≥65 years of age; the data from these participants were included in the analyses described in this article.
Instruments and Variables
Sociodemographic (age, gender, years of education, and health region) and socioeconomic (income and information on perception of family income) data were collected. Data on self-reported diseases (high cholesterol levels, hypertension, diabetes, gastrointestinal disease, mental disease, cardiac disease, pulmonary disease, cancer, neurological disease, hyperuricemia, and urinary disease) were also assessed. HRQoL was assessed using the European Quality of Life Survey (version validated for the Portuguese population) (25) with five dimensions and three levels (EQ-5D-3L). The EQ-5D descriptive system was converted into a single index using a formula that assigned values to each of the levels in each dimension. These value-sets were derived for use of the EQ-5D for the Portuguese population (25). A higher EQ-5D index score corresponded to a higher quality of life. Self-perception of general health status at the time of the survey was also assessed by adapting the original EQ5D-3L visual analog scale to the question "Considering a scale on which the best state of health you can imagine is 100 and the worst state of health you can imagine is 0, we would like you to tell us how good or bad, in your opinion, your state of health is today?".
Data on whether there was a reduction in the numbers of visits to a physician due to economic difficulties (Yes/No), whether the respondent took any medication (Yes/No), and whether the respondent stopped taking any medication due to economic difficulties (Yes/No) were also recorded. Body mass index (BMI, body weight in kilograms divided by the square of the height in meters) was calculated using the values for self-reported weight and height. The BMI values were assigned to categories according to World Health Organization criteria (26).
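A minimal sketch of the BMI calculation and the WHO class assignment used here (with the standard WHO cut-offs cited above) is shown below; the function and variable names are illustrative.

```python
def bmi_category(weight_kg: float, height_m: float) -> tuple[float, str]:
    """Body mass index from self-reported weight and height, with WHO classes."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"
    elif bmi < 25.0:
        label = "normal weight"
    elif bmi < 30.0:
        label = "pre-obesity"
    else:
        label = "obesity"
    return round(bmi, 1), label

print(bmi_category(70.0, 1.62))  # -> (26.7, 'pre-obesity')
```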
Food insecurity was assessed using a psychometric scale adapted from the Brazilian Food Insecurity Scale; this scale was adapted from the US Household Food Security Survey Module (27)(28)(29). This tool was used to assess the quantitative and qualitative components of food insecurity within the 3 months before the respondent answered the survey. A score ranging from 0 to 14 was obtained as the total number of affirmative answers. Each score was used to assign the respondent to one of four categories of food insecurity (i.e., "food security," "low food insecurity," "moderate food insecurity," and "severe food insecurity"; Table 1).
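A short sketch of this scoring step is given below; the numeric cut-points are placeholders only, since the actual thresholds are those defined in Table 1 of the adapted scale.

```python
def food_security_category(score: int,
                           cutoffs: tuple[int, int, int] = (1, 6, 10)) -> str:
    """Map the 0-14 affirmative-answer score to a food security category.

    The cut-points in `cutoffs` are placeholders; the actual thresholds are
    those defined for the adapted scale (Table 1 of the paper).
    """
    low, moderate, severe = cutoffs
    if score < low:
        return "food security"
    if score < moderate:
        return "low food insecurity"
    if score < severe:
        return "moderate food insecurity"
    return "severe food insecurity"

print([food_security_category(s) for s in (0, 3, 8, 12)])
```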
Statistical Analysis
The statistical analysis was performed using the Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, version 24.0). To ensure the representativeness of the sample for the 65 years and older Portuguese population, weighting coefficients were computed and used for the additional statistical analyses. The EpiDoC 3 study participants and non-participants were first compared regarding sociodemographic, socioeconomic, and health status characteristics. We then adjusted the weights based on the corresponding population stratification groups (gender, age group, and health region). Weighted proportions of food insecurity according to age group, gender, health region, years of education, income, and household income perception were calculated. Prevalence estimates for food insecurity were calculated as weighted proportions, consistent with the sample design. After the descriptive analyses were performed, each participant was assigned to the "Food Security" or "Food Insecurity" (including mild, moderate and severe food insecurity) categories to evaluate the relationships between food insecurity and the other study variables. The magnitudes of the associations between the variables were calculated using binary logistic regression models. The dependent variable was food security status ["food insecurity" (event) vs. "food security"]. Independent variables were sequentially added to the model based on existing knowledge about the variables that could affect the event. Crude and adjusted odds ratios (ORs; adjusted for age group, gender, health region, and years of education) and the corresponding 95% confidence intervals (CIs) were computed. EQ5D-3L and self-perception of general health status data were analyzed as continuous variables. We interpreted these variables considering that the EQ5D-3L values ranged from 0 to 1 and that the self-perception of general health status values ranged from 0 to 100. A significance level of α = 0.05 (two-tailed) was used for all analyses.
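The models were fitted in SPSS; purely as an illustration, an equivalent weighted logistic regression could be sketched in Python with statsmodels as follows. The data are simulated, the variable names are placeholders, and passing the post-stratification weights as variance weights is a simplification of the survey design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data: one row per participant (values are illustrative only)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "food_insecure": rng.integers(0, 2, 200),   # dependent variable (event = 1)
    "female": rng.integers(0, 2, 200),
    "age_70_74": rng.integers(0, 2, 200),
    "weight": rng.uniform(0.5, 2.0, 200),       # post-stratification weight
})

X = sm.add_constant(df[["female", "age_70_74"]])
model = sm.GLM(df["food_insecure"], X,
               family=sm.families.Binomial(),
               var_weights=df["weight"])        # survey weights passed as variance weights
res = model.fit()

# Odds ratios and 95% confidence intervals from the fitted coefficients
summary = pd.concat([np.exp(res.params).rename("OR"), np.exp(res.conf_int())], axis=1)
print(summary)
```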
Ethical Issues
The EpiDoC 3 study was performed according to the criteria established by the Declaration of Helsinki and revised in 2013 in Fortaleza (31). The study was reviewed and approved by the National (Portuguese) Committee for Data Protection and by the NOVA Medical School Ethics Committee. The participants gave informed signed consent to be included in all phases of the study.
RESULTS
The data were collected using telephone interviews of 1,885 Portuguese people ≥65 years of age living in communities on the mainland or the Madeira or Azores islands, between July 2015 and September 2016. The sample consisted of 55.5% women and 44.5% men (mean age ± standard deviation (SD), 74.3 ± 6.8 years; range, 65-104 years). The results for the sociodemographic and socioeconomic characteristics of this older population are presented in Table 2.
Prevalence of Food Insecurity
A Cronbach's α coefficient of 0.949 was found when we analyzed only the adult household-related items of the scale. The first validation of the Household Food Insecurity Scale in a sample of the Portuguese population was performed by Gregorio et al. in 2015; they found a Cronbach's α coefficient of 0.865 (27). Twenty-three percent (n = 500) of the households of adults 65 years and older reported being food-insecure. Of this food-insecure group, 16.3% were in the low, 4.8% in the moderate, and 2.0% in the severe food insecurity category (Table 3). There were apparent differences between genders and age groups (Table 4). Older females were more likely to report living in a food-insecure household (26.5%) compared with older males (18.9%) (OR = 1.545, 95% CI 1.534-1.556). A higher proportion (25.9%) of food insecurity was found in the households of the individuals in the 70-74 years age group (OR = 1.405, 95% CI 1.392-1.417). The highest proportion of severe food insecurity was also found among the individuals in the 70-74 years age group (2.9%) (Table 3). All percentages were weighted to ensure population representativeness.
VARIABLES ASSOCIATED WITH FOOD INSECURITY

Sociodemographic Characteristics
The results for the analysis of the sociodemographic characteristics of the sample and for the application of a binary logistic regression model for the event (food insecurity, binary variable) are presented in the corresponding table. All percentages were weighted to ensure population representativeness.
Socioeconomic Characteristics
The results for the application of a binary logistic regression model for the event food insecurity are presented in the corresponding table. All percentages were weighted to ensure population representativeness.
Chronic Diseases
The results for the crude and adjusted ORs (i.e., adjusted for age group, gender, education level, and health region) for the presence of household food insecurity and chronic disease are presented in the corresponding table.
Management of Chronic Diseases
The results for the application of a binary logistic regression model for the event food insecurity are presented in Table 7. The variables related to management of chronic disease were included in the model, and the results were presented as crude and adjusted ORs (i.e., adjusted for age group, gender, education level, and health region). The results of the analysis of the difficulties experienced by older adults in the management of these chronic diseases indicated that those who reported a decrease in medical visits or who stopped taking medication due to economic reasons had higher odds (OR = 4.381, 95% CI 4.334-4.428, and OR = 5.477, 95% CI 5.422-5.532, respectively) of living in a food-insecure household, before and after adjustment of the model.
Health Related Quality of Life
The results for the crude ORs and adjusted ORs (i.e., adjusted for age group, gender, education level, and health region) for the presence of household food insecurity and HRQoL are presented in Table 8. The results indicated that before and after adjustment of the model, older adults from food-insecure households had higher odds of a lower HRQoL (EQ5D-3L score: OR = 0.212, 95% CI 0.210-0.214). The analysis of the self-perception of general health status (thermometer scale from the EQ5D-3L questionnaire adapted to one question) revealed that older adults who reported a better health status had lower odds of living in a food-insecure household, before and after adjusting the model (OR = 0.976, 95% CI 0.976-0.976).
DISCUSSION
This study aimed to estimate the prevalence of food insecurity and its association with chronic diseases and HRQoL in individuals ≥65 years of age living in households in the community. To our knowledge, a study with these objectives has not been published. We found a food insecurity prevalence of 23%. Few studies of food insecurity in Portugal have been performed, and none have specifically examined the older adult population. A national study of the EpiDoc cohort revealed a food insecurity prevalence of 19.3% in the adult population (≥18 years) (23). The results of this current study indicated that this problem is more prevalent in the older population compared with the adult population. Other analyses of adult populations found values for prevalence of food insecurity between 8.1% and 50.7% (32)(33)(34). However, comparing the results of our study and published studies should be done carefully because the methodological approaches, sources of collected data, and the times of data collection were different. Similarly, direct comparison of the values for prevalence found in our study with estimates from other countries is not always feasible because the methods of data collection differ. However, some studies used similar methodological approaches to our study. The results of a study performed by Goldberg et al. of a sample of 2,045 older adults ≥60 years of age who were included in the National Health and Nutrition Examination Survey (NHANES, USA) 2007-2008 indicated that >9% of older adult households were food-insecure (35). Russell et al. found that of a sample of 3,509 older adults in Australia, 13% reported food insecurity (5). Other studies performed in United States and Australian populations revealed a higher prevalence of food insecurity in disadvantaged urban areas (6,8,36). Compared with our study, these two studies found lower values for prevalence of food insecurity. However, the differences might be due to differences in the participants' ages (i.e., the other studies included participants <65 years of age). The differences could also be due to differences in the sources of collected data (i.e., national vs. regional samples), the socioeconomic disparities between each location (37)(38)(39), the socio-economic characteristics of the population (40) or because our sample was collected after a period of economic crisis (41). The data used for other studies were collected before the period of global economic recession (42).
Consistent with the results of previous studies, our study revealed that older adults in food-insecure households were more likely to be women, have less education, and live in households with a lower per capita income (5,11,12,34,37,43-45). The income perception is the factor with the most robust association with food insecurity in older adults; it clearly represents the relationship between economic factors and food insecurity (5,13,46). Our findings for the self-reported noncommunicable diseases suggested that older adults with chronic diseases (i.e., diabetes, pulmonary disease, cardiac disease, digestive disease, mental disease, and urinary disease) had increased odds of living in a food-insecure household. Other studies have also found associations between food insecurity and higher values for prevalence of diabetes (34,47-51), pulmonary disease (51), mental disease (12,18,48,52,53), cardiac disease (48,54,55) and poor health outcomes (56). The higher prevalence can be explained by the results of a study that found that food insecurity requires changes in the household food supply that reduce diet quality (57). Disturbance of the nutrient and dietary patterns linked with food insecurity may explain the higher rates of chronic illness experienced by food-insecure individuals (55,57).
There are other putative reasons for the higher prevalence of chronic diseases in older adults from food-insecure households. For example, a review performed by Gucciardi et al. revealed that due to the cost of out-of-pocket health care expenditures such as purchase of prescribed medications and supplies, food insecurity affects compliance with the self-management recommendations given to individuals with diabetes (50). Seligman et al. found that persons in food-insecure households may replace the consumption of healthy foods with less expensive foods that have poor nutritional value and a higher energy density. These individuals will then consume in excess of their energy needs and these intakes can be associated with the development of diabetes (49). This finding is consistent with the finding of another study of older adults (58).
The significant association between food insecurity and pulmonary disease revealed by our study is consistent with the results of studies performed in other countries that indicate that being a smoker is a predictor for reporting characteristics associated with food insecurity (5,51).
Our study also revealed that older adults from food-insecure households had lower rates of being underweight and higher rates of pre-obesity and obesity (Table 6). The results of previous studies have been inconsistent in this area, especially in the older adult populations where this association is weak (12,45,55,59-61). Consistent with our findings, some studies found that food insecurity is associated with greater BMI values (pre-obesity and obesity) in older adults (18,62). However, other studies have not found this association (12) or it has only been found in populations of older females but not in populations of older males (45,61). These conflicting results may be related to the evidence that, compared with direct assessment, self-reported measures of weight and height in older adults could give biased results (18,45,63). The aging process is accompanied by modifications in body composition (e.g., reductions in height resulting from compression of the vertebrae, loss of muscle tone, and variations in body weight). Older adults tend to report the values they maintained in adulthood (18,45,60,64). Therefore, these findings suggest that the food-insecurity association with obesity reported among adults may have different characteristics in older adults.
Some studies have found that food-insecure older adults report lower investment in their health (8). Food insecurity, cost-related medication nonadherence, and a decrease in medical visits are three related problems with negative consequences and public health implications (65). Our study found that older adults who reported a decrease in medical visits or who stopped taking medication for economic reasons had approximately 4.4 and 5.5 times higher odds, respectively, of living in a food-insecure household.
Our findings suggested that those who reported that they stopped taking medication due to economic reasons had higher odds of living in a food-insecure household. In Portugal, the acquisition of medication is associated with different levels of reimbursement by the state (range, 15% to 95%). There is typically some expenditure by the elderly for medication. Elderly people who must purchase several medications may not be able to do so, especially if they have a low income. Afulani et al. assessed the association between food security and cost-related medication underuse in a sample of 10,401 American older adults (National Health Interview Survey 2011-2012). Consistent with our findings, they found that older adults from food-insecure households were more likely to report cost-related medication underuse than the adults living in food-secure households (66). Bengle et al. examined the relationship between cost-related medication nonadherence and food insecurity in a sample of 1,000 low-income Georgian older adult participants of the Older Americans Act Nutrition Program. They found that food-insecure participants were three times more likely to practice cost-related medication nonadherence than their counterparts (65). In a 2008 study of older Georgians, Bhargava et al. found that food-insecure individuals were more likely to report a poorer health status, had more chronic disease, and tended to have lower health care expenditures compared with their counterparts with similar health status. Taken together, these findings suggest that food-insecure older adults may be unable to meet healthcare needs and practice healthy food consumption behaviors (6). Other studies have found similar relationships between food insecurity and lack of adherence to medical treatments and drug therapies (8,13,34,50).
In older adults, food insecurity and a higher prevalence of chronic diseases contribute to declines in functionality and corresponding decreases in quality of life (9,19). The relationships between nutrition and morbidity and mortality in aging are well-established, but more research on HRQoL and food insecurity in older adults is needed. Our findings indicated that older adults from food-insecure households reported a poor HRQoL (Table 8). These findings are consistent with those of a study of an older Australian cohort of community-living persons ≥49 years of age; this study revealed associations between food insecurity and poor HRQoL characteristics (19). Older adults living in food-insecure households report at least some inability to obtain enough food due to economic limitations and, consequently, a reduction in their diet quality or variety (28,67). Diet quality and variety are significant determinants of HRQoL in the aging population (68,69).
Our study also found that older adults from food-insecure households self-reported a lower health status value (Table 8). This finding is consistent with studies that found that, compared with food-secure older adults, food-insecure older adults self-report a poor health status (5,8,13,65).
Our study had some limitations. First, it was based on a cross-sectional analysis that limited the ability to explore causal relationships and establish the temporal sequence of associations. Second, the method used to control the potential effects of sociodemographic variables was based on self-reported conditions, which might have contributed to underestimates of the diagnosis of chronic conditions and BMI values (12,18,45,63). Third, the food insecurity scale "evaluates the food security situation of the household members as a set and not necessarily the condition of any specific household member" (28,29). Thus, it was not possible to define the food insecurity status of each older adult living in the household. Despite these limitations, our findings are a valuable resource in understanding key health-related factors associated with food insecurity in older adults. They can be used during the development of preventive public health strategies and policies. To our knowledge, this study is one of the first to examine the prevalence of household food insecurity and its associations with chronic diseases and HRQoL of older adults living in the community. The strengths of our study include the use of a randomly selected sample that represented the population of Portugal. The instrument used to record information on food insecurity had high internal consistency measures for this sample. Our study also examined a relationship that has been overlooked in older populations: the association between management of chronic diseases and food insecurity. The results of this study contribute to understanding the interrelated issues that affect health status and disease evolution.
CONCLUSIONS
In conclusion, our results have implications for clinical practice and public policy development. For an increasingly aging population, the greater odds of food insecurity among households of older adults and the associated factors indicate the importance of considering this problem as one of the main public health challenges.
Ensuring that older adults have enough food to meet their needs may be an important way to help them enjoy good health and remain active while aging. Food insecurity in older adults is an undesirable problem that requires additional attention. Food insecurity is associated with higher odds of chronic disease, poor self-management of chronic disease, and lower HRQoL. Because it is determined by economic factors, food insecurity is socially and ethically unacceptable. Therefore, during a period marked by significant financial stress, implementation of strategies aimed at ensuring the food security of the aging population is needed. We suggest that further evaluation and monitoring studies should be performed and that further investigation using longitudinal data is needed to determine the health consequences of food insecurity among older adults. These results would assist health professionals and policymakers in better understanding the barriers to achieving improved health in this population.
ETHICS STATEMENT
The EpiDoC 3 study was performed according to the principles established by the Declaration of Helsinki, as revised in 2013 in Fortaleza. The study was reviewed and approved by the National Committee for Data Protection and by the NOVA Medical School Ethics Committee. Participants provided informed consent to participate in all phases of the study. | 2018-07-12T13:05:26.295Z | 2018-07-12T00:00:00.000 | {
"year": 2018,
"sha1": "692b4a55b98c372e9c116043dc183577e3201b1d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2018.00203/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fdc701db29d7c428ca762b23548e8879b782e68",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
} |
228903739 | pes2o/s2orc | v3-fos-license | Effect of the ethanolic extract and essential oil of Ferulago angulata (Schlecht.) Boiss. on protein, physicochemical, sensory, and microbial characteristics of probiotic yogurt during storage time
Abstract Background The use of functional foods, such as probiotic products, is important due to their health benefits against various diseases. Phenolic and aromatic compounds originating from medicinal plants can contribute to the growth of probiotic bacteria. Methods The ethanolic extract (0.2% and 0.4%) and essential oil (0.01% and 0.03%) of Ferulago angulata (FAEE and FAEO, respectively) were added to probiotic yogurt (Lactobacillus acidophilus and Bifidobacterium bifidum bacteria) to investigate their effects on the survival of probiotic bacteria during storage time (21 days) and to assess the yogurt's physicochemical, protein, and organoleptic properties. Results Upon increasing the concentration of FAEE and FAEO, the total phenol content, acidity, viscosity, and water absorption of the yogurt treatments increased, while the pH, syneresis, and solubility showed a decreasing trend (p < .05). Also, adding 0.01% FAEO and 0.2% FAEE improved the organoleptic properties of yogurt (p < .05) compared to the control treatment. The survivability of the investigated probiotic bacteria showed a decreasing trend during storage in all treatments, but at the end of the study, the number of both probiotic bacteria in all treatments was significantly higher than that of the control samples. Conclusion Based on the results of the protein, physicochemical, microbial, and sensory tests of the herbal probiotic yogurts, the addition of 0.03% essential oil was the most suitable treatment for achieving the aims of the research.
| INTRODUCTION
Probiotics are live microorganisms which, when administered in adequate amounts, confer a health benefit on the host (FAO/ WHO, 2001). The popularity of probiotics has continuously grown, and various probiotic food products have been marketed, including probiotic yogurts (Sarvari et al., 2014).
Probiotic yogurt is a yogurt product supplemented with adjuvant microorganisms that exert probiotic effects. There are numerous advantages associated with consuming fermented dairy products containing probiotic bacteria (Aswal et al., 2011). However, to deliver their health benefits, probiotics must be present in food products above a threshold level (>6 log cfu/g) at the time of consumption in order to survive the passage through the upper and lower parts of the gastrointestinal (GI) tract (Marinaki et al., 2016). Nevertheless, during the storage of probiotic products, the survivability of these bacteria shows a decreasing trend due to several factors, for example, the low pH of fermented foods, hydrogen peroxide produced by some lactobacilli, and high oxygen content (Kim et al., 2019;Sarvari et al., 2014). The most commonly used probiotic supplements contain species of Lactobacillus and Bifidobacterium, which are part of the healthy human intestinal microbiota (Nashaat AL-Saadi, 2016).
Herbal products (spices, essential oils, and extracts) have been used as a source of functional flavoring agents (Azizkhani & Parsaeimehr, 2018), bioactive antioxidants, and other compounds, such as phenolic compounds, and can be incorporated as nontraditional additives in fermented milk products, including yogurt (Mahmoudi et al., 2016). There are several studies about the health benefits of herbs, including antimicrobial, antioxidant, anti-inflammatory, and anticarcinogenic properties (Azizkhani & Parsaeimehr, 2018).
Combining probiotics with herbal products may provide further antimicrobial-therapeutic properties. However, as herbs are antimicrobials, they may affect the viability of probiotic microorganisms.
In vitro studies testing herbs on the growth of selected probiotics demonstrated that herbal products significantly enhance the growth of probiotics while inhibiting pathogens (Be et al., 2009;Sutherland et al., 2009). Also, the use of herbal extracts can exert a strong effect on food properties, including structural, functional, and nutritional changes in proteins. Several factors can influence the action and reaction of phenolic compounds, most notably pH, protein type and concentration, and the structure of phenolic compounds (Ozdal et al., 2013). One way to enhance the viability of probiotic bacteria is to add medicinal plants to dairy products; this can increase the viability of these bacteria and the shelf life of the products without adversely affecting their organoleptic properties (Yerlikaya, 2014).
Ferulago angulata, known as Chavir (Azarbani et al., 2014), is a plant native to some parts of Iran (Sodeifian et al., 2011). The genus Ferulago belongs to the Apiaceae family and is used in folk medicine for its sedative, tonic, digestive, and antiparasitic effects (Taran et al., 2010). There have been some published reports on its significant antibacterial, antioxidant, and antidiabetic properties; traditionally, it was added to dairy products and oil ghee as a strong preservative to prevent decay and increase shelf life, besides adding a pleasant taste to them (Alizadeh et al., 2019;Azarbani et al., 2014).
This study aimed to investigate the effect of the ethanolic extract and essential oil of F. angulata on the protein, physicochemical, microbial, and sensory properties of probiotic yogurt and the viability of probiotic bacteria during storage time.
| MATERIALS AND METHODS
The probiotic cultures (Lactobacillus acidophilus and Bifidobacterium bifidum (Bb-12)) were obtained from Chr. Hansen, LTD, Denmark, and directly added to milk. All the microbial media and chemical materials were purchased from Merck, Germany.
| Plant collection
The aerial parts of F. angulata subsp. were collected in July 2019 from Rayen Mountains, Kerman Province, Iran. The plant was identified and authenticated by the Agricultural Research and Promotion Center of Kerman.
| Extraction procedure
To extract the essential oil (EO), the aerial parts of F. angulata were air-dried at ambient temperature in the shade; 150 g of the dried material was distilled in a Clevenger-type apparatus for 3 hr, and the extracted FAEO was dried over anhydrous sodium sulfate and stored at 4°C until analysis (Javidnia et al., 2006).
To prepare the ethanolic extract (EE), the air-dried parts of F. angulata were pulverized into powder. The dried powder (30 g) was extracted by maceration in ethanol (EtOH) at room temperature, and the solvent was evaporated from the combined extracts using a rotary evaporator (Mottaghipisheh et al., 2014).
| Analysis method
To analyze the components of FAEE and FAEO, a gas chromatograph (Agilent 7890A) coupled with a mass spectrometer (Agilent 5975C) (GC/MS) and equipped with an HP-5MS column (30 m in length, 0.25 mm internal diameter, 0.25 µm film thickness) was employed. The temperature profile was as follows: at first, the oven temperature was held at 45°C for 1 min and then increased to 300°C at a rate of 5°C/min. The helium carrier gas flow rate was 1 ml/min (Sodeifian et al., 2011). The constituents were identified by comparison of their retention indices relative to (C7-C20) n-alkanes and by comparison of their mass spectra with those of the internal reference mass spectra library (NIST and Wiley). The percentage of volatile compositions was calculated from the GC peak areas (Azarbani et al., 2019).
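As an illustration of the retention-index comparison described above, the sketch below computes linear (van den Dool-Kratz) retention indices, the form usually applied to temperature-programmed runs; the retention times are placeholders, not values measured in this study.

```python
def linear_retention_index(t_x, alkane_times):
    """Linear retention index of a peak eluting at t_x (min), given a dict
    mapping n-alkane carbon number -> retention time (min), e.g. C7-C20."""
    carbons = sorted(alkane_times)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_times[n], alkane_times[n_next]
        if t_n <= t_x <= t_next:
            return 100 * (n + (t_x - t_n) / (t_next - t_n))
    raise ValueError("peak elutes outside the alkane calibration window")

# Placeholder retention times (min) for three n-alkanes
alkanes = {9: 7.4, 10: 9.8, 11: 12.3}
print(linear_retention_index(10.9, alkanes))  # ~1044, i.e. between C10 and C11
```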
| Disk diffusion antibacterial activity
The antibacterial activity of the samples was evaluated against Bacillus subtilis, Staphylococcus aureus, and Escherichia coli by the disk diffusion method (Azarbani et al., 2014). For this purpose, the agar diffusion method was used. The bacteria were cultured for 24 hr on Mueller Hinton agar, and a suspension was prepared at 0.5 McFarland turbidity (OD 625 nm = 0.1) in Mueller Hinton broth. Then, 5 ml of each bacterial suspension was cultured with the spread plate method using a sterile swab, and blank disks containing 2,560 μg/ml of each EE/EO diluted with DMSO were placed on the culture medium. Subsequently, the inhibitory zone diameter was measured after 24 hr of incubation at 37°C. A tetracycline disk was used as the control disk (Moghtader et al., 2013).
| Minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) test
The MIC and MBC of FAEE and FAEO were determined by the tube dilution method described by Tabatabaei Yazdi et al. (2014) with some modifications. The bacterial suspensions were prepared in the same way as for the disk diffusion test. Briefly, the bacterial dilutions were prepared in nine sterile tubes. Eight tubes were used for serial dilution and one for control. All the bacteria were incubated at 37°C for 48 hr. After incubation, the tubes were examined for turbidity caused by the growth of inoculated microorganisms. All tubes with no growth were sampled and cultured to determine the MBC. The tubes containing the lowest concentrations of FAEE and FAEO for which no growth was observed in the relevant plate were considered as the MBC (Tabatabaei Yazdi et al., 2014).
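To make the two-fold serial dilution concrete, the short sketch below generates the concentration series for the eight dilution tubes, assuming the dilution starts from the 2,560 μg/ml stock mentioned for the disk assay (an assumption, since the starting concentration for the tube test is not stated).

```python
def twofold_series(stock_ug_per_ml, n_tubes):
    """Concentrations (ug/ml) in successive two-fold dilution tubes."""
    return [stock_ug_per_ml / 2 ** i for i in range(1, n_tubes + 1)]

# Assumed 2,560 ug/ml stock diluted across eight tubes
print(twofold_series(2560, 8))  # 1280, 640, 320, 160, 80, 40, 20, 10 ug/ml
```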
| Yogurt preparation
Milk was mixed with skim milk powder (2%) and milk protein concentrate (0.5%) to standardize the fat (3.2%) and protein content to the desired level (10.5 w/w; Lee & Lucey, 2010). It was pasteurized at 90°C for 5 min and cooled to 43°C. Then, the yogurt starter (0.6 gr) and probiotic bacteria (0.4 gr) were added to the milk (800 gr; EL Omari et al., 2020). To prepare the herbal probiotic yogurt, different concentrations of FAEE and FAEO were added to the milk (0.2% and 0.4% for FAEE, 0.01% and 0.03% for FAEO) in 100 g packages and then incubated at 43°C until reaching a pH value of 4.6. They were subsequently cooled down to 4°C (Sadeghi et al., 2017). The probiotic yogurt without FAEE and FAEO was selected as the control sample. After production, the probiotic yogurt samples were stored in a refrigerator during the 21-day storage period until analysis.
| Determination of total phenolic compounds (TPC)
The TPC was determined using the Folin-Ciocalteu reagent. Briefly, 1 ml of the yogurt extract (a mixture of 10 gr of yogurt sample with 2.5 ml of distilled water) was mixed with 1 ml of 95% ethanol, 5 ml of distilled water, and 0.5 ml of the Folin-Ciocalteu reagent, and the contents of the tube were mixed thoroughly. Then, 1 ml of 1N 50% Na2CO3 was added, and the sample was incubated in the dark for 120 min at room temperature. Finally, the absorbance was measured at 725 nm with a spectrometer.
| FTIR analysis
Infrared (IR) or Fourier-transform infrared (FTIR) spectroscopy has a large application range, from the analysis of small molecules or molecular complexes to the analysis of cells or tissues. It has also been increasingly applied to the study of proteins. This concerns the analysis of protein conformation, protein folding, and molecular details from protein active sites during enzyme reactions using reaction-induced FTIR difference spectroscopy (Berthomieu & Hienerwadel, 2009). Herein, a Bruker Tensor 27 FTIR spectrometer equipped with a KBr beam splitter and DLaTGC detector was utilized. The samples were placed onto a silicon sample carrier and left to dry in ambient air for 30 min prior to data collection.
| Solubility of yogurt protein
The method used by Brückner-Gühmann et al. (2019), with some modifications, was adopted to analyze the solubility of the yogurt protein samples. The yogurt samples were suspended at a concentration of 5% (w/w) in distilled water by magnetic stirring at room temperature for 1 hr. The pH was adjusted as required to pH 4 with 1N HCl. The suspension was centrifuged at 10,000 g for 10 min, and the protein content in the supernatant, as well as in the suspension before centrifugation, was determined.
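Written out as a formula (a reconstruction from the procedure above, since the original equation is not reproduced in the text), protein solubility is the protein recovered in the supernatant relative to the total protein in the suspension before centrifugation:

$$\text{Solubility (\%)} = \frac{C_{\text{protein, supernatant}}}{C_{\text{protein, suspension}}} \times 100$$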
| Water absorption capacity
The water absorption capacity (WAC) of yogurt protein was determined using the protocol described by Rodríguez-Ambriz, Martínez-Ayala, Millán and Davila-Ortiz (2005). The WAC (%) of the sample is calculated by dividing the weight of the water absorbed by the weight of the protein sample (Al-Shamsi et al., 2018).
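The corresponding relation can be written as below; whether it is reported as a percentage or as grams of water per gram of protein is a convention the text does not fully specify, so the factor of 100 should be treated as optional:

$$\text{WAC} = \frac{m_{\text{water absorbed}}}{m_{\text{protein sample}}}\;\left(\times\,100 \text{ if expressed as \%}\right)$$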
| Determination of the pH and titratable acidity of yogurts
The pH of the yogurts was measured using a pH-meter 766, and titratable acidity was determined by titration with 0.1 mol/L NaOH using phenolphthalein as an indicator, and was expressed as g of lactic acid per 100 g of yogurt (Marinaki et al., 2016).
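A hedged sketch of the usual conversion from titration volume to acidity expressed as lactic acid, assuming a lactic acid milliequivalent weight of 0.090 g (standard for lactic acid, but not stated explicitly in the text):

$$\text{Acidity}\left(\frac{\text{g lactic acid}}{100\ \text{g}}\right) = \frac{V_{\text{NaOH}}\,(\text{ml}) \times 0.1\,(\text{mol/L}) \times 0.090\,(\text{g/mmol}) \times 100}{m_{\text{yogurt}}\,(\text{g})}$$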
| Syneresis measurement
Twenty-five grams of unstirred yogurt was spread evenly on a Whatman No. 1 filter paper in a funnel. After 2 hr at 4°C, the volume of the serum isolated from yogurt in cc was recorded and expressed as the rate of syneresis (Ashrafi yourghanloo & Gheybi, 2019).
| Viscosity measurement
The viscosity of yogurt samples was determined at 4°C using spindle number 5 at a shear rate of 60 rpm (Ashrafi yourghanloo & Gheybi, 2019).
| Sensory evaluation
The sensory evaluation was conducted through consumer taste panels using a 5-point hedonic scale (1 = least acceptable, 5 = extremely good). An eight-person panel was used, and the sensory evaluation of the yogurt was performed using the general scoring method, obtained by multiplying the scores given to the sensory indices by the relevant coefficients. The final evaluation indicator is the overall evaluation, and the maximum sum of sensory ratings is 50 (Sekhavatizadeh et al., 2015).
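A minimal sketch of the weighted scoring described above; the attribute names and coefficients are hypothetical (chosen so that the maximum total is 50, as stated), since the actual coefficients are not given in the text.

```python
def overall_sensory_score(scores, weights):
    """Weighted total: 5-point hedonic scores multiplied by attribute coefficients."""
    assert scores.keys() == weights.keys()
    return sum(scores[a] * weights[a] for a in scores)

# Hypothetical attributes and coefficients; the weights sum to 10, so a score
# of 5 on every attribute gives the stated maximum of 50.
weights = {"flavor": 4, "oral_texture": 2, "nonoral_texture": 2, "appearance": 1, "odor": 1}
scores = {"flavor": 4, "oral_texture": 5, "nonoral_texture": 4, "appearance": 5, "odor": 4}
print(overall_sensory_score(scores, weights))  # 43 out of a possible 50
```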
| Survival of probiotic bacteria
The selective count of B. bifidum was performed using TOS-propionate agar medium supplemented with mupirocin lithium salt and sodium propionate. The plates were incubated anaerobically at 37°C for at least 72 hr. The MRS/CL/CIP agar medium containing clindamycin and ciprofloxacin was utilized for the selective count of L. acidophilus. After incubation, viable numbers were enumerated using the surface culture technique (Sarvari et al., 2014).
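The plate counts can be converted to viable numbers with the usual dilution relation; the sketch below is illustrative, and the plated volume and dilution factor are placeholders rather than values reported in the text.

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml, sample_mass_g=1.0):
    """Viable count (CFU/g) from a surface-plated decimal dilution."""
    return colonies * dilution_factor / (plated_volume_ml * sample_mass_g)

# e.g. 85 colonies on a plate spread with 0.1 ml of a 10^-5 dilution
print(cfu_per_gram(colonies=85, dilution_factor=1e5, plated_volume_ml=0.1))  # 8.5e7 CFU/g
```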
| Mold and yeast test
To count the number of molds and yeasts, after 21 days of storage in the refrigerator, each probiotic yogurt sample was cultured in the YGC medium and the plates were aerobically incubated in a refrigerated incubator at 25°C for 3-5 days. After this period, the colonies were counted (El Omari et al., 2020).
| Statistical analysis
The SPSS statistical software version 17 was used to analyze the data. One-way analysis of variance (ANOVA) was performed to compare the means, and the Duncan test was used to examine the difference between the means at p < .05. The Excel software was used to plot the curves, and the hedonic five-point method was adopted to analyze the sensory data.
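A hedged sketch of the comparison of treatment means: one-way ANOVA with SciPy, followed by Tukey's HSD shown purely as a stand-in post hoc test, since Duncan's multiple range test is not available in SciPy (SciPy ≥ 1.8 is assumed for tukey_hsd). The three groups of triplicate measurements are invented for illustration only.

```python
from scipy import stats

# Hypothetical triplicate measurements (e.g., viscosity in Pa.s) for three treatments
control  = [0.30, 0.32, 0.31]
faee_04  = [0.42, 0.44, 0.43]
faeo_001 = [0.50, 0.52, 0.51]

f_stat, p_value = stats.f_oneway(control, faee_04, faeo_001)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparison (Tukey HSD shown as a stand-in for Duncan's test)
print(stats.tukey_hsd(control, faee_04, faeo_001))
```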
| GC-MS analysis
The essential oil yield was 0.6% (v/w) based on the dry weight of the samples. The chemical composition of the FAEO is reported in Table 1; 14 components, representing 72.42% of the oil, were identified. The major compounds of the oil were (Z)-beta-ocimene (14.22%), alpha-pinene (12.61%), and germacrene B (10.52%).
In the study by Rezazade et al. (2003), F. angulata volatile oil from plant material dried in the shade was extracted by the water vapor distillation method and examined by gas chromatography coupled with a mass spectrometer detector, and its components were identified. In that work, 33 compounds constituting 89.7% of the oil were identified, of which 77.1% were monoterpenes and 12.6% were sesquiterpenes. The main identified compounds were alpha-pinene, bornyl acetate, and cis-ocimene, which is almost consistent with the present study.
| Inhibitory zone diameter
The inhibitory zone diameters obtained by the disk diffusion test are presented in Table 2. The evaluation of pH changes during sample storage showed the effect of essential oil concentration on pH variation, such that the pH decreased with increasing essential oil concentration.
Also in a study by Sadeghi, Akhondzadeh Basti, Noori, Khanjari and Partovi (2012) on green cumin essential oil, the best concentration of essential oil in terms of the effect of inhibiting the growth of staphylococcus and creating the desired taste in the product was 0.015%, which is consistent with the present study.
| MIC and MBC analysis
The results of the MIC and MBC tests are given in Table 4.
| FTIR test
At a pH value close to that of milk, the solubility of OPC and OPI was found to be around 30%.
| Analysis of the solubility and WAC of yogurt protein
In all probiotic yogurts containing FAEE and FAEO, the WAC of the protein increased compared to the control treatment (p < .05).
The addition of FAEE compared to FAEO significantly increased the WAC of protein in probiotic yogurts (p < .05). The highest protein WAC was measured in probiotic yogurts containing 0.4% of the extract (4.66 ± 0.80).
WAC is related to the ability of a protein to hold water in the yogurt gel structure. According to the results of Kim et al. (2019), who studied the effects of lotus (Nelumbo nucifera) leaf on the quality and antioxidant activity of yogurt during refrigerated storage, the WAC of LL yogurts was higher than that of the control during storage (p < .05).
| pH, titratable acidity, viscosity, and syneresis of yogurts
According to Table 6, on the first and the 21st days, the pH of the different herbal probiotic yogurt treatments did not differ significantly (p < .05). Over time, the pH change was significant only in the control treatment (t = 3.062 and p = .038), in which the pH decreased from 4.25 ± 0.10 to 4.00 ± 0.10. In the other treatments, the pH changes over time were not significant.
The control treatment had the highest acidity on the first day (120 ± 1D) and on the 21st day (126 ± 2D; p < .05). The addition of FAEE was more effective in increasing the acidity of probiotic yogurts than FAEO (p < .05). The addition of FAEO from 0.01% to 0.03% caused a decrease in acidity (p < .05), but there was no significant difference in the acidity of probiotic yogurts containing 0.2% and 0.4% of the extract (p < .05). The highest acidity was measured on the first and 21st days in probiotic yogurts containing 0.4% FAEE (Table 6).
In all probiotic yogurts containing the essential oil and extract of the F. angulata plant, the viscosity increased compared to the control treatment. The addition of FAEO was more effective in increasing viscosity compared to FAEE (p < .05). There was no significant difference in the viscosity of probiotic yogurts containing 0.2% and 0.4% F. angulata extract (p < .05). The treatment containing 0.01% essential oil had the highest viscosity on the first and 21st days (0.51 ± 0.02 Pa·s and 0.37 ± 0.07 Pa·s). Over time, viscosity was significantly reduced in all probiotic yogurts containing the extracts and essential oils of F. angulata (p < .05; Table 6).
The rate of syneresis in treatments containing 0.01 and 0.03% FAEO was zero, while the control treatment had the highest (1.55 ± 0.07 cc) syneresis rate. Over time, the syneresis rate was significantly increased in all probiotic yogurts containing extracts and essential oils of F. angulata (p < .05; Table 6).
Adding plant extracts to probiotic yogurt leads to a significant increase in acidity compared to the control sample, because fermenting yogurt with plant extracts increases the metabolic activity of the yogurt bacteria and raises acidity through the production of organic acids by lactic acid bacteria. In addition, over time, the acidity of all treatments increases significantly, which is due to the increase in storage time and the continuation of lactose fermentation by the starter and probiotic bacteria, increasing acidity through the accumulation of acids such as lactic acid and formic acid (Ghalemousiani et al., 2017). In this study, the relationship between pH and acidity in the different probiotic yogurt treatments was inverse; in other words, as pH increased, acidity decreased.
In this study, the viscosity of the product increased with the addition of the extract and essential oil. It has also been reported that adding 5% and 10% of dill extract to yogurt decreases and increases the amount of syneresis, respectively, compared to the control sample.
| Analysis of sensory evaluation
According to the results, the oral texture of probiotic yogurts containing 0.4% of extract and 0.03% of FAEO was not significantly different from the control treatment (p < .05). In these treatments, the oral texture score was higher than that of probiotic yogurts containing 0.2% extract and 0.01% FAEO (p < .05). In evaluating the sensory characteristics of the nonoral texture, after the control treatment, the highest score was obtained by probiotic yogurts containing 0.01% and 0.03% essential oil of F. angulata. In terms of the general acceptance of the evaluators, the most desirable treatment was the probiotic yogurt containing 0.03% of FAEO. A comparison of the sensory properties of the herbal probiotic yogurts is shown in Figure 3.
Azizkhani and Parsaeimehr (2018) worked on probiotic survival, antioxidant activity, and sensory properties of yogurt flavored with herbal (peppermint, basil, and zataria) essential oils and concluded that peppermint and basil samples showed both good antiradical activity and sensory acceptability. In the sensory tests, yogurt samples were evaluated for appearance, flavor, texture, and overall acceptability. The mean scores for the appearance of basil and peppermint treated yogurt were higher than the control yogurt. The mean scores for the appearance of probiotic yogurt with basil and peppermint were within the acceptable range, but there was no significant difference between the types of yogurt (p > .05). Also, the mean scores for zataria yogurt were significantly lower than the control (p < .05).
| Survival of probiotic bacteria
The microbiological analysis of the yogurt samples in Tables 7,and 8 demonstrates the viability of the probiotic culture during storage.
The survival of L. acidophilus in the probiotic yogurt containing the extract and essential oil of the F. angulata plant was significantly higher than in the control treatment (p < .05). The highest survival rate of L. acidophilus was measured on the first and 7th days in probiotic yogurts containing 0.1%, and on the 14th and 21st days in probiotic yogurts containing 0.03%, respectively. In general, L. acidophilus decreased in all treatments over time. In the control probiotic yogurt and the yogurt containing 0.2% FAEE, the percentage change in survival on the 14th and 21st days was not significant (Table 7).
The survival of B. bifidum in probiotic yogurts containing the extracts and essential oils of the F. angulata plant was significantly higher than in the control treatment (p < .05). On the first, 7th, and 21st days, the highest survival of B. bifidum was measured in probiotic yogurts containing 0.03% of FAEO, and on the 14th day in probiotic yogurts containing 0.4% of FAEE. In general, the viability of B. bifidum decreased over time in all treatments, with the lowest counts measured on the 21st day and the highest on the first day (Table 8).
According to the tables, it can be concluded that the best time to consume the yogurts is until the 14th day, because the number of probiotic bacteria falls below 10⁶ CFU/ml after that. Research also shows that, in general, the rate of loss of B. bifidum is higher than that of L. acidophilus and other lactic acid probiotics, and its growth and proliferation rate are lower in the product. This can be attributed to the higher sensitivity of these bacteria to oxygen, high acidity, and low pH (Marhamatizadeh et al., 2010).
| Analysis of mold and yeast count
According to Table 9, the highest number of mold and yeast counts (1.9 × 10³ ± 5 × 10 CFU/g) was measured in the control treatment.
With increasing concentrations of FAEE and FAEO, the mold and yeast count of the probiotic yogurts decreased. The lowest mold and yeast counts (1.8 × 10² ± 3 × 10 CFU/g) were measured in probiotic yogurts containing 0.4% FAEE. Adding FAEE and FAEO to yogurt reduced the mold and yeast counts due to their antimicrobial composition (p < .01). Generally, according to the results of the protein, physicochemical, microbial, and sensory tests of the herbal probiotic yogurts, it can be concluded that adding 0.03% essential oil is the best treatment and, according to the number of probiotic bacteria, the best time to consume it is until the 14th day.
ACKNOWLEDGMENT
The Pegah Pasteurized Milk Factory and Iranian Food Laboratory are appreciated for their cooperation.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest regarding the publication of this paper.
TABLE 9: Results of mold and yeast counting of herbal probiotic yogurts (results shown as mean ± standard deviation). | 2020-11-05T09:08:23.244Z | 2020-11-04T00:00:00.000 | {
"year": 2020,
"sha1": "d19d38bb2a9a2024ddcba9c05bdca74a7f5bb316",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.1984",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e6776b72a8aeacd37ce37f189a4f040c39a3de5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
202194916 | pes2o/s2orc | v3-fos-license | THE ECOLOGICAL STATE OF RURAL RESIDENTIAL AREAS OF KYIV REGION IN THE INTENSIVE LIVESTOCK FARMING ZONE
The ecological state of atmospheric air, soil and water of rural residential areas in the intensive livestock zone is estimated. Exceedances of the MPC in the air were detected for the average daily content of nitrogen dioxide (NO2) in the Dymer village (1.6 times) and of hydrogen sulfide (H2S) in three human settlements: Gavrylivka (1.25 times), Gostomel (2.5 times) and Dymer (1.25 times). Also, exceedances of the MPC in water were detected for the nitrate content in the Rykun village (3.5 times) and the Tarasivschina village (5.8 times). In the human settlements Tarasivschina and Sinyak, non-compliance of the water of household wells with the State sanitary norms and regulations 2.2.4-171-10 was found with respect to the content of E. coli. It was found that the key reasons for environmental contamination are the following: non-compliance of the population with the requirements of the State Building Codes B.2.2-12-12:2018, eutrophication processes of surface water bodies, industrial emissions, and excessive application of poultry manure from poultry farms into the soil of households.
Introduction. Intensive livestock systems in Ukraine are characterized by the use of significant land resources, a high density of livestock per unit area of agricultural land (up to tens of thousands of heads per 100 hectares), consumption of a significant amount of natural resources, and manure production and emissions [1, p. 132]. Livestock farms can produce more waste than can be utilized locally, which poses serious risks for environmental pollution and human health in adjoining residential areas.
The negative ecological consequences of intensive livestock farming in Ukraine include the following: contamination of surface water bodies, soils and groundwater by production wastes; the formation of significant volumes of sewage water saturated with xenobiotics; atmospheric air pollution by harmful gases and dust emissions; microbiological contamination of soil and air; and the spread of ectoparasites [2; 3, p. 51; 4].
The reasons for the negative ecological state of rural residential areas in the intensive livestock farming zone may include violations of waste management technologies and irrational use of manure nutrients by large livestock enterprises, as well as non-compliance of people's households with the requirements of the State Building Codes.
Since more than 60% of agricultural products are produced on the Ukrainian market by private farms, the environmental status of these territories should be monitored at the national level [5, p. 89].
Given the above, it is necessary to monitor residential areas, especially those not previously investigated, for example, those in the intensive livestock farming zone.
Materials and methods. The research was carried out on the rural residential areas of 10 human settlements of Vyshgorod district in Kyiv region, namely, Voronkivka, Dymer, Demydiv, Gostomel, Gavrylivka, Lytvynivka, Rakivka, Rykun, Synyak and Tarasivshchyna, which are situated near the large poultry farm complex LTD "Complex Agromars" that is located in Gavrylivka village.
In order to carry out the analysis, samples of water and soil were selected in accordance with the current State Standard of Ukraine. The chemical analysis of the iron, nitrate and ammoniacal nitrogen content in water samples, and the sanitary and microbiological analysis of water and soil, were conducted by the Vyshgorod Interdistrict Laboratory Researches Department of the State Institution "Kyiv Regional Laboratory Center of the Ministry of Health of Ukraine".
The values of the maximum permissible concentration (MPC) of harmful substances in the atmosphere of human settlements were determined according to the "State sanitary rules for the atmosphere air protection of residential areas (from chemical and biological substances contamination)".
The ecological condition of well water (content of Fe, NH 4 , NO 3, and E. coli) was evaluated according to the requirements of the State sanitary norms and regulations 2.2.4-171-10 "Hygienic requirements for drinking water intended for human consumption".
The sanitary-microbiological state of the soil estimated according to the generally accepted methodological recommendations [6,7].
The indices of atmospheric pollution were determined as the ratio of the average daily levels of NO2, SO2, H2S and NH3 in the air to their MPC, expressed in relative units.
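A minimal sketch of this index calculation; the MPC values below are commonly cited Ukrainian average daily limits and should be treated as assumptions, since the paper does not list them explicitly.

```python
# Assumed average daily MPC values (mg/m^3), for illustration only
MPC = {"NO2": 0.04, "SO2": 0.05, "H2S": 0.008, "NH3": 0.04}

def pollution_indices(measured_mg_m3):
    """Ratio of measured average daily concentration to its MPC (relative units)."""
    return {gas: round(c / MPC[gas], 2) for gas, c in measured_mg_m3.items()}

# Illustrative measured concentrations for one settlement
print(pollution_indices({"NO2": 0.064, "H2S": 0.010}))  # {'NO2': 1.6, 'H2S': 1.25}
```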
Results and discussion. According to the data of the State Statistics Service of Ukraine, all types of households of Kyiv region concentrate 3.5% of the cattle, 8.1% of the pigs and 14.4% of the poultry of the total number of agricultural livestock in Ukraine. Therefore, Kyiv region takes the 1st place among the regions according to the pig stock and the 2nd place according to the number of poultry, and correspondingly ranks high by the level of anthropogenic impact on the environment from livestock waste products. A considerable proportion of livestock is concentrated in people's households - on average 52.7% in Ukraine and 25.2% in Kyiv region (Table 1). The analysis of the air pollution level was conducted in the following settlements: Dymer, Gavrylivka, Gostomel, Tarasivshchyna, and Rykun. Exceedances of the MPC in the air were detected for the average daily content of nitrogen dioxide (NO2) in the Dymer village (1.6 times) and of hydrogen sulfide (H2S) in three human settlements: Gavrylivka (1.25 times), Gostomel (2.5 times) and Dymer (1.25 times) (Table 2). The slight excess of the average daily MPC for NO2 content in the air is directly connected with emissions from industrial enterprises, automobile transportation, heat and power plants, and housing and communal services. The sources of the high H2S content in the air of the human settlements may be swamps, eutrophic ponds, and the river Kizka.
The sanitary-microbiological analysis of the soil of private plots in the human settlements of Dymer, Gavrylivka, Demydiv, Rakivka, Tarasivshchyna, Voronkivka and Rykun did not reveal pathogenic microflora.
According to the requirements of the State Building Codes B.2.2-12-12:2018, household outbuildings (sheds for cattle, poultry, and other animals), batch composting grounds, toilet accommodations, garbage cans, and special storages for fertilizers and toxic chemicals should be located at least 20 m from the well. Where these standards are not maintained, all household outbuildings and surrounding sites are a potential source of pollution and of deterioration of the physical and chemical, as well as sanitary and biological, indicators of drinking water quality. The majority of the population is not even aware of, and accordingly does not follow, the State building regulations on the planning and development of territories.
It was found that the drinking water from the water supply of households in Gavrylivka village did not exceed the MPC for Fe and NH4 content (Fig. 1).
Fig. 1. The analysis of drinking water by Fe and NH4 content from the water supply of households in Gavrylivka village of Vyshgorod district of Kyiv region
Most of the investigated households raise livestock and grow crops on their private agricultural plots. Poultry dung and manure are used as fertilizers and are applied to the soil in unreasonably high doses. The residents buy poultry dung from the company LTD "Complex Agromars". The contents of toilet accommodations are also thrown into the gardens and dug into the soil.
The wells of the households are not deep (4-15 m), and the predominant soils of the private agricultural lands have a light granulometric composition, which does not retain the waste products of the vital functions of people and animals; this results in pollution of the well water by nitrates because of their high migration capacity in the soil (Fig. 2).
Sewage from the canalization systems is pumped out and removed to specialized places, but nobody cleans or disinfects the wells. Conclusions. It was determined that the main reasons for environmental contamination of the territory of the investigated rural residential areas are non-compliance with the requirements of the State Building Codes, the eutrophication processes of surface water bodies, industrial emissions, and excessive application of poultry dung into the soil of household plots. In particular, exceedances of the MPC were detected in the atmosphere for the average daily content of nitrogen dioxide and hydrogen sulfide, and in water for the nitrate and E. coli content. | 2019-09-10T20:24:07.755Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "69a7a1938629cc01e9daccea62fc8cc7903dc3e9",
"oa_license": null,
"oa_url": "https://doi.org/10.32851/2226-0099.2019.107.45",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "563dcf04ffac12445924fe8133c8131d711c854d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
155118557 | pes2o/s2orc | v3-fos-license | Permeability Evolution of Shear Failing Chalk Cores under Thermochemical Influence
Development of petroleum reservoirs, including primary depletion of the pore pressure and repressurization during water injection naturally, leads to changes in effective stresses of the formations. These changes impose mechanical deformation of the rock mass with subsequent altering of its petrophysical properties. Besides mechanical compaction, chalk reservoirs on the Norwegian Continental Shelf also seem susceptible to mineralogical and textural changes as an effect of the injecting fluid’s chemical composition and temperature. Understanding such chemical and thermal effects and how they interplay with the mechanical response to changes in effective stresses could contribute to improved prediction of permeability development during field life. This article presents results from mechanical testing of chalk cores of medium-porosity (32%) outcrop chalk (Niobrara Formation, Kansas) in triaxial cells. The experimental setup allows systematic combinations of fluctuating deviatoric stress, temperature (50 and 130 °C), and injecting fluid (calcite-equilibrated sodium chloride, calcite-equilibrated sodium sulfate, and reactive synthetic seawater) intended to replicate in situ processes, relevant to the North Sea chalk reservoirs. Deviatoric loading above yield resulted in a shear failure with a steeply dipping fracture of the core and a simultaneous increase in permeability. This occurred regardless of the brine composition. The second and third deviatoric loadings above yield did not have the same strong effect on permeability. During creep and unloading, the permeability changes were minor such that the end permeability remained higher than the initial values. However, sodium sulfate-injected cores retained most of the permeability gain after shear fracturing compared to sodium chloride and synthetic seawater series at both temperatures. Synthetic seawater-injected cores registered the most permeability loss compared to the other brines at 130 °C. The results indicate that repulsive forces generated by sulfate adsorption contribute to maintain the fracture permeability.
INTRODUCTION
Water-related improved oil recovery (IOR) techniques have a documented effect on hydrocarbon production in naturally fractured chalk reservoirs, and the success story of seawater injection at the Ekofisk field (North Sea, Norway) is closely linked to fracture permeability. With the matrix permeability evaluated in the range of 1−5 mD, the Ekofisk field chalk accommodates a natural fracture system that enhances the total reservoir permeability by a factor of 50. 1 After several years of primary oil recovery by pore pressure depletion, the effective stress (overburden minus pore pressure) increased, causing the compaction mechanisms of the reservoir chalk to resume. This did not only lead to severe seafloor subsidence and challenges for the production facilities but also lead to significant changes in the physical and mechanical properties of the rock. Yet, despite years of compaction, loss of permeability was not detected. 2,3 Sulak 2 stated that, even though matrix permeability declined with matrix compaction, the effect was of no apparent consequence to the effective permeability. This seeming paradox indicated that fracture permeability, in another order of magnitude compared to the matrix permeability, dominates the effective permeability evolution. Additionally, Teufel 3 pointed out that the deviatoric nature of the reservoir stress state seems to govern the permeability of fractures such that steeply dipping fractures aligned with the maximum horizontal stress, as seen at Ekofisk, will suffer the least permeability loss. The study also underlined the different permeability behaviors under hydrostatic conditions, where permeability declines steadily with increasing hydrostatic stress.
This behavior has been clearly demonstrated, 4−7 but hydrostatic tests do not capture permeability changes under deviatoric stresses, which are typical for reservoirs.
Stress anisotropy was also one of the reasons why field-wide implementation of seawater injection at the Ekofisk field in 1987 was remarkably successful. Teufel and Rhett 8 showed that decreasing effective mean stress by water injection while maintaining a large shear stress leads to increasing fracture density and fracture surface area and to significantly increased reservoir permeability.
Maintaining high fracture permeability, on the other hand, is not without challenges as production-related changes such as primary depletion of the pore pressure and repressurization during water injection continuously alter the effective stresses of the successions. 9,10 Such stress history affects the physical properties of porous rocks and their mechanical strength, 11,12 generating a complex porosity−permeability behavior in porous rocks. Studies of fracture permeability response to fluctuating effective stress 13−15 in porous rocks show less permeability changes with each stress cycle, and in any case, the loss of permeability seems irreversible. However, these studies only focus on stress-related permeability and do not incorporate other permeability-altering parameters, specific to carbonates. Ekofisk field observations of continued high compaction rates in certain areas even after repressurization and at constant pressure indicated the activity of other compaction mechanisms than only increasing effective stress and identified the chalk contact with nonequilibrium cold seawater as a likely cause. 16 Field observations and laboratory studies remark that the temperature of the injecting fluid also contributes to continuous changes in the reservoir stress state; cold injecting fluid cools down the rock, causing it to contract and leading to a decrease in effective stress and increased permeability. 8,17 In a study of mechanical behavior of chalk cores exposed to temperature cycling, Voake et al. 18 observed more irreversible strain in chalk cores that experienced temperature fluctuations compared to cores tested at constant temperature.
Additionally, the fluid temperature plays an important role in how the fluid interacts with the reservoir rock. Chalk reservoirs found on the Norwegian Continental Shelf seem susceptible to mineralogical and textural changes, and their mechanical stability is dependent on the injected fluid's chemical composition and temperature. 7,19−26 Studies show that high temperature enhances the rock−fluid interactions and their effects are more obvious than at lower temperature. Particularly, ions such as Mg 2+ and SO 4 2− , which are present in seawater, are responsible for the main chemical and mineralogical alterations in chalk (adsorption, calcite dissolution, and precipitation of new minerals). While seawater-like brines injected at increasing temperatures have a positive effect on the IOR potential, 27 they also reduce the mechanical strength of chalk, affect its elastic properties, 28 and lead to precipitation of permeability-inhibiting minerals such as anhydrite and gypsum. 23 In the above review, most of the research has been conducted on whole cores to study the effect of systematically changing effective stress, temperature, and brine composition and see the resulting impact in terms of rock−fluid interactions, compaction, porosity, and permeability. Kallesten et al. 29 modeled chemical creep compaction in a fractured core subject to hydrostatic stresses but concluded that more experimental data were needed to characterize the fracture. A limited number of studies have considered fractured cores, deviatoric stress, or doing repeated stress cycles. Unique to the present study is that we generate fractured cores under deviatoric stress conditions and conduct repeated stress cycles while flooding inert or reactive brines at different temperatures, relevant to the chalk fields on the Norwegian Continental Shelf. The main aim is to see how different experimental conditions under stress cycles are able to affect the permeability of the system by investigating the interplay between varying parameters (stress state, temperature, and brine chemistry) and their combined effect on permeability evolution in chalk at such conditions. The test setup is designed to replicate the production-related dynamics of a chalk reservoir by considering shear-fractured outcrop chalk cores exposed to cyclic deviatoric stress states while systematically changing either the test temperature or the injected brine. In this way, different combinations of the experimental setup highlight the individual contribution of temperature, brine chemistry, and cyclic deviatoric stress on permeability evolution. The results will help answer the following: • How do deviatoric stress cycles affect permeability?
• Is there any difference in permeability evolution when superimposing temperature or chemistry changes to the deviatoric stress cycles? • Are these individual effects concurring or competing?
A better understanding of coexisting factors that affect reservoir permeability could bridge the gap toward an improved prediction of permeability development during field life.
EXPERIMENTAL PROCEDURES AND METHODS
This article presents results from mechanical testing of 12 chalk cores in triaxial cells. Such tests allow systematic changes of stress, temperature, and injecting fluid, intended to replicate in situ processes at reservoir conditions.
2.1. Sample Set. The sample set consists of 12 outcrop chalk cores (Niobrara Formation, Kansas/Utah) of medium porosity for a chalk (32%, ±1%). Experimental and theoretical studies demonstrate that mechanical strength of chalk is orientationdependent and that chalk deformation behavior differs when main stress is applied parallel or perpendicular to bedding. 30,31 To ensure comparability between the tests, all cores are drilled from the same block collection and in the same direction. Further, the cores are cut to an average length of 75 mm and lathed to a diameter of 38.1 mm. The length−diameter ratio of approximately 2 should accommodate the steeply dipping shear fracture plane typical for deviatoric loading. 32 2.2. Injecting Brines. The injecting brines define three test series: calcite-equilibrated sodium chloride (CE-NaCl), calciteequilibrated sodium sulfate (CE-Na 2 SO 4 ), and reactive synthetic seawater (SSW). By equilibrating the two former brines with calcite, no mineralogical interactions are expected, although surface reactions would be more likely. CE-NaCl is inert in all practical aspects and serves as a standard in comparison to CE-Na 2 SO 4 and SSW brines. The concentrations of NaCl and Na 2 SO 4 are designed to match the ionic strength of SSW ( Table 1). The injection rate is 2 pore volumes (PVs)/day. After testing, the cores are flooded with 4 PVs distilled water (DW) to avoid salt precipitation.
2.3. Porosity and Permeability Calculation. For porosity calculation, the dry shaped cores were saturated with distilled water under vacuum conditions. The initial porosity (φ) is given by the ratio between pore volume (the difference in initial saturated and dry masses (M sat and M dry , respectively) divided by the density of distilled water (ρ dw )) and initial bulk volume (V bulk, i ):

$$\varphi = \frac{(M_{\mathrm{sat}} - M_{\mathrm{dry}})/\rho_{\mathrm{dw}}}{V_{\mathrm{bulk},i}} \quad (1)$$

The effective permeability (k) calculation is based on Darcy's law, 33 assuming a steady laminar fluid flow and symmetric axial and radial deformations:

$$k = \frac{Q\,\mu\,(L_{i} - \Delta L)}{\frac{\pi}{4}(D_{i} - \Delta D)^{2}\,\Delta P} \quad (2)$$

where μ is the fluid viscosity as a function of salinity and temperature (cP; after El-Dessouky and Ettouny 34 ), L i is the initial length of the core (cm), ΔL is the change in core length (cm), Q is the flow rate (cm 3 /s), D i is the initial diameter of the core (cm), ΔD is the mean change in core diameter (cm), and ΔP is the pressure drop over the core during flooding (atm). Uncertainties involved in the permeability calculation include fluid viscosity μ ± 2%, change in core length ΔL ± 0.7%, change in diameter ΔD ± 1%, flooding rate Q ± 2%, and differential pressure ΔP ± 0.075%. Permeability uncertainty Δk was estimated at ±3% by applying the error estimation method shown in eq 3:

$$\Delta k = \sqrt{\left(\frac{\partial k}{\partial x}\Delta x\right)^{2} + \cdots + \left(\frac{\partial k}{\partial z}\Delta z\right)^{2}} \quad (3)$$

where x, ..., z are the measured values of the parameters with their respective Δx, ..., Δz uncertainty. 6 A summary of the core properties before the mechanical testing (length, diameter, dry mass, saturated mass, porosity, permeability, and specific surface area) is listed in Table 2 and marked with index i (initial).
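A minimal sketch of these calculations in code, based on the equation forms reconstructed above (they are inferred from the stated variable definitions, so the sketch is illustrative rather than the authors' exact implementation); the input values are placeholders in the units used in the text.

```python
import math

def porosity(m_sat_g, m_dry_g, rho_dw_g_cm3, v_bulk_cm3):
    """Initial porosity from saturated/dry mass and bulk volume (eq 1)."""
    return ((m_sat_g - m_dry_g) / rho_dw_g_cm3) / v_bulk_cm3

def permeability_darcy(q_cm3_s, mu_cp, l_i_cm, dl_cm, d_i_cm, dd_cm, dp_atm):
    """Effective permeability (darcy) from Darcy's law with deformation-corrected
    core length and diameter (eq 2, as reconstructed)."""
    area_cm2 = math.pi * (d_i_cm - dd_cm) ** 2 / 4.0
    return q_cm3_s * mu_cp * (l_i_cm - dl_cm) / (area_cm2 * dp_atm)

def relative_uncertainty(*rel_errors):
    """Quadrature sum of relative uncertainties (eq 3 applied to a product/quotient)."""
    return math.sqrt(sum(e ** 2 for e in rel_errors))

# Placeholder inputs (illustrative values only)
print(porosity(155.0, 131.0, 1.0, 75.0))                           # ~0.32
print(permeability_darcy(6.3e-4, 0.6, 7.5, 0.0, 3.81, 0.0, 0.9))   # effective permeability, darcy
print(relative_uncertainty(0.02, 0.007, 0.01, 0.02, 0.00075))      # ~0.03, i.e. about +/-3%
```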
Triaxial Cell
Tests. The triaxial cell is equipped with an outer heating jacket and a regulating system (Omron E5CN) with precise proportional integral derivative (PID) temperature control (±1.0°C). The system includes two Quizix QX-2000HC pumps that control the axial and confining pressures independently and a fluid injection pump (Gilson 307HPLC) as well as a backpressure regulator that controls the pore pressure.
The cores are saturated with distilled water prior to cell mounting; they are isolated from the oil bath in the confining chamber by a heat shrinkage sleeve (fluorinated ethylene propylene, FEP; 0.5 mm in wall thickness). An extensometer surrounds the core at mid-length and measures the changes in diameter throughout the test (radial strain). Changes in the cores' length (axial strain) are monitored by an external axial linear variable displacement transducer (LVDT) placed on top of the cell piston.
Each series is performed at 50°C, below the threshold for sulfate effects on chalk 35 and at 130°C (reservoir temperature at Ekofisk) to trace the thermochemical effects on shear failing chalk cores injected with various brines. The low temperature tests require an initial heating to 130°C to make sure that the shrinking sleeve is tight and uniform around the core, comparable to the tests at high temperature. During this stage, the injecting fluid is distilled water and brine injection only starting after the system had cooled down to 50°C to avoid rock−brine interactions at high temperature.
Throughout the test, the confining pressure and pore pressure are constant at 1.2 and 0.7 MPa, respectively. In this way, the changes in effective stresses are a function of the increasing or decreasing axial stress. The cell piston is lowered carefully, and after core contact, it applies a pressure (0.5 MPa) that slightly overcomes the piston friction but causes negligible core deformation. The deviatoric tests start only when the triaxial cell has reached the right test temperature, the cores are brineflooded with at least 2 PVs, the differential pressure is stable, and the system is in equilibrium (axial, radial strain rates close to 0).
2.5. Stress Cycles. The cores undergo the same stress sequence three times. The first step is axial loading at a constant injection rate (0.01 mL/min) above yield (defined by initiation of nonlinear stress−strain relationship) until shear failure (fracture formation) at which the value of axial stress drops. Immediately after, the axial pump is set to apply a constant
pressure, slightly below the failure point, allowing the core to deform (creep) over 3 days. The axial pressure then returns to the starting point (0.5 MPa) at the same rate as loading. The second and third stress cycles, following the same procedure as the first, start after the system has reached equilibrium. In the second and third cycles, the creep phases are shorter (1 day).
RESULTS
Deviatoric loading above yield resulted without exception in a steeply dipping shear fracture along the core (Figure 1, left). This is typical for compressive triaxial tests at low confining pressure, which allows the core to expand radially and facilitates well-organized shear failure between grains. 36 Figure 1 shows experiment KE20 (CE-NaCl series, 50°C) to exemplify the typical permeability response (green curve) to first deviatoric loading above yield (red curve). Deviatoric loading within the yield curve (where the stress−strain relationship is linear) generally caused permeability decreases in all cores, on average, of 10% in the first cycle and 5% during the second and third cycles compared to permeability values at the beginning of the respective cycles. The shear fracture event, recognized by a drop in axial stress, initiated an abrupt increase in permeability, and generally, this occurred during the first loading. However, although all cores experienced the same deformation during the first loading, the mechanical strength of the cores and the degree of permeability rise differed. These differences should indicate the roles of injecting brine composition or temperature, the only two differing parameters in the test setup.
3.1. CE-NaCl Injection. Figure 2 shows permeability versus axial stress during deviatoric loading (left column) and permeability evolution in time during creep (right column) for the CE-NaCl test series. A clear distinction is seen between low temperature (blue curves) and high temperature (red curves) tests. As seen in Figure 2a, both cores tested at 50°C (KE20 and KE4, blue lines) fractured at lower axial stress (11−12 MPa) than those tested at 130°C (red lines, KE44 and KE45, 15−17 MPa). The shear fracture caused a drop in axial stress together with a clear rise in permeability (45% on average) and occurred mainly in the first loading. Test KE4 (50°C), however, experienced a brief, unexpected pressure fluctuation in the beginning of the first loading at 5 MPa (Figure 2a), which might be related to an instrument error. This may have temporarily stabilized the core so that, at the end of the first deviatoric loading, the drop in axial stress from 11 to 9 MPa was an effect of microcracking, a precursor to fracturing. 37 A more pronounced shear fracture developed during the second loading cycle, leading to 1.75% axial deformation and 0.63% radial expansion, together with a permeability rise of 36% (Figure 3b, blue triangles).
The mechanical strength decreased in all tests in the second loading (Figure 2b,c). The maximum axial stress was 3−4 MPa lower than in the first loading at 130°C, while at 50°C the difference was approximately 5 MPa. Permeability increased in all tests after the second loading but, unlike in the first loading, by less than 10% (excluding KE4).
Further, after the third loading, permeability decreased by 10% at 50°C. Figure 2c (blue lines) shows that, after an initial decline during loading, the measured permeability remained constant. At 130°C, on the other hand, it remained almost unchanged throughout the loading (Figure 2c, red lines).
The axial strain was highest when the fracture occurred, varying between 0.75 and 1.75% (Table 3); afterwards, the cores experienced on average 0.40% axial deformation at both temperatures. Radial strain does not correlate with the change in permeability: more radial expansion does not necessarily lead to higher permeability.
Permeability seems unaffected by the degree of creep. Axial deformation during creep is a function of the applied axial stress, a value that was chosen subjectively for each core at each creep phase (Table 4). For example, in the KE44 test, the axial stress was 30% higher than in KE45 at the same temperature, and the core deformed three times more axially and six times more radially than KE45 during the first creep phase. Yet, permeability in both of these tests is almost unchanged throughout the entire creep time (Figure 2d, red lines). The same trend is also observed in creep phases 2 and 3, where permeability does not register any significant changes. Permeability in the two low-temperature tests (KE4 and KE20), however, seems to decrease slightly during the first creep (Figure 2d, blue lines), but a direct comparison between the two tests is difficult because of the permeability fluctuations in KE4, which had not yet fractured properly at this point. During the second and third creep, on the other hand, permeability only decreased somewhat in KE4 following the delayed, clear shear fracture, while KE20 permeability stayed constant, similar to the high-temperature tests (Figure 2e,f).
3.2. CE-Na 2 SO 4 Fluid Injection. The results from the CE-Na 2 SO 4 tests (Figure 3) show a similar permeability pattern to the CE-NaCl series. Permeability declined during the axial loading and increased with fracture formation in the first cycle (Figure 3a). Unlike in the CE-NaCl tests, there is no clear correlation between the maximum axial stress and the test temperature. During the first loading cycle, the maximum axial stress was similar in three of the four tests, between 10 and 12 MPa, while KE22 withstood a maximum axial stress of 14 MPa. However, the high-temperature tests registered the highest permeability increase when the shear fracture occurred, approximately double the intact-core permeability. At lower temperature, permeability increased by 50−80%. When comparing the first loading of the duplicate tests at each temperature, the magnitude of the permeability rise correlates with the axial and radial strain: the higher the strain, the higher the permeability rise (Table 5).
Figure 1, left: Typical example of the axial stress−strain relationship (red curve) and permeability evolution (green curve) during the first deviatoric loading above yield; data from the KE20 experiment (CE-NaCl series, 50°C). Permeability generally decreases within the yield curve; the shear fracture event causes a concomitant drop in axial stress and an increase in permeability.
The mechanical strength decreased in all tests from the first cycle to the second and third cycles, regardless of test temperature (Figure 3b,c). Permeability increased in all experiments after the second loading cycle, this time by a factor of 2−4 more in the low-temperature tests than in the high-temperature tests. This also corresponds to higher axial strain (0.62−0.98%) and radial strain (−0.2% on average) at low temperature compared to the high-temperature tests, which deformed on average by 0.25% axially and below −0.1% radially at the same stage (Table 5).
The stress conditions in all three creep phases were similar (Table 6): the cores sustained approximately 7 MPa constant axial stress and registered minor radial strain in all tests (between −0.02 and −0.06%). The axial strain varied somewhat with temperature in the first creep phase, being approximately double at low temperature compared to 130°C, but this had no clear effect on permeability.
The permeability generally declined after the first creep, mostly during the first creep day, after which it stabilized until the end of the creep phase (Figure 3d). Creep phases 2 and 3 (Figure 3e,f) induced minimal changes in permeability so that most of the permeability gain during the first loading cycle was retained throughout the test.
3.3. Synthetic Seawater Injection. The SSW-injected cores show similar permeability behavior to the CE-NaCl and CE-Na 2 SO 4 series. As in the CE-Na 2 SO 4 series, temperature did not play a distinguishing role in the maximum axial stress during the first loading cycle. Three of the tests fractured after a peak of approximately 12 MPa (Figure 4a), although performed at different temperatures. Core KE73 (130°C, red diamonds) fractured after a maximum axial stress of 14 MPa. Although permeability generally increased after the first loading in all tests, there is no clear pattern in the magnitude of the rise (Table 7).
The strain−permeability correlation is seen here as well, where tests that showed higher permeability increase also experienced higher radial strain (Table 7). Core KE51 stands out with the highest permeability rise after the first loading (126%).
SSW flooding weakened the chalk cores similarly at both temperatures, showing an approximately 3−5 MPa decline in maximum axial stress between the first loading and the subsequent loading phases. As in the CE-NaCl and CE-Na 2 SO 4 series, permeability did not change significantly during deviatoric loadings 2 and 3.
Figure 2. Permeability response to (a−c) axial stress and (d−f) creep in CE-NaCl experiments at 50°C (blue lines) and 130°C (red lines); the highest permeability increase is associated with core fracturing (a, and for KE4, b), after which no notable permeability change was observed. Creep under deviatoric conditions had a minimal effect on permeability evolution. The test temperature affected the mechanical strength during loading but did not play a decisive role in permeability during creep.
Creep permeability trends were nearly parallel in all SSW experiments (Figure 4d−f), generally decreasing. End permeability was lowest in core KE51 (130°C; Figure 4f, red bullets), which, over the course of the test, lost more than half of the permeability gain after fracturing in loading 1. Core KE47 (50°C) deformed at a high rate in all three creep cycles (Table 8), and the test was ultimately interrupted as the core collapsed. Yet, despite the accelerating creep conditions, the permeability decline rate is comparable to that of the other tests in this series.
DISCUSSION
The main driver of permeability change is shear fracturing.
Occurring mainly in the first deviatoric loading, the fracture serves as a "highway" for brine flow, inducing a simultaneous permeability rise. The fracture dip angle is high and close to parallel to fluid flow due to the low confining pressure. The magnitude of the permeability increase at this stage generally determines the cores' end permeability. Figure 5 graphically displays the cumulative permeability evolution in all experiments, taking the unloading sequence into account as well. The first data point is the beginning of the first deviatoric cycle, and each line segment represents the permeability behavior during consecutive stress states. At both temperatures, the end permeability of the CE-Na 2 SO 4 -flooded cores is the highest. Megawati et al. 35 suggest that sodium sulfate brine injection at 130°C does not cause new mineral precipitation such as anhydrite, but sulfate adsorption on chalk grains will change the chalk's surface charge and cause a disjoining force close to granular contacts. The disjoining force following sulfate adsorption may in fact explain why all CE-Na 2 SO 4 -flooded cores registered the highest permeability increase (Figure 5, blue lines).
On the other hand, SSW-flooded cores (yellow lines) gained the least permeability at 130°C, approximately half of the gain of the CE-Na 2 SO 4 series at the same temperature. That is most likely because SSW poses a different and more complex thermochemical scenario. According to the previously mentioned studies, 23,30 SSW injection at 130°C leads to both sulfate adsorption and mineralogical changes. Ca 2+ substitution with Mg 2+ and anhydrite (CaSO 4 ) precipitation contribute further to chalk weakening and alter fracture permeability. In particular, anhydrite precipitation is a common permeability inhibitor, and Figure 5 shows that the end permeability of both SSW-injected cores at 130°C (yellow lines) lies below that of the other two series (CE-Na 2 SO 4 , blue lines, and CE-NaCl, green lines).
Another notable aspect from Figure 5 is that the degree of permeability rise related to fracturing (load 1) correlates with test temperature: overall, permeability increased by 70% on average at the beginning of the high-temperature tests and by only 40% on average at low temperature. There was also higher variation in permeability rise among the high-temperature tests than among the low-temperature tests, where four out of six tests had an almost identical permeability increase. Additionally, end permeability at 130°C increased, on average, by a factor of 1.7, while at 50°C, the average factor was 1.3. Both stress and thermochemical test conditions impact the cores' mechanical strength to different degrees, and the strength generally decreases throughout the test. Repetitive deviatoric loadings following the fracture formation do not seem to affect permeability as much as the shear fracturing event. This is in agreement with other studies 13,14 that, although performed under different thermochemical conditions than the present study, report that a second deviatoric stress cycle has a smaller effect on permeability evolution than the first cycle. The cycles naturally weaken the cores, with each loading−unloading adding more fatigue and deformation to the rock and altering its elastic properties. 11 The effect of temperature and injecting brine on the cores' mechanical strength is shown in Table 9.
The CE-Na 2 SO 4 -injected cores at 50°C registered the least mechanical strength decline (2−3 MPa) compared to the SSW (5 MPa) and CE-NaCl (4 MPa) series. At 130°C, however, the CE-NaCl-injected cores retain most of their initial strength, while CE-Na 2 SO 4 and, particularly, SSW series become weaker. This observation is in agreement with the previous studies 23, 30 on SSW influence on the mechanical stability of chalk.
Additionally, higher temperature intensifies the rock interactions with the injection brine. 7,19,23,24,30,35 Creep conditions did not seem to override the permeability gain during fracturing. This is in agreement with other studies 3,6 that observed a low permeability decline under deviatoric conditions. Table 10 summarizes the key changes recorded during each experiment. The net axial and radial strains are calculated as the percentage axial compaction and radial expansion, respectively, at the end of the test relative to the initial length and diameter given in Table 2.
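For concreteness, these net strains reduce to a simple percentage change relative to the initial dimensions. The sketch below follows the sign convention of the text (positive axial strain for compaction, negative radial strain for expansion); the dimensions in the example calls are illustrative only, not values from Table 2.

```python
# Net strain as a percent change relative to the initial dimension; positive axial
# values denote compaction and negative radial values denote expansion, following
# the sign convention assumed from the text. Dimensions below are illustrative only.
def net_axial_strain_pct(initial_length: float, final_length: float) -> float:
    return (initial_length - final_length) / initial_length * 100.0

def net_radial_strain_pct(initial_diameter: float, final_diameter: float) -> float:
    return (initial_diameter - final_diameter) / initial_diameter * 100.0

print(net_axial_strain_pct(70.0, 68.25))    # ~2.5 % axial compaction
print(net_radial_strain_pct(38.1, 38.25))   # ~-0.4 % (radial expansion)
```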
In the CE-NaCl and CE-Na 2 SO 4 test series, the low-temperature tests suffered more axial compaction than the high-temperature tests. This correlates well with the end permeability: permeability increased only slightly in CE-NaCl-flooded cores at 50°C (0 and 21%) but more clearly at higher temperature (66%). A similar pattern appears in the CE-Na 2 SO 4 tests. Low-temperature cores experienced more compaction (2.5 and 2.8%) and less permeability increase (41 and 73%) than the cores flooded with CE-Na 2 SO 4 at 130°C (1.6% axial strain and a permeability increase of 95 and 122%). This indicates that, when flooding equilibrium brines that do not alter chalk mineralogy, brine temperature can play a significant role in mechanical compaction (strain), a key permeability-controlling factor.
This correlation is not clear in the SSW series. Experiment KE47 failed before all stress sequences were complete, so its high compaction (4.3%) and high end permeability (+80%) are not relevant to this argument. At 130°C, KE51 deforms more axially (2.9%) than the second test at the same temperature (KE73, 1.3%), and its end permeability is consequently lower, following the higher-compaction/lower-permeability pattern of the previous two series.
There is little variation in net radial strain among the tests (between −0.2 and −0.8%, excluding KE47), indicating that flooding brine and temperature do not decisively affect fracture aperture, and any radial strain is rather stress-induced. Ultimately, there is no immediate correlation between the net radial strain and end permeability.
CONCLUSIONS
This study focuses on permeability evolution in fractured chalk cores exposed to cyclic deviatoric stress and thermochemical influence relevant to reservoir conditions. The test setup included three series, each with a specific injecting brine: two calcite-equilibrated brines (CE-NaCl and CE-Na 2 SO 4 ) and one nonequilibrium brine (SSW). Each test series was performed at 50 and 130°C. Such flooding experiments highlight the interplay between these parameters. The results showed that stress-induced fracturing is the main driver of permeability change, as permeability changed decisively only in response to fracturing of the core. The deviatoric stress state together with the low confining pressure induced shear fracturing at a steep angle (over 70°), close to the flooding direction. Subsequent deviatoric loadings had little effect on permeability in all tests, regardless of injecting brine and test temperature. During creep, permeability generally declined slightly or remained unchanged. Our results indicate that, once chalk has fractured, the effective permeability is insensitive to compaction cycles and reactive flow, at both high and low temperatures. The CE-Na 2 SO 4 test series stands out with the highest final permeability at both temperatures. This is likely a result of sulfate adsorption on the chalk grain surface creating enough disjoining force at granular contacts to preserve permeability. Additionally, calcite equilibration of the brine prevented calcium displacement and, consequently, anhydrite precipitation.
While flooding CE-Na 2 SO 4 through fractured chalk seems to sustain the permeability gain related to shear fracturing, flooding a reactive brine such as SSW at high temperature has a competing effect on permeability evolution. The SSW test series registered the greatest permeability loss at 130°C, most likely due to chemical alteration, possibly precipitation of anhydrite.
However, all cores had a positive net permeability change. Despite fracturing and exposure to different stress states, temperatures, and brine conditions, core permeability at the end of the test seems to remain within the same order of magnitude as the original value, ranging between the initial value and double the initial value. This indicates strong insensitivity to changes in reservoir conditions.
The results are repeatable, confirming permeability behavior for this experiment setup. Future studies should investigate the effects of longer-term stress cycles or several stress cycles at actual reservoir stress conditions on permeability evolution in fractured chalk cores. Additional analyses such as scanning electron microscopy can then verify the chemical alteration caused by reactive fluid flow in chalk, as suggested by Minde et al., 38 and determine mineral alteration along the fracture or in the matrix. Although challenging to obtain, similar flooding experiments on actual reservoir chalk from the North Sea would provide important data for validating outcrop chalk test results and refining permeability models for the North Sea reservoir chalk. Especially, the importance of the adsorption mechanism to maintain fracture permeability in the field should be investigated further. | 2019-05-17T14:19:51.284Z | 2019-04-08T00:00:00.000 | {
"year": 2019,
"sha1": "f7274b64dbc52b627b379f40a04159f7046bd880",
"oa_license": "CCBY",
"oa_url": "https://uis.brage.unit.no/uis-xmlui/bitstream/11250/2753787/1/acsomega.9b04470.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "e3bea590061743547f997c1042b5cec6948ab486",
"s2fieldsofstudy": [
"Geology",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Geology"
]
} |
214774912 | pes2o/s2orc | v3-fos-license | A Survey on Conversational Recommender Systems
Recommender systems are software applications that help users to find items of interest in situations of information overload. Current research often assumes a one-shot interaction paradigm, where the users' preferences are estimated based on past observed behavior and where the presentation of a ranked list of suggestions is the main, one-directional form of user interaction. Conversational recommender systems (CRS) take a different approach and support a richer set of interactions. These interactions can, for example, help to improve the preference elicitation process or allow the user to ask questions about the recommendations and to give feedback. The interest in CRS has significantly increased in the past few years. This development is mainly due to the significant progress in the area of natural language processing, the emergence of new voice-controlled home assistants, and the increased use of chatbot technology. With this paper, we provide a detailed survey of existing approaches to conversational recommendation. We categorize these approaches in various dimensions, e.g., in terms of the supported user intents or the knowledge they use in the background. Moreover, we discuss technological approaches, review how CRS are evaluated, and finally identify a number of gaps that deserve more research in the future.
INTRODUCTION
Recommender systems are among the most visible success stories of AI in practice. Typically, the main task of such systems is to point users to potential items of interest, e.g., in the context of an e-commerce site. Thereby, they not only help users in situations of information overload [126], but they also can significantly contribute to the business success of the service providers [57].
In many of these practical applications, recommending is a one-shot interaction process. Typically, the underlying system monitors the behavior of its users over time and then presents a tailored set of recommendations in pre-defined navigational situations, e.g., when a user logs in to the service. Although such an approach is common and useful in various domains, it can have a number of potential limitations. There are, for example, a number of application scenarios, where the user preferences cannot be reliably estimated from their past interactions. This is often the case with high-involvement products (e.g., when recommending a smartphone), where we even might have no past observations at all. Furthermore, what to include in the set of recommendations can be highly context-dependent, and it might be difficult to automatically determine the user's current situation or needs. Finally, another assumption often is that users already know their preferences when they arrive at the site. This might, however, not necessarily be true. Users might also construct their preferences only during the decision process [152], when they become aware of the space of the options. In some cases, they might also learn about the domain and the available options only during the interaction with the recommender [154].
The promise of Conversational Recommender Systems (CRS) is that they can help to address many of these challenges. The general idea of such systems, broadly speaking, is that they support a task-oriented, multi-turn dialogue with their users. During such a dialogue, the system can elicit the detailed and current preferences of the user, provide explanations for the item suggestions, or process feedback by users on the made suggestions. Given the significant potential of such systems, research on CRS already has some tradition. Already in the late 1970s, Rich [127] envisioned a computerized librarian that makes reading suggestions to users by interactively asking them questions, in natural language, about their personality and preferences. Besides interfaces based on natural language processing (NLP), a variety of form-based user interfaces 1 were proposed over the years. One of the earlier interaction approaches in CRS based on such interfaces is called critiquing, which was proposed as a means for query reformulation in the database field already in 1982 [144]. In critiquing approaches, users are presented with a recommendation soon in the dialogue and can then apply pre-defined critiques on the recommendations, e.g., ("less $$") [15,49].
Form-based approaches can generally be attractive as the actions available to the users are pre-defined and non-ambiguous. However, such dialogues may also appear non-natural, and users might feel constrained in the ways they can express their preferences. NLP-based approaches, on the other hand, for a long time suffered from existing limitations, e.g., in the context of processing voice commands. In recent years, however, major advances were made in language technology. As a result, we are nowadays used to issuing voice commands to our smartphones and digital home assistants, and these devices have reached an impressive level of recognition accuracy. In parallel to these developments in the area of voice assistants, we have observed a fast uptake of chatbot technology in recent years. Chatbots, both rather simple and more sophisticated ones, are usually able to process natural language and are nowadays widely used in various application domains, e.g., to deal with customer service requests.
These technological advances led to an increased interest in CRS during the last years. In contrast to many earlier approaches, we however observe that today's technical proposals are more often based on machine learning technology instead of following pre-defined dialogue paths. However, often there still remains a gap between the capabilities of today's voice assistants and chatbots compared to what is desirable to support truly conversational recommendation scenarios [117], in particular when the system is voice-controlled [161,165].
In this paper, we review the literature on CRS in terms of common building blocks of a typical conceptual architecture of CRS. Specifically, after providing a definition and a conceptual architecture of a CRS in Section 2, we discuss (i) interaction modalities of CRS (Section 3), (ii) the knowledge and data they are based upon (Section 4), and (iii) the computational tasks that have to be accomplished in a typical CRS (Section 5). Afterwards, we discuss evaluation approaches for CRS (Section 6) and finally give an outlook on future directions.
DEFINITIONS AND RESEARCH METHODOLOGY
In this section we discuss relevant preliminaries to our work. First, we provide a general characterization and conceptual model of CRS. Second, we discuss our research methodology.
Characterization of Conversational Recommender Systems
There is no widely-established definition in the literature of what represents a CRS. In this work, we use the following definition.
Definition 2.1 (Conversational Recommender System-CRS).
A CRS is a software system that supports its users in achieving recommendation-related goals through a multi-turn dialogue.
One fundamental characteristic of CRS is their task-orientation, i.e., they support recommendation specific tasks and goals. The main task of the system is to provide recommendations to the users, with the goal to support their users' decision-making process or to help them find relevant information. Additional tasks of CRS include the acquisition of user preferences or the provision of explanations. This specific task orientation distinguishes CRS from other dialogue-based systems, such as the early ELIZA system [158] or similar chat robot systems [151].
The other main feature of a CRS according to our definition is that there is a multi-turn conversational interaction. This stands in contrast to systems that merely support question answering (Q&A tools). Providing one-shot Q&A-style recommendations is a common feature of personal digital assistants like Apple's Siri and similar products. While these systems already today can reliably respond to recommendation requests, e.g., for a restaurant, they often face difficulties maintaining a multi-turn conversation. A CRS therefore explicitly or implicitly implements some form of dialogue state management to keep track of the conversation history and the current state.
Note that our definition does not make any assumptions regarding the modality of the inputs and the outputs. CRS can be voice controlled, accept typed text, or obtain their inputs via form fields, buttons, or even gestures. Likewise, the output is not constrained and can be voice, speech, text, or multimedia content. No assumptions are also made regarding who drives the dialogue.
Generally, conversational recommendation shares a number of similarities with conversational search [115]. In terms of the underlying tasks, search and recommendation have in common that one main task is to rank the objects according to their assumed relevance, either for a given query (search) or the preferences of the user (recommendation). Furthermore, in terms of the conversational part, both types of systems have to interpret user utterances and disambiguate user intents in case natural language interactions are supported. In conversational search systems, however, the assumption often is that the interaction is based on "written or spoken form" [115], whereas in our definition of CRS various types of input modalities are possible. Overall, the boundary between (personalized) conversational search and recommendation systems often seems blurry, see [86,139,172], in particular as often similar technological approaches are applied. In this survey, we limit ourselves to works that explicitly mention recommendation as one of their target problems.
Conceptual Architecture of a CRS
A variety of technical approaches for building CRS were proposed in the last two decades. The specifics of the technical architecture of such solutions depend on the system's functionality, i.e., whether or not voice input is supported. Still, a number of typical interoperating conceptual components of such architectures can be identified, as shown in Figure 1.
Computational Elements. One central part of such an architecture usually is a Dialogue Management System (also called "state tracker" or similarly in some systems). This component drives the process flow. It receives the processed inputs, e.g., the recognized intents, entities and preferences, and correspondingly updates the dialogue state and user model. After that, using a recommendation and reasoning engine and background knowledge, it determines the next action and returns appropriate content like a recommendation list, an explanation, or a question to the output generation component.
The User Modeling System can be a component of its own, in particular when there are long-term user preferences to be considered, but it need not be: in some cases, the current preference profile is implicitly part of the dialogue system. The Recommendation and Reasoning Engine is responsible for retrieving a set of recommendations, given the current dialogue state and preference model. This component might also implement other complex reasoning functionality, e.g., to generate explanations or to compute a query relaxation (see later). Besides these central components, typical CRS architectures comprise modules for input and output processing. These can, for example, include speech-to-text conversion and speech generation. On the input side, in particular in the case of natural language input, additional tasks are usually supported, including intent detection and named entity recognition [66,99], for identifying the users' intentions and entities (e.g., attributes of an item) in their utterances.
Fig. 1. Typical architecture of a conversational recommender system (see also [142]).
Knowledge Elements. Various types of knowledge are used in CRS. The Item Database is something that is present in almost all solutions, representing the set of recommendable items, sometimes including details about their attributes. In addition to that, different types of Domain and Background Knowledge are often leveraged by CRS. Many approaches explicitly encode dialogue knowledge in different ways, e.g., in the form of pre-defined dialogue states, supported user intents, and the possible transitions between the states. This knowledge can be general or specific to a particular domain. The knowledge can furthermore either be encoded by the system designers or automatically learned from other sources or previous interactions. A typical example for learning approaches are those that use machine learning to build statistical models from corpora of recorded dialogues. Generally, domain and background knowledge can be used by all computational elements. Input processing may need information about entities to be recognized or knowledge about the predefined intents. The user modeling component may be built on estimated interest weights regarding certain item features, and the reasoning engine may use explicit inference knowledge to derive the set of suitable recommendations.
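To make the interplay of these components more concrete, the sketch below wires up minimal stand-ins for the input processing, user modeling, recommendation, and dialogue management components. Every class, method, and intent name is an illustrative assumption made for this sketch, not an implementation taken from any of the surveyed systems.

```python
# Minimal, illustrative wiring of the conceptual components in Figure 1.
class NLU:
    def parse(self, utterance: str) -> dict:
        # A real system would run intent detection and entity recognition here.
        if "recommend" in utterance.lower() or "suggest" in utterance.lower():
            return {"intent": "ask_recommendation", "entities": {}}
        return {"intent": "provide_preferences", "entities": {"genre": "comedy"}}

class UserModel:
    def __init__(self):
        self.preferences: dict = {}
    def update(self, entities: dict) -> None:
        self.preferences.update(entities)

class RecommendationEngine:
    def __init__(self, item_db: list):
        self.item_db = item_db
    def recommend(self, prefs: dict, k: int = 3) -> list:
        # Toy content filter: keep items matching all stated attribute preferences.
        hits = [i for i in self.item_db if all(i.get(a) == v for a, v in prefs.items())]
        return hits[:k]

class DialogueManager:
    def __init__(self, nlu: NLU, user_model: UserModel, engine: RecommendationEngine):
        self.nlu, self.user_model, self.engine = nlu, user_model, engine
    def step(self, utterance: str) -> str:
        frame = self.nlu.parse(utterance)
        if frame["intent"] == "provide_preferences":
            self.user_model.update(frame["entities"])
            return "Noted. Anything else, or shall I recommend something?"
        items = self.engine.recommend(self.user_model.preferences)
        return "You might like: " + ", ".join(i["title"] for i in items)

dm = DialogueManager(NLU(), UserModel(),
                     RecommendationEngine([{"title": "Movie A", "genre": "comedy"},
                                           {"title": "Movie B", "genre": "drama"}]))
print(dm.step("I am in the mood for a comedy"))
print(dm.step("Can you recommend something?"))
```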
Research Method: Identifying Relevant Works
We followed a semi-systematic approach to identify relevant papers. We first queried several digital libraries 2 using pre-defined search strings such as "conversational recommender system", "interactive recommendation", "advisory system", or "chatbot recommender". The returned papers were then manually checked for their relevance based on titles and abstracts. Papers considered relevant were read in detail and, if considered to be in the scope of the paper, used as a starting point for a snowballing procedure. Overall, the paper selection process surfaced 121 papers on CRS that we considered in this work. 3 Looking at the type of these papers, the majority of the works described technical proposals for one of the computational components of a CRS architecture. A smaller set of papers described demo systems. Another smaller set were analytical ones which, for example, reviewed certain general characteristics of CRS.
Generally, we only included papers that are compliant with our definition of a CRS given above. We therefore did not include papers that discussed one-shot or multi-step question-answering systems [133,166], even when the question or task was about a recommendation. We also did not consider general dialogue systems like chatbot systems, which are not task-oriented, or systems that only support a query-response interaction process like a search engine without further dialogue steps, e.g., [31]. Furthermore, we did not include dialogue-based systems, which were task-oriented, but not on a recommendation task, e.g., the end-to-end learning approaches presented in [159] and [76], which focus on restaurant search and movie-ticket booking. Furthermore, we excluded a few works like [50] or [174], which use the term "interactive recommendation", which however rather refers to a system that addresses observed user interest changes over time, but is not designed to support a dialogue with the user. Other works like [138] or [174] mainly focus on finding good strategies for acquiring an initial set of ratings for cold-start users. While these works can be seen as supporting an interactive process, there is only one type of interaction, which is furthermore mostly limited to a profile-building phase. Finally, there are a number of works where users of a recommendation systems are provided with mechanisms to fine-tune their recommendations, which is sometimes referred to as "user control" [61]. Such works, e.g., [163], in principle support user actions that can be found in some CRS, for example to give feedback on a recommendation. The interaction style of such approaches is however not a dialogue with the system.
INTERACTION MODALITIES OF CRS
The recent interest in CRS is spurred both by developments in NLP and technological advances such as broadband mobile internet access and new devices like smartphones and home assistants. Our review of the literature however shows that the interaction between users and a CRS is neither limited to natural language input and output nor to specific devices.
Input and Output Modalities
The majority of the surveyed papers explicitly or implicitly support two main forms of inputs and outputs, either as the only modality or combined in a hybrid approach:
• Based on forms and structured layouts, as in a traditional web-based (desktop) application.
• Based on natural language, either in written or spoken form.
Approaches that are exclusively based on forms (including buttons, radio-buttons etc.) and structured text for the output are common for almost all (except [47]) critiquing-based approaches, e.g., [7,39,52,89,123], as well as for web-based interactive advisory solutions as presented, e.g., in [41,55,60]. In such applications, users typically follow pre-defined dialogue paths and interact with the application by filling forms or choosing from pre-defined options. The final output typically is a structured visual representation in the form of a list of options.
On the other hand, approaches that are entirely based on natural language interactions include task-oriented dialogue systems like the early proposal from [127], the explanation-aware conversational system proposed in [109], as well as more recent (deep) learning-based approaches, e.g., [46,48,77]. Spoken-text-only approaches are often implemented on smart speakers like Amazon Alexa or Google Home, e.g., [4,36]. Compared to form-based approaches, these solutions usually offer more flexibility in the dialogue and sometimes support chit-chat and mixed-initiative dialogues. Major challenges can, however, lie in the understanding of the users' utterances and the identification of their intents. But also the presentation of the recommendations can be difficult, in particular when more than one option should be provided at once.
Hybrid approaches that combine natural language with other modalities are, therefore, not uncommon. For example, systems that support written natural language dialogues often rely on list-based or other visual approaches to present their results [73,172]. The work presented in [167], on the other hand, supports a hybrid visual/natural language interaction mechanism, where recommendations are displayed visually, and users can provide feedback to certain features in a critiquing-like form in natural language. Yet other systems support voice input, but present the recommendations in textual form [47,142], because it can be difficult to present more than one recommendation at a time through spoken language without overwhelming the users. Chatbot applications, finally, often combine natural language input and output with structured form elements (e.g., buttons) and a visually-structured representation of the recommendations [53,62,100,114].
Besides written or spoken language and fill-out forms, a few other alternative and application-specific modalities for inputs and outputs can be found. The dialogue system presented in [150], for example, supports multiple types of inputs, including visual inputs on a geographic map, pen gestures like zooming, or handwritten input. The work proposed in [18] furthermore tries to process non-verbal input, like body postures, gestures, facial expressions, as well as speech prosody to estimate the user's emotions and attitudes in order to acquire implicit feedback and preferences.
In terms of the outputs, several approaches use interactive geographic maps, often as part of a multi-modal output strategy [5,39,73,150]. The applicability of map-based approaches is limited to certain application domains, e.g., travel and tourism, but can help to overcome various challenges regarding the user experience with conversational systems [125]. The use of embodied conversational agents (ECAs) [19] as an additional output mechanism is also not uncommon in the literature [41,52] because of the assumed general persuasive potential of human-like avatars [2,38]. Various factors can impact the effectiveness of such ECAs. In [43], for example, the authors analyze the effects of non-verbal behavior (e.g., the facial expressions) on the effectiveness of an ECA in the context of a dialogue-based recommender system. Research on the specific effects of using different variants of an ECA in the context of recommender systems is, however, generally rare.
Finally, a few works exist where users interact with a recommendation system within a virtual, three-dimensional space. In [33,34], the authors describe a virtual shopping environment where users interact with a critiquing-based recommender and can, in addition, collaborate with other users. Supporting group decisions is also the goal of the work presented in [1]. In this work, however no 3D visualization is supported, and the focus of the work is mostly to enable the conversation between a group of users supported by a recommender system. Figure 2 provides an overview of common input and output modalities found in the literature.
Application Environment
Stand-alone and Embedded Applications. CRS can both be stand-alone applications or part of a larger software solution. In the first case, recommendation is the central functionality of the system. Examples for such applications include the mobile tourist guides proposed in [7,60,86], the interactive e-commerce advisory systems discussed in [41,58], or the early FindMe browsing and shopping systems [14,15]. In the second case, that of an embedded application, the CRS does not (entirely) stand on its own. Often, the CRS is implemented in the form of chatbot that is embedded within e-commerce solutions [32,164] or other types of web-portals [21]. In some cases, the CRS is also part of a multi-modal 2D or 3D user experience, like in [33] and [43]. A special case in this context is the use of a CRS on voice-based home assistants (smart speakers) [4,36]. In such settings, providing recommendations is only one of many functionalities the device is capable of. Users might therefore not actually perceive the system as primarily being a recommender.
Supported Devices. An orthogonal aspect regarding the application environment of a CRS is that of the supported devices. This is particularly important, because the specific capabilities and features of the target device can have a significant impact on the design choices when building a CRS. The mentioned smart speaker applications, for example, are specifically designed for hardware devices that often only support voice-based interactions. This can lead to specific challenges, e.g., when it comes to determining the user's intent or when a larger set of alternatives should be presented to the users. The interaction with chatbot applications, on the other hand, is typically not tied to specific hardware devices. Commonly, they are either designed as web applications or as smartphone and tablet applications. However, the choice of the used communication modality can still depend on the device characteristics. Typing on small smartphone screens may be tedious and the limited screen space in general requires the development of tailored user interfaces.
The applicability of CRS is not limited to the mentioned devices. Alternative approaches were, for example, investigated in [18,37]. Here, the idea is that the CRS is implemented as an application on an interactive wall that could be installed in a real store. A camera is furthermore used to monitor and interpret the user's non-verbal communication actions, in particular facial expressions and gestures. An alternative on-site environment was envisioned in [170]. Here, the ultimate goal is to build a CRS running on a service robot, in this case one that is able to elicit a customer's food preferences in a restaurant. Yet another application scenario, that of future in-car recommender systems, is sketched in [83]. Given the specific situation in a driving scenario, the use of speech technology often is advisable [22], which almost naturally leads to conversational recommendation approaches, e.g., for driving-related aspects like navigation or entertainment [8,9].
Interaction Initiative
A central design question for most conversational systems is who takes the initiative in the dialogue. Traditionally, we can differentiate between (i) system-driven, (ii) user-driven, and (iii) mixed-initiative systems. When considering CRS primarily as dialogue systems, such a classification can in principle be applied as well, but the categorization is not always entirely clear.
Critiquing-based systems are often considered to be mainly system-driven, and sometimes mixed-initiative, e.g., in [148]. In such applications, the users are typically first asked about their preferences, e.g., using a form, and then an initial recommendation is presented. Users can then use a set of pre-defined or dynamically determined critiques to further refine their preferences. While the users in such applications have some choices regarding the dialogue flow, e.g., they can decide to accept a recommendation or further apply critiques, these choices are typically very limited and the available critiques are determined by the system. Another class of mostly system-driven applications are the form-based interactive advisory systems discussed in [41]. Here, the system guides the user through a personalized preference elicitation dialogue until enough is known about the user. Only after the initial recommendations are displayed can the user influence the dialogue by selecting from pre-defined options like asking for an explanation or by relaxing some constraints.
The other extreme would be a user-driven system, where the system takes no proactive role. The resulting dialogue therefore consists of "user-asks, system-responds" pairs, and it stands to question if we would call such an exchange a conversational recommendation. Such conversation patterns are rather typical for one-shot query-answering, search and recommendation systems that are not in the scope of our survey. As a result, in the papers considered relevant for this study, we did not find any paper that aimed at building an entirely user-driven system in which the system never actively engages in a dialogue, e.g., when it does not ask any questions ever. A special case in that context is the recommender system proposed in [82], which monitors an ongoing group chat and occasionally makes recommendations to the group based on the observed communication.
This observation is not surprising because every CRS is a task-oriented system aiming to achieve goals like obtaining enough reliable information about the user's preferences. As a result, almost all approaches in the literature are mixed-initiative systems, although with different degrees of system guidance. Typical chatbot applications, for example, often guide users through a series of questions with pre-defined answer options (using forms and buttons), and at the same time allow them to type in statements in natural language. In fully NLP-based interfaces, users typically have even more freedom to influence how the dialogue continues. Still, also in these cases, the system typically has some agenda to move the conversation forward.
Technically, even a fully NLP-based dialogue can almost entirely be system-driven and mostly rely on a "system asks, user responds" [172] conversation pattern. Nonetheless, the provision of a natural language user interface might leave the users disappointed when they find out that they can never actively engage in the conversation, e.g., by asking a clarification question or explanation regarding the system's question.
Discussion
A variety of ways exist in which the user's interaction with a CRS can be designed, e.g., in terms of the input and output modalities, the supported devices, or the level of user control. These design choices are, however, rarely discussed in the surveyed papers. One reason is that in many cases the proposed technical approach is mostly independent of the interaction modality, e.g., when the work is on a new strategy to determine the next question to ask the user. In other cases, the modalities are pre-determined by the given research question, e.g., how to build a CRS on a mobile device.
More research therefore seems required to understand how to make good design choices in these respects and what the implications and limitations of each design choice are. Regarding the chosen form of inputs and outputs, it is, for example, not always entirely clear if natural language interaction makes the recommendation more efficient or effective compared to form-based inputs. Pure natural language interfaces in principle provide the opportunity to elicit preferences in a more natural way. However, these interfaces have their limitations as well. The accuracy of the speech recognizer, for example, can have a major impact on the system's usability. In addition, some users might also be better acquainted and feel more comfortable with more traditional interaction mechanisms (forms and buttons). According to the study in [54], a mix of a natural language interface and buttons led to the best user experience. Moreover, in [102], it turned out that in situations of disambiguation, i.e., when a user has to choose among a set of multiple alternatives, mixed-interaction mode (NLP interface with buttons) can make the task easier for users. Overall, while in some cases the choice of the modalities is predetermined through the device, finding an optimal combination of interaction modalities remains challenging, in particular as individual user preferences might play a role here.
More studies are also needed to understand how much flexibility in the dialogue is required by users or how much active guidance by the system is appreciated in a certain application. Furthermore, even though language-based and in particular voice-based conversations have become more popular in recent years, certain limitations remain. It is, for example, not always clear how one would describe a set of recommendations when using voice output. Reading out more than one recommendation seems impractical in most cases and something that we could call "recommendation summarization" might be needed.
Despite these potential current limitations, we expect a number of new opportunities where CRS can be applied in the future. With the ongoing technological developments, more and more devices and machines are equipped with CPUs and are connected to the internet. In-store interactive walls, service robots and in-car recommenders, as discussed above, are examples of visions that are already pursued today. These new applications will, however, also come with their own general challenges (e.g., privacy considerations, aspects of technology acceptance) and application-specific ones (e.g., safety considerations in an in-car setting).
UNDERLYING KNOWLEDGE AND DATA
Depending on the chosen technical approach, CRS have to incorporate various types of knowledge and background data to function. Clearly, like any recommender, there has to be information about the recommendable items. Likewise, the generation of the recommendations is either based on explicit knowledge, for example recommendation rules or constraints, or on machine learning models that are trained on some background data. However, conversational systems usually rely on additional types of knowledge about the user intents that the CRS supports, the possible states in the dialogue, or data such as recorded and transcribed natural language recommendation dialogues that are used to train a machine learning model. In the following sections, we provide an overview on the different types of knowledge and data that were used in the literature to build a CRS.
User Intents
CRS are dialogue systems designed to serve very specific purposes in the context of information filtering and decision making. Therefore, they have to support their users' particular information needs and intents that can occur in such conversations. In many CRS, the set of user intents that the system supports is pre-defined and represents a major part of the manually engineered background knowledge on which the system is built. In particular in NLP-based approaches, detecting the current user's intent and selecting the system's response is one of the main computational tasks of the system, see also Section 5. In this section, we will therefore mainly focus on NLP-based systems.
The set of user intents that the system supports varies across the different CRS that are found in the literature, and the choice of which intents to support ultimately depends on the requirements of the application domain. However, while a subset of the intents that the system supports is sometimes specific to the application as well, there are a number of intents that are common in many CRS. In Table 1, we provide a high-level overview of domain-independent user intents that we have found in our literature review. The order of the intents in Table 1 roughly follows the flow of a typical recommendation dialogue. This overview is also intended to serve as a tool for the designers of CRS to check if there are any gaps in their current system with respect to potential user needs that are not well supported.
Research on what are relevant user intents is generally scarce, and we only found 11 papers that explicitly discussed user intents. Among these 11, only a few, e.g., [16,100,164], considered the majority of the domain-independent intents shown in Table 1. Others like [65,105,142] only discuss certain subsets of them. Yet another set of papers focused on very application-specific intents in the context of group recommendation [1,103].
Table 1. High-level overview of selected domain-independent user intents found in the literature.
Initiate Conversation: Start a dialogue with the system.
Chit-chat: Utterances unrelated to the recommendation goal.
Provide Preferences: Share preferences with the system.
Revise Preferences: Revise previously stated preferences.
Ask for Recommendation: Obtain system suggestions.
Obtain Explanation: Learn more about why something was recommended.
Obtain Details: Ask about more details of a recommended object.
Feedback on Recommendation: Give feedback on the provided recommendation(s).
Restart: Restart the dialogue.
Accept Recommendation: Accept one of the recommendations.
Quit: Terminate the conversation.
Starting, re-starting, and ending the dialogue. In NLP-based CRS, either the system or the user can initiate the dialogue. In a user-driven conversation, the recommendation seeker might, for example, explicitly ask for help [100] or make a recommendation request [162] to start the interaction. One typical difficulty in this context is to recognize such requests when the dialogue starts with chit-chat. Once the recommendation dialogue is under way, it is not uncommon that users want to start over, i.e., begin the session from scratch and "reset their profile" [100]. Previous studies observed such an intent in 5.2% of the dialogues [16], or reported that 36.4% of the users had this intent in a conversation [65]. Finally, at the end of the conversation, the user has either found a recommendation useful and accepts it in some form (e.g., by purchasing or consuming an item) or not. In either case, the CRS has to react to the intent in some form by redirecting the user accordingly, e.g., to the shopping basket, or by saying goodbye.
Chit-chat. Many NLP-based systems support chit-chat in the conversation. In the study in [164], nearly 80% of the recorded user utterances were considered chit-chat. This number indicates that supporting chit-chat conversations can be a valuable means to create an engaging user experience. Furthermore, the study in [164] showed that chit-chat can also help to reduce user dissatisfaction, even though this part of the conversation is irrelevant to achieving the interaction goal.
Preference Elicitation. Understanding the user's preferences is a key task for any CRS. Preference information can be provided by the user in different ways. In an initial phase of the dialogue, the user might specify some of the desired characteristics of the item that she or he is interested in or even provide strict filtering constraints. In [105], this process is termed "give criteria". In later phases, the user might however also want to revise the previously stated preferences. Note that some authors also consider answering (i.e., responding to a system-provided question or proposal for a constraint [25,145]) as a dialogue intent during preference elicitation [156]. Since in NLP-based systems a user may respond in an arbitrary way, it is clearly important for the system to disambiguate an answer by the user from other utterances. Such an "Answer" intent is nonetheless different from the other intents discussed here, as it is a response to the system's initiative of asking.
Also later in the process, preferences can be stated by the user in different ways after an initial recommendation is made by the system. In critiquing-based approaches, the users can, for example, add additional constraints in case the choice set is too large, relax some of the previously stated ones, or state that they already know the item [56,123,142]. Generally, a system might also allow the user to inspect, modify, and delete the current profile (supporting a "show profile" intent) [100]. By analyzing the interaction logs of a prototypical voice-controlled movie recommender, e.g., in [65], the authors found that many users (41.1%) at some stage try to refine their initially stated preferences. In particular in case of unsatisfactory system responses, some users might furthermore also have the intent to "reject" [156] a recommendation or "restate" their preferences. In the study presented in [16], this however happened only in 1.5% of the interactions.
Obtaining Recommendations and Explanations. There are various ways in which users might ask for recommendations and additional information about the items. Asking for recommendations often happens at the very beginning of a dialogue, but this event can also occur after the user has revised the preferences. In case a currently displayed list of options is not satisfactory, users also might ask the system to "show more" options [16] or ask for a similar item for comparison. For each of the items, the user might want to learn more about its details or ask for an explanation, e.g., why it was recommended [100]. Finally, an alternative form of requesting a recommendation is to ask the system about its opinion ("how about") regarding a certain item, see e.g., [164].
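As a simple illustration of how such intents can be recognized, the sketch below maps utterances to a few of the intent labels from Table 1 using hand-written keyword cues. Real NLP-based CRS typically rely on trained classifiers, so both the cue lists and the labels here are assumptions made purely for demonstration.

```python
# Deliberately simple keyword-based intent matcher over a subset of the intents
# from Table 1. The cue lists are illustrative; production systems usually train
# statistical classifiers on labelled utterances instead.
INTENT_CUES = {
    "obtain_explanation": ["why", "explain"],
    "revise_preferences": ["instead", "actually", "change my"],
    "ask_recommendation": ["recommend", "suggest", "what should i"],
    "restart": ["start over", "reset"],
    "quit": ["bye", "quit", "stop"],
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in text for cue in cues):
            return intent
    return "chit_chat"  # fallback when nothing recommendation-related is matched

print(detect_intent("Why did you recommend this one?"))   # -> obtain_explanation
print(detect_intent("Can you suggest a good comedy?"))    # -> ask_recommendation
```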
User Modeling
The interactive elicitation of the user's current preferences or needs and constraints is another central task of CRS. As discussed above, this can be done through different modalities and by supporting various ways for users to express what they want. The acquired preferences are typically recorded in explicit form within a user profile, based on which further inferences regarding the relevance of individual items can be made. There are two main ways of representing the preference information in such explicit models:
• Preference expressions or estimates regarding individual items, e.g., ratings, like and dislike statements, or implicit feedback signals. In [39], for example, users are initially presented with a number of tourism destinations and asked which of them match their preferences.
• Preferences regarding individual item facets. These facets can either relate to item attributes (e.g., the genre of a movie) or to desired functionalities. For the latter class of approaches, the goal of the CRS is sometimes referred to as "slot filling", i.e., the recommender seeks to obtain values for a set of pre-defined facets. Sometimes the preference strength, e.g., must or wish, can also be relevant [123]. While different approaches for mining possible facets from structured and unstructured text documents were proposed in the literature [157], the set of facets is often manually engineered based on domain expertise. Furthermore, in case the facets refer to functional requirements as in [55,160], additional knowledge has to be encoded in the system to match these requirements with the recommendable items. In [55], for example, the user of an interactive camera recommender is asked about the type of photos she or he wants to take (e.g., sports photography), and a constraint-based system is then used to determine cameras that are suited for this purpose.
Finally, a few works exist that do not assume the existence of a set of engineered item features. In [119], for example, an approach is proposed for preference elicitation where users repeatedly specify preferences on items and the system then finds items that are similar in terms of unstructured features like keywords or tags. Similarly, other types of non-engineered features (tags, key phrases, or latent representations) were used in the preference elicitation approaches proposed in [81,84] and [149].
Besides such ephemeral user models that are constructed during the ongoing session, some approaches in the literature also maintain long-term preference profiles [4,123,142,154]. In the critiquing approach in [123], for example, the system tries to derive long-term and supposedly more stable preferences (e.g., for non-smoking rooms in restaurants) from multiple sessions. In the content-based recommendation approach adopted in [142], a probabilistic model is maintained based on past user preferences for items. In general, a key problem when recommending based on two types of models (long-term and short-term) is to determine the relative importance of the individual models. One so far unexplored option could lie in the consideration of contextual factors such as seasonal aspects, the user's location, or the time of the day.
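As a purely illustrative sketch of the weighting problem just mentioned, the snippet below blends a long-term and a short-term (session) relevance estimate with a single mixing parameter. The linear combination and the idea of deriving the parameter from context are assumptions made for illustration, not a method proposed in the cited works.

```python
def blended_score(long_term_score: float, short_term_score: float, alpha: float = 0.7) -> float:
    """Combine long-term and session-based relevance estimates for one item.

    alpha close to 1.0 trusts the current session; close to 0.0 trusts the
    long-term profile. How to set alpha (e.g., from contextual factors such
    as time of day or location) is exactly the open question discussed above.
    """
    return alpha * short_term_score + (1.0 - alpha) * long_term_score

# Example: the session strongly suggests the item, the long-term profile is lukewarm.
print(blended_score(long_term_score=0.3, short_term_score=0.9, alpha=0.7))  # -> 0.72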
Finally, there are also approaches that try to leverage information about the collective preferences of a user community, in particular for cold-start situations [101]. If nothing or little is known yet about the user's preferences, a common strategy is to recommend popular items, where item popularity can be determined based on user ratings, reviews, or past sales numbers as in [47]. The feedback obtained for these popular items can then be used to further refine the user model.
Dialogue States
To be able to support a multi-turn, goal-oriented dialogue with their users, CRS have to implement appropriate means to keep track of the state of the dialogue in order to decide on the next conversational move, i.e., the next action. In many CRS implementations, and in particular in knowledge-based approaches, dialogue management is based on a finite state machine, which not only defines the possible states but also the allowed transitions between the states [85,86,153,155]. In the Advisor Suite framework [55,58], for example, the entire recommendation logic, including the dialogue flow, was modelled with the help of graphical editors. Figure 3 shows a schematic overview of such a dialogue model. It consists of (i) a number of dialogue steps designed to acquire the user's preferences through questions, and (ii) special dialogue states in which the system presents the results, provides explanations, or shows a comparison between different alternatives. The possible transitions are defined at design time, but which path is taken during the dialogue is determined dynamically based on decision rules. Another example of a work based on a pre-defined set of states and possible transitions is the interactive tourism recommender system proposed in [86]. In their case, the transitions at run-time are not determined based on manually engineered decision rules, but are learned from the data using reinforcement learning techniques, where one goal is to minimize the number of required interaction steps.
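The following minimal sketch illustrates the general idea of such a state-machine-based dialogue manager: transitions are fixed at design time, while the path actually taken is chosen at run time by decision rules over the current user profile. The states, rules, and profile keys are invented for illustration and do not reproduce the Advisor Suite models or the learned policies in [86].

```python
# Allowed transitions are fixed at design time (illustrative states only).
TRANSITIONS = {
    "ASK_GENRE":    ["ASK_BUDGET", "SHOW_RESULTS"],
    "ASK_BUDGET":   ["SHOW_RESULTS"],
    "SHOW_RESULTS": ["EXPLAIN", "ASK_GENRE", "END"],
    "EXPLAIN":      ["SHOW_RESULTS", "END"],
}

def next_state(current: str, profile: dict) -> str:
    """Decision rules pick the actual path at run time."""
    if current == "ASK_GENRE":
        # Skip the budget question if the user already gave a price constraint.
        return "SHOW_RESULTS" if "budget" in profile else "ASK_BUDGET"
    if current == "ASK_BUDGET":
        return "SHOW_RESULTS"
    if current == "SHOW_RESULTS":
        if profile.get("wants_explanation"):
            return "EXPLAIN"
        return "END" if profile.get("accepted") else "ASK_GENRE"
    if current == "EXPLAIN":
        return "SHOW_RESULTS"
    return "END"

# Example run: the user states a budget up front and accepts the first recommendation.
state, profile = "ASK_GENRE", {"budget": 300, "accepted": True}
while state != "END":
    target = next_state(state, profile)
    assert target == "END" or target in TRANSITIONS[state], "illegal transition"
    state = target
print("dialogue finished")
```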
Technically, there are different ways of explicitly representing such state machines. Some tools, such as the one mentioned above, use visual representations; others rely on textual and declarative representations like "dialogue grammars" [13] and case-frames [12]. Google's DialogFlow, as an example of a commercial service, uses a visual tool to model linear and non-linear conversation flows, where non-linear means that there are different execution paths, depending on the user's responses or contextual factors. Finally, in some cases, the possible states are simply hard-coded as part of the general program logic of the application.
In some works, and in particular in early critiquing-based ones which are based on forms and buttons [122,123,134], only a few generic dialogue states exist, which means that no complex flow has to be designed. After an initial preference elicitation stage, recommendations are presented, and the system offers a number of critiques that the user can apply until a recommendation is accepted or rejected. Dialogue state management is therefore in some ways relatively light-weight. The main task of the system in terms of dialogue management is to keep track of the user responses and, in case of dynamic critiquing, make inferences regarding the next critiques to offer.
Similarly, in some NLP-based conversational preference elicitation systems such as [29,172], there are mainly two phases: asking questions, in this case in an adaptive way, and presenting a recommendation list. In other NLP-based systems, the possible dialogue states are not modeled explicitly as such, but implicitly result from the implemented intents. For example, whether or not there is a dialogue state "provide explanation" depends on whether a corresponding intent was considered in the design phase of the system.
Finally, in the NLP-based end-to-end learning CRS proposed in [75], the dialogue states are in some ways also modeled implicitly, but in a different way. This system is based on a corpus of recorded human conversations (between crowdworkers) centered around movie recommendations. This corpus is used to train a complex neural model, which is then used to react to utterances by users. Looking at the conversation examples, these conversations, besides some chit-chat, mainly consist of interactions where one communication partner asks the other if she or he likes a certain movie. The sentiment of the answer of the movie seeker is then analyzed to make another recommendation, again mostly in the form of a question. The dialogue model is therefore relatively simple and encoded in the neural model. It seemingly does not support many other types of intents or information requests that do not contain movie names (e.g., "I would like to see a sci-fi movie").
Background Knowledge
Besides the discussed knowledge regarding the set of supported user intents or the possible dialogue states, CRS are based on additional types of knowledge and data. This knowledge for example includes information related to the items (e.g., attributes and ratings), corpora of logged natural language conversations for learning, and additional knowledge sources used for entity recognition.
Item-related Information.
Like any other recommender, a conversational system must have access to a database with information about the recommendable items. Such a database can contain item ratings, meta-data that can be presented to the user (e.g., the genre of a movie or the director), community-provided tags, or extracted keyphrases. These item attributes can furthermore serve as a basis for other computational tasks, e.g., to compute the personalized recommendations, to generate explanations, or to determine which questions can be asked to the user.
In the examined papers, we found that researchers used a number of different databases. Some works are based on typical rating datasets, e.g., from MovieLens or Netflix, whereas other researchers created their own datasets or relied on preexisting datasets from different domains. In Table 2, we provide examples of datasets containing item-related information. It can be observed that it is not uncommon, e.g., in critiquing-based applications, that researchers rely solely on datasets which they created or collected for the purpose of their studies, i.e., there is limited reuse of datasets by other researchers. One main underlying reason is that in most papers we analyzed, researchers did not publicly share their datasets.

Table 2. Examples of datasets containing item-related information.

Movies: Traditional movie rating databases from MovieLens, EachMovie, and Netflix, used for example in [75,174].
Electronics: A product database with more than 600 distinct products collected from various retailers [47]. A smartphone database consisting of 1,721 products with multiple features [34]. An Amazon electronics review dataset containing millions of products, user reviews, and product meta-data [172]. A dataset consisting of 120 personal computers, each with 8 features [134].
Travel: More than 100 sightseeing spots in Japan with 25 different features [53]. A database of restaurants in the San Francisco area covering 1,900 items with multiple features like cuisine, ratings, price, location, or parking [142]. Search logs and reviews of 3,549 users of a restaurant review provider, focusing on locations in Cambridge [29]. A travel destinations dataset, crawled from online platforms, containing 5,723,169 venues in 180 cities around the globe [39]. A restaurants dataset crawled for Dublin city, consisting of 632 restaurants with 28 different features [92].
Food Recipes: A food recipe dataset containing dishes and their ingredients [170].
E-commerce: A product database of 11M products and logged data from the search engine of an e-commerce website; the logged data consists of 3,146,063 unique questions [164].
Music: A music dataset crawled from multiple online sources, containing 2,778 songs with 206k explanatory statements and 22 user tags [173].
Dialogue Corpora Created to Build CRS.
NLP-based dialogue systems are usually based on training data that consist of recorded and often annotated conversations between humans (interaction histories). A number of initiatives were therefore devoted to create such datasets that can be used to build CRS. Other researchers, in contrast, rely on dialogue datasets that were created or collected for other purposes. Generally, these corpora can be obtained with the help of crowdworkers [64,75,139], by annotating interviews [16,18,109], or by logging interactions with a chatbot like in [63]. Table 3 shows examples of such datasets used in recent research.
Note that in some cases when building a CRS, these dialogue corpora are combined with other knowledge bases [75,170]. In [75], for example, both a dialogue corpus and MovieLens data are used for the purposes of sentiment analysis and rating prediction. Such a combination of datasets can be necessary when there is not enough relevant information in the dialogues.
Logged Interaction Histories.
Building an effective CRS requires understanding the conversational needs of the users, e.g., how they prefer to provide their preferences, which intents they might have, and so on. One way to better understand these needs is to log and analyze interactions between users and a prototypical system. These logs then serve as a basis for further research.

Table 3. Examples of dialogue corpora created or used to build CRS.
Movies (ReDial): Crowdworkers from Amazon Mechanical Turk (AMT) were used to collect over 10,000 dialogues centered around the theme of providing movie recommendations [75]. A paired mechanism was used in which one person acts as a recommendation seeker and the other as a recommender.
Movies (CCPE-M): A Wizard-of-Oz (WoZ) approach is taken to elicit movie preferences from crowdworkers within natural conversations. The dataset consists of over 500 dialogues that contain over 10,000 preference statements [116].
Movies (GoRecDial): This dataset consists of 9,125 dialogue interactions and 81,260 conversation turns collected through pairs of human workers; here, too, one plays the role of a movie seeker and the other that of a recommender [64].
Movies (bAbI): In [100], the authors used a general movie dialogue dataset provided by Facebook Research [40] to build a CRS. The dataset contains task-based conversations in a question-answering style and consists of 6,733 and 6,667 dialogue conversations for training and testing, respectively.
Restaurants and Travel (CRM): An initial dataset containing 385 dialogues was collected using a pre-defined dialogue template through AMT [139]. Using this dataset, a larger synthetic dataset of 875,721 simulated dialogues was created.
Restaurants and Travel (ParlAI): A goal-oriented, extended version of the bAbI dataset that was collected using a bot and users. It consists of three datasets (training, development, and testing), each comprising 6,000 dialogues [63].
Restaurants and Travel (MultiWOZ): A large human-human dialogue corpus, which covers 7 domains and consists of 8,438 multi-turn dialogues around the themes of travel and planning recommendation [162].
Fashion (MMD): A dataset consisting of 150,000 conversations between shoppers and a large number of expert sales agents; 9 dialogue states were identified in the resulting dataset [129].
Multi-domain (OpenDialKG): Chat conversations between humans, consisting of 15,000 dialogues and 91,000 conversation turns on movies, books, sports, and music [97].
Differently from the dialogue corpora discussed above, these datasets were often not primarily created to build a CRS, but to better understand the interaction behavior of users. In [154,155], for example, the interactions of the user with a specific NLP-based CRS were analyzed regarding dialogue quality and dialogue strategies. In [16,18], user studies were conducted prior to developing the recommender system to understand and classify possible feedback types by users. In some approaches like [18,114], researchers annotated and labeled such datasets for the purpose of model training and system initialization. However, such logged histories are, except for [114], typically much smaller in size than the dialogue corpora discussed above, mostly because they were collected during studies with a limited number of participants. Examples of datasets obtained by logging system interactions and user studies are shown in Table 4.
Lexicons and World Knowledge.
Researchers often use additional knowledge bases to support the entity recognition process in NLP-based systems. In [73,80], for instance, information was harvested from online sources such as Wikipedia or Wikitravel to develop dictionaries for the purpose of entity-keyword mapping. Similarly, the WordNet corpus was used in [73] to determine the semantic distance between an identified keyword in a conversation and predefined entities. More examples of the use of lexicons and world knowledge are shown in Table 5.

Table 4. Examples of datasets obtained from logged system interactions and user studies.
Movies: A dialogue dataset involving 347 users was collected in [65] during the experimental evaluation of a recommender system. A subset of the ReDial dataset was analyzed and annotated in [16] to classify the user feedback types in 200 dialogues at the utterance level. A dialogue corpus was collected in [154] for the purpose of dialogue quality analysis, consisting of 226 complete dialogue turns with 20 users. A user study was conducted in [155], in which a movie seeker and a human recommender converse with each other; the resulting dialogue corpus consists of 2,684 utterances and 24 complete dialogues.
Travel: A dataset containing preferences for hotel, flight, and car rental searches was collected in [4], involving 200 users of a content-based recommender system that supports multiple tasks (i.e., hotel, car, and flight booking) in the same dialogue.
Fashion: A user study was conducted using a virtual shopping system. A non-verbal feedback (e.g., gestures, facial expressions, voice) dataset involving 345 subjects was collected and then annotated for model training [18].
E-commerce: A dataset containing conversation logs of users with a chatbot of an online customer service center (Alibaba.com) was collected in [114]. It consists of over 91,000 Q&A pairs, which serve as a knowledge base for the information retrieval task.

Table 5. Examples of the use of lexicons and world knowledge.
Wikipedia: A dataset crawled from online sources (Wikipedia and Wikitravel) for the purpose of entity recognition in the travel domain [73].
WordNet: WordNet is used to compute the semantic distance between entities and keywords mentioned in the conversation [73,80].
Wikiquote: A quote dataset crawled from two online sources, wikiquote.com and the Oxford Concise Dictionary of Proverbs [70].
Citysearch: In [80], a dataset of 137,000 user reviews on 24,000 restaurants was harvested from two online sources (citysearch.com and menupages.com) to generate a dictionary of mappings between semantic representations of cuisines and dialogue concepts.
Discussion
Our discussions show that CRS can be knowledge-intensive or data-intensive systems. Differently from the traditional recommendation problem formulation, where the goal is to make relevance predictions for unseen items, CRS often require much more background information than just a user-item rating matrix, in particular in the context of dialogue management.
Pre-defined Knowledge vs. Learning Approaches. In CRS approaches that use forms and buttons as the only interaction mechanism, the interaction flow is typically pre-defined in the form of the possible dialogue states, the set of supported user intents, and the user profile attributes to acquire. NLP-based systems, in contrast, are usually more dynamic in terms of the dialogue flow, and they rely on additional knowledge sources like dialogue corpora and answer templates as well as lexicons and world knowledge bases. Nonetheless, these systems typically require the manual definition of additional background knowledge, e.g., with respect to the supported user intents.
Pure "end-to-end" learning only from recorded dialogues seems challenging. In most existing approaches the set of supported interaction patterns is implicitly or explicitly predefined, e.g., in the form of "user provides preferences, systems recommends". To a certain extent, also the collection of human-to-human dialogues can be designed to support possible system responses like in [75], where the crowdworkers were given specific instructions regarding the expected dialogues. As a result, the range of supported dialogue utterances can be relatively narrow. The system presented in [75], for example, cannot handle a query like "good sci-fi movie please".
Intent Engineering and Dialogue States. In case a richer dialogue and additional functionalities are desirable, the definition of the supported user intents usually is a central and often manual task during CRS development. Compared to general-purpose dialogue systems and home assistants, however, the set of user intents that will be supported is often relatively small. We have identified some common intent categories in Section 4.1. Depending on the domain, also very specific intents can be supported, e.g., asking for a style tip in a fashion recommender system [105]. Furthermore, yet another set of possible user intents has to be supported in CRS that are designed for group decision scenarios. Typical user intents can, for example, relate to the invitation of collaborators [103] or to a request for a group recommendation. Furthermore, there might be user utterances that relate to the resolution of preference conflicts and voting among group members [1,91,103].
Generally, the set of user intents that the system supports determines how rich and varied the resulting conversations can be. Not being able to appropriately react to user utterances can be highly detrimental to the quality perception of the system. For example, being able to explain the recommendations that the system makes is often considered a key feature to make decision-making easier or to increase user trust in a recommender system. A user of an NLP-based system might therefore be easily disappointed by the conversation if the system fails to recognize and respond to a request for an explanation.
A key challenge therefore is to anticipate or learn over time which intents the users might have. Depending on the application and used technology, the design and implementation of an intent database (e.g., using Google's DialogFlow engine) can lead to substantial manual effort and require the involvement of professional writers to achieve a certain naturalness and richness of the conversation. At the same time, the rule-based modeling approach ("if-this-then-that") as implemented by major solution providers can easily lead to large knowledge bases that are difficult to maintain, leading to a need for alternative modeling approaches [140].
COMPUTATIONAL TASKS
Having discussed possible user intents in recommendation dialogues, we will now review common computational tasks and technical approaches for CRS. We distinguish between (i) main tasks, i.e., those related more directly to the recommendation process, e.g., compute recommendations or determine the next question to ask, and (ii) additional, supporting tasks.
Main Tasks
Broadly speaking, CRS carry out four general types of tasks (or: system actions) during conversations [26,101]: Request, Recommend, Explain, and Respond, see Figure 4. However, not every CRS necessarily implements all of them. System-driven CRS (as described in Section 3.3) usually drive the conversation by requesting user preferences on attributes and allowing users to give feedback on recommendations through multiple interaction cycles. User-driven systems, in contrast, can take a more passive role, and mainly respond to conversational acts by the user. In mixed-initiative systems, e.g., those based on natural language interfaces, all types of actions can be found.
Request.
A number of CRS follow a "slot-filling" conversation approach where the system seeks to acquire preference information about a pre-defined set of item attributes or facets. One main computational task in this context is to determine the next question to ask, often with the goal to increase dialogue efficiency, i.e., to minimize the number of required interactions (see also Section 6). Various methods to determine the order of the facets were proposed in the literature [20,132,142,170]. In an early system [142], specific weights were used to rank the item attributes for which the user has not expressed preferences yet. Entropy-based methods also consider the potential effects of each attribute on the remaining item space. They aim to identify the next question (attribute) that helps most to narrow down the candidate (item) space [20,96,104,132,166], sometimes including feature popularity information [96]. Considerations like this are also the foundation of typical dynamic and compound critiquing systems [25,90,111,121,134,148,171].
In compound critiquing systems, in particular, the user is not asked about feedback for one single attribute, but for more than one within one interaction, e.g., "Different Manufacturer, Lower Processor Speed and Cheaper". Finally, in some systems, possible sequences of questions asked to the users are pre-defined in the form of state machines [55,58]. At run-time, the dialogue path is then chosen based on the users' inputs in the ongoing session. Instead of using heuristics for attribute selection and static dialogue state transition rules, a number of more recent systems rely on learning-based approaches, e.g., using reinforcement learning [86,139,146]. In [139], for example, the authors use a deep policy network to decide on the system action. Based on the current dialogue state, as modeled by a belief tracker, the system either makes a request for a pre-defined facet or generates a recommendation to be shown to the user. An alternative learning-based way to determine the question order was proposed in [30]. In their work, the authors design a recommender for YouTube that leverages past watching histories of the user community and a Recurrent Neural Network architecture to rank the questions (topics) that are shown to the user in a conversational step.
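A minimal sketch of the entropy-based idea described above is shown below: for each not-yet-asked attribute, we compute the entropy of its value distribution over the remaining candidate items and ask about the attribute with the highest entropy, since its answer is expected to narrow down the candidate space the most. The attribute names and items are invented, and real systems additionally weigh in aspects such as feature popularity.

```python
import math
from collections import Counter

def attribute_entropy(candidates: list[dict], attribute: str) -> float:
    """Shannon entropy of an attribute's value distribution over the candidate items."""
    counts = Counter(item[attribute] for item in candidates)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def next_question(candidates: list[dict], open_attributes: list[str]) -> str:
    """Ask about the attribute whose answer is expected to split the candidates best."""
    return max(open_attributes, key=lambda a: attribute_entropy(candidates, a))

# Illustrative candidate cameras with hypothetical attributes.
cameras = [
    {"brand": "A", "type": "DSLR",       "price_band": "high"},
    {"brand": "A", "type": "compact",    "price_band": "low"},
    {"brand": "B", "type": "DSLR",       "price_band": "mid"},
    {"brand": "B", "type": "mirrorless", "price_band": "low"},
]
print(next_question(cameras, ["brand", "type", "price_band"]))  # -> 'type' (ties broken by list order)
```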
An alternative to asking users about attribute-based preferences is to ask them to give feedback on selected items. This can be done either by asking them to rate individual items (e.g., by like/dislike statements) or by asking them to express their preference for item pairs or entire sets of items [81]. The computational task in this context is to determine the most informative item(s) to present to the user. Possible strategies include the selection of popular or diverse items in the cold-start phase, items that are different in terms of their past ratings or attributes, or itemsets that represent a balance of popularity and diversity [17,93,101,120]. However, not only item features might be relevant for the selection of the items. In [17], the authors found that a user's willingness to give feedback on an item can depend on additional factors. Specifically, they identified several situations in which the feedback probability may be higher, e.g., when the system's predicted rating deviates from the user's past experience of the item. In more recent works, again learning-based approaches are more common. The authors of [29,174], for example, employed bandit-based approaches to either (i) determine the next item to be shown for eliciting the user's absolute feedback (i.e., like or dislike), or (ii) to select a pair of items for obtaining the user's relative preference regarding these two items.
Recommend.
The recommendation of items is the core task of any CRS. From a technical perspective, we can find collaborative, content-based, knowledge-based, and hybrid approaches in the literature. Differently from non-conversational systems, the majority of the analyzed CRS approaches relies solely on short-term preference information. However, there are also approaches that additionally consider long-term preferences of a user, e.g., to speed up the elicitation process [82,103,125,130,139,142,154].
In the context of critiquing-based and knowledge-based systems, different strategies are applied to filter and rank the items. For the filtering task, often constraint-based techniques [42] are applied that remove items from the candidate set which do not (exactly) match the current user's preferences. The items that remain can then be sorted in different ways [169]. In the system proposed in [171], for example, the user preference model is updated after a user critique by adjusting the weights of the attributes that are involved in the critique. Then, Multi-Attribute Utility Theory (MAUT) [67] was used to calculate the utility of each candidate item for generating top-K recommendations for the user. An alternative ranking approach was applied in [130], where a history-guided critiquing system was proposed that aims to retrieve recommendation candidates from other users' critiquing sessions that are similar to the one of the current user. In [39], a critiquing-based travel recommender system was implemented that computes recommendations based on the relevance of item attributes to user preferences based on the Euclidean Distance.
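The snippet below is a minimal sketch of a MAUT-style ranking step of the kind described above: attribute weights are adjusted after a user critique, and each remaining candidate's utility is a weighted sum of normalized attribute value scores. The items, weights, and value functions are hypothetical and only serve to illustrate the mechanism.

```python
def maut_utility(item: dict, weights: dict, value_fns: dict) -> float:
    """Weighted additive utility over item attributes (MAUT)."""
    return sum(w * value_fns[attr](item[attr]) for attr, w in weights.items())

def apply_critique(weights: dict, critiqued_attr: str, boost: float = 1.5) -> dict:
    """Increase the weight of the attribute the user just critiqued, then re-normalize."""
    updated = {a: (w * boost if a == critiqued_attr else w) for a, w in weights.items()}
    total = sum(updated.values())
    return {a: w / total for a, w in updated.items()}

# Hypothetical laptops, weights, and per-attribute value functions scaled to [0, 1].
laptops = [
    {"id": "L1", "price": 900,  "battery_h": 10},
    {"id": "L2", "price": 1400, "battery_h": 16},
]
weights = {"price": 0.5, "battery_h": 0.5}
value_fns = {
    "price": lambda p: max(0.0, 1.0 - p / 2000.0),   # cheaper is better
    "battery_h": lambda b: min(1.0, b / 20.0),       # longer battery life is better
}

weights = apply_critique(weights, "battery_h")       # user critique: "longer battery life"
ranked = sorted(laptops, key=lambda it: maut_utility(it, weights, value_fns), reverse=True)
print([it["id"] for it in ranked])                   # -> ['L2', 'L1']
```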
Some works consider both long-term and short-term preferences of users when making recommendations [4,82,123,130]. The Adaptive Place Advisor system [142] represents an early example of combining short-term and long-term preferences. Here, the user's current query is expanded by considering the probability distribution of the user's past preference for item attributes, based on her/his short-term constraints (within a conversation) and long-term constraints (over many conversations). This expanded query was then used to retrieve and rank the items for recommendation. In [130], the authors proposed to leverage the successful recommendation sessions in the previous conversations to improve the efficiency of the current session (i.e., to shorten its length).
More recent works rely on machine learning models and background datasets for the recommendation task. One common approach is to train a model on the traditional user-item interaction matrix, e.g., based on probabilistic matrix factorization [29], and to then combine the user's current interactions with the trained user and item embeddings. In another approach [4], the authors rely on a content-based method based on item features and the user profile in the cold-start stage, and then switch to a Restricted Boltzmann Machine collaborative filtering method once a sufficient number of preference signals is available. In [172], a hybrid multi-memory network with attention mechanism was trained to find suitable recommendations based on item embeddings and the user's query embedding. Here, the item embedding was based on the item's textual description, and the user's query embedding encoded the user's initial request and the follow-up conversations during the interaction. A hybrid model was also proposed in [139], which used Factorization Machines to combine the dialogue state-represented with an LSTM-based belief tracker for each item facet-user information, and item information to train the recommendation model. In the video recommender system presented in [30], finally, an RNN-based model was built for making recommendations, based on the topics selected by the users and their watching history.
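As an illustration of the general pattern of combining pre-trained embeddings with in-session feedback, the sketch below builds a session vector from the embeddings of items the user just expressed liking for and scores candidates by dot product. The embedding values are random placeholders rather than trained factors from any of the cited models, and the aggregation by averaging is an assumption for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder item embeddings; in practice these would come from a model
# trained offline, e.g., via (probabilistic) matrix factorization.
item_embeddings = {item_id: rng.normal(size=16) for item_id in ["m1", "m2", "m3", "m4", "m5"]}

def session_user_vector(liked_items: list[str]) -> np.ndarray:
    """Represent the ongoing session as the mean embedding of positively mentioned items."""
    return np.mean([item_embeddings[i] for i in liked_items], axis=0)

def recommend(liked_items: list[str], k: int = 2) -> list[str]:
    user_vec = session_user_vector(liked_items)
    candidates = [i for i in item_embeddings if i not in liked_items]
    scores = {i: float(item_embeddings[i] @ user_vec) for i in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# The user expressed liking for two movies during the conversation.
print(recommend(["m1", "m3"]))
```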
In some cases, application-specific techniques were applied for the recommendation task. In [167,168], for example, the CRS features a visual dialogue component, where users can give feedback based on the images, e.g., "I prefer blue color". To implement this functionality, the system proposed in [167] implemented a component that encoded item images and user feedback using a convolutional neural network, and then combined these encodings as an input to both a response encoder and a state tracker. Furthermore, various types of user behaviors (i.e., viewing, commenting, clicking) on the visually represented recommendation were considered in a bandit approach to balance exploration and exploitation.
Explain.
The value of explanations in general recommender systems is widely recognized [51,106,143]. Explanations can increase the system's perceived transparency, user trust and satisfaction, and they can help users make faster and better decisions [45]. However, according to our survey, few papers so far have studied the explanation issue specific to CRS.
In the context of critiquing-based systems, [110] examined the trust-building nature of explanations. In this work, an "organization-based" explanation approach was evaluated, where the system showed multiple recommendation lists to the user, each of them labelled based on the critiquing-based selection criteria, e.g., "cheaper but heavier". A more recent interactive explanation approach for a mobile critiquing-based recommender was proposed in [69], where the textual explanations to be shown to the user were determined based on the user's preferences and constructed from pre-defined templates.
Providing more information about a recommended item, e.g., in the form of pros and cons, is a typical approach when providing explanations. Generating such item descriptions in a user-tailored way in the context of CRS was proposed in [43] and [150]. In such approaches, the users' feedback during the conversation can influence which attributes are mentioned in the item descriptions shown to the user in the recommendation phase. Furthermore, the user preferences can be considered to order the arguments and to help determine which adjectives and adverbs to use in the explanation [43].
In [101], two kinds of explanations were implemented in a CRS for movies. One was simply based on the details of a given movie, whereas the other connects the given user preferences with item features through a graph-based approach to create a personalized explanation. Another graph-based approach following similar ideas was proposed in [97], where a knowledge-augmented dialogue system for open-ended conversations was discussed. In this approach, relevant entities and attributes in a dialogue context were retrieved by walking over a common fact knowledge graph, and the walk path was used to create explanations for a given recommendation. In [109], finally, a human-centered approach was employed. By analyzing a human-human dialogue dataset, the authors identified different social strategies for explaining movie recommendations. They then accommodated the social explanation in a conversational recommender system to improve the users' perception of the quality of the system.
Respond.
This category of tasks is relevant in user-driven or mixed-initiative NLP-based CRS, where the user can actively ask questions to the system, actively make preference statements, or issue commands. The system's goal is to properly react to user utterances that do not fall into the above-mentioned categories "Recommend" and "Explain". Two main types of technical approaches can be adopted to respond to such user utterances. One approach, also commonly used in chatbots, is to map the utterances to pre-defined intents, such as the ones mentioned in Table 1, e.g., Obtain Details or Restart. The system's answers to these pre-defined intents can be implemented with the help of templates. In the literature, various user utterances are mentioned to which a CRS should be able to respond appropriately. Examples include preference refinements, e.g., "I like Pulp Fiction, but not Quentin Tarantino" [101], requests for more information about an item, or a request for the system's judgement regarding a certain item, e.g., "How about Huawei P9?" [164].
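To illustrate the first, template-based way of responding, the sketch below maps a small set of pre-defined intents to answer templates that are filled with slot values extracted from the utterance. The intents, templates, and slot names are assumptions made for this example rather than the intent set of any particular system.

```python
# Illustrative intent-to-template mapping for the "Respond" task.
RESPONSE_TEMPLATES = {
    "obtain_details": "Here is what I know about {item}: {details}",
    "system_opinion": "{item} is well rated by users with similar preferences.",
    "restart":        "Okay, let's start over. What kind of item are you looking for?",
    "fallback":       "Sorry, I did not understand that. Could you rephrase?",
}

def respond(intent: str, slots: dict) -> str:
    template = RESPONSE_TEMPLATES.get(intent, RESPONSE_TEMPLATES["fallback"])
    try:
        return template.format(**slots)
    except KeyError:                      # a required slot was not recognized
        return RESPONSE_TEMPLATES["fallback"]

# Example: the user asked "How about Huawei P9?" and the intent detector
# classified this as a request for the system's opinion about that item.
print(respond("system_opinion", {"item": "Huawei P9"}))
print(respond("restart", {}))
```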
An alternative technical approach is to select or generate the system's responses by automatically training a machine learning model from dialogue corpora and other knowledge sources like in "end-to-end" learning systems, e.g., [27,63,64,75]. In an open-domain dialogue system described in [114], for example, an information retrieval model was used to retrieve an initial set of candidate answers from a Q&A knowledge base (an online customer service chat log). Then, an attentive sequence-to-sequence model was used to rank the candidate answers in order to determine answers with scores that are higher than a pre-defined threshold. If no existing answer was considered suitable, a sequence-to-sequence based model was used to generate the system's response.
Another example for such an approach is described in [105]. In this multi-modal recommender system, one RNN model was used to generate general responses such as greetings or chit-chat, and a knowledge-aware RNN model was trained to answer more specific questions. For instance, when the user asks for style-tip: "Will T-shirt complement any of these sandals?", the system may respond with "Yes, T-shirt will go well with these sandals" [105].
Finally, some approaches were proposed in the literature to deal with very specific dialogue situations. One example of such a situation is a conversational breakdown, where the system is unable to understand the user's input [98]. Possible repair strategies, such as politeness and apology strategies, were examined in the area of human-robot interaction to mitigate the negative impact of such a breakdown [71,74,136]. Various repair strategies based on communication theory, e.g., repeating or asking for clarifications, or strategies from explainable machine learning, e.g., explaining which parts of the conversation were not understood, can in principle be applied [6].
Supporting Tasks
Depending on the system's functionality, a number of additional and supporting computational tasks may be relevant in a CRS.
Natural Language Understanding. In NLP-based CRS, it is essential that the system understands the users' intents behind their utterances, as this is the basis for the selection of an appropriate system action [118]. Two main tasks in this context are intent detection and named entity recognition, and typical CRS architectures have corresponding components for these tasks. In principle, intent detection can be seen as a classification task (dialogue act classification), where user utterances are assigned to one or multiple intent categories [137]. Named entity recognition aims to identify entities in a given utterance and assign them to pre-defined categories such as product names, product attributes, and attribute values [164].
Although intent detection and named entity recognition have been extensively studied in general dialogue systems [137], there are few studies specific to CRS according to our survey, possibly due to the lack of a well-established taxonomy and large-scale annotated recommendation dialogue data. In an early approach [142], manually-defined recognition grammars were used to map user utterances to pre-defined dialogue situations, which is comparable to using pre-defined intents as described above in the context of the Respond task. An example of a more recent approach can be found in [164]. Here, a natural language understanding component for intent detection, product category detection, and product attribute extraction was implemented in a dialogue system for online shopping. For instance, from the utterance "recommend me a Huawei phone with 5.2 inch screen" the system should derive the intent recommendation, the product category cellphone, as well as the brand and the display size. To solve these tasks, the authors first collected product-related questions from queries posted on a community site, and then extracted intent phrases (e.g., "want to buy" and "how about") by using two phrase-based algorithms. A multi-class classifier was trained for intent detection of new user questions. As for product category detection, the authors employed a CNN-based approach that took the detected intent into account to identify the category of a mentioned product in a given utterance.
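A minimal, rule-based sketch of the two NLU subtasks discussed here (intent detection and entity/attribute extraction) is given below. Real systems such as [164] use trained classifiers and CNNs instead; the phrase lists, regular expression, and category lexicon here are purely illustrative.

```python
import re

INTENT_PHRASES = {                     # illustrative intent lexicon
    "recommendation": ["recommend", "suggest", "looking for"],
    "opinion":        ["how about", "what do you think of"],
}
CATEGORY_LEXICON = {"phone": "cellphone", "laptop": "notebook", "camera": "camera"}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, phrases in INTENT_PHRASES.items():
        if any(p in text for p in phrases):
            return intent
    return "unknown"

def extract_slots(utterance: str) -> dict:
    text = utterance.lower()
    slots = {}
    for keyword, category in CATEGORY_LEXICON.items():
        if keyword in text:
            slots["category"] = category
    screen = re.search(r"(\d+(?:\.\d+)?)\s*inch", text)   # e.g., "5.2 inch screen"
    if screen:
        slots["display_size"] = float(screen.group(1))
    return slots

utterance = "Recommend me a Huawei phone with 5.2 inch screen"
print(detect_intent(utterance), extract_slots(utterance))
# -> recommendation {'category': 'cellphone', 'display_size': 5.2}
```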
Neural networks were also used in other recent intent and entity recognition approaches [105,146]. For example, a Multilayer Perceptron (MLP) was used to predict the probability distribution over a set of pre-defined intent categories in [105]. A sequence-to-sequence model was used in [166] to reframe the user's query (e.g., "How to protect my iphone screen") into keywords (e.g., "iphone screen protector") that are then used in the recommendation process to identify candidate items.
Another supporting task in some applications is sentiment analysis, see, e.g., [54,75,100,105,173]. One typical goal in the context of CRS is to understand a user's opinion about a certain item. For example, whenever an item-e.g., a movie-is mentioned in an utterance, the sentiment of the sentence can be used to approximate the user's feelings about the item. This sentiment can then be considered as an item rating, which can subsequently be used for recommending other items using established recommendation techniques.
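The snippet below sketches this sentiment-to-rating idea in its simplest form: whenever a known item is mentioned in an utterance, a (here, trivially lexicon-based) sentiment score is turned into a pseudo-rating that can be fed to a standard recommender. The word lists, item catalogue, and mapping to a 1-5 scale are illustrative assumptions; actual systems use trained sentiment models and finer-grained, per-item analysis.

```python
POSITIVE = {"love", "loved", "great", "liked", "enjoyed"}
NEGATIVE = {"hate", "hated", "boring", "disliked", "awful"}
KNOWN_ITEMS = {"Pulp Fiction", "Titanic"}          # illustrative item catalogue

def utterance_to_pseudo_ratings(utterance: str) -> dict:
    """Turn item mentions plus sentence-level sentiment into item pseudo-ratings (1-5 scale)."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        rating = 5.0
    elif score < 0:
        rating = 1.0
    else:
        rating = 3.0                               # neutral or unknown sentiment
    # Note: all items mentioned in the sentence receive the same rating here;
    # sentences with mixed sentiment would require per-item sentiment analysis.
    return {item: rating for item in KNOWN_ITEMS if item.lower() in utterance.lower()}

print(utterance_to_pseudo_ratings("I really loved Pulp Fiction"))  # -> {'Pulp Fiction': 5.0}
```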
Specific Recommendation Functionality. Depending on the technical approach to generate recommendations, specific computational subtasks may be helpful or required to support the recommendation process. We will give examples from the context of critiquing-based approaches here. In [11,145], for example, the goal is to make "query suggestions" to users, where the term "query" is equivalent to a critique or constraint on the item features. In the mentioned approaches, the query suggestions (or modifications) are based on an extended analysis of the satisfiability of a query (i.e., will the suggested query lead to any results) or dominance relations between possible query suggestions. Generally, such query suggestions can be particularly helpful for users who have difficulties expressing their preferences. In the field of information retrieval, many approaches to query suggestion were proposed to assist users in expressing their information needs, see, e.g., [135] for a more recent example. Limited work however exists so far to apply such ideas to the context of CRS.
A related problem in constraint-based CRS is that in some cases the user's expressed preferences lead to the situation that either too many items remain for recommendation or that no item is left. Different approaches were proposed in the literature for query relaxation. In [142], for example, a relatively simple strategy was adopted to remove some constraints. More elaborated strategies were proposed in [56,94] and [124]. In this latter work [124], the authors also introduce the concept of "query tightening". Here, the idea is to add more constraints on item attributes in case the number of relevant items returned by the system would lead to a choice overload problem. Generally, like for the query suggestion approaches described above, similar query revision approaches (relaxation and tightening) were not explored to a large extent in the context of NLP-based CRS. An exception is the concept for a chatbot presented in [104], where the system tries to first identify the cause of the unsuccessful query and then asks the user to remove some preferences and to rank the item features by importance. Finally, instead of returning an empty result and asking the user to revise the preferences, approaches exist that automatically relax constraints and inform the user, e.g., that "There are 10 cameras less than 300 euro but their resolution is between 1 and 4 mega-pixels" [55,90].
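The sketch below illustrates the basic relax/tighten logic described above in a constraint-based setting: if no item satisfies all constraints, the least important constraint is dropped (and the user could be informed); if too many items remain, the dialogue manager is advised to ask for an additional constraint. The constraint names, the importance order, and the thresholds are invented for this example.

```python
def matches(item: dict, constraints: dict) -> bool:
    return all(item.get(attr) == value for attr, value in constraints.items())

def retrieve(items: list[dict], constraints: dict,
             importance: list[str], max_results: int = 5):
    """Return (result set, possibly relaxed constraints, advice for the dialogue manager)."""
    constraints = dict(constraints)
    while True:
        result = [it for it in items if matches(it, constraints)]
        if result:
            break
        if not constraints:
            return [], constraints, "no items at all"
        # Query relaxation: drop the least important remaining constraint.
        least_important = min(constraints, key=importance.index)
        del constraints[least_important]
    if len(result) > max_results:
        # Query tightening: too many options left, ask for one more preference.
        return result, constraints, "ask user for an additional constraint"
    return result, constraints, "present results"

cameras = [{"brand": "A", "type": "DSLR"}, {"brand": "B", "type": "DSLR"}]
# The user's constraints cannot all be satisfied (no mirrorless camera in the catalogue).
result, relaxed, advice = retrieve(
    cameras,
    constraints={"brand": "A", "type": "mirrorless"},
    importance=["type", "brand"],          # least important first
)
print(result, relaxed, advice)             # the "type" constraint gets relaxed
```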
Discussion
Our analysis shows that a wide range of technical approaches are used in the literature to support the main computational tasks of a CRS. For the problem of computing recommendations, for example, all sorts of approaches (collaborative, content-based, and hybrid) can be used within CRS. However, for the main task of explaining, we found that little CRS-specific research exists so far, and only a smaller set of the proposed CRS in the literature support such a functionality.
Another observation is that dialogue management is often sketched as a conceptual architectural component, but it is then implemented either in a rather static way with pre-defined transitions, e.g., see [86,172], or done implicitly during the intent recognition and mapping phase or determined by the choices of the preference acquisition strategy, e.g., slot-filling [162]. In some cases, the possible dialogue states are furthermore quite limited, e.g., the system can either decide to ask questions or to provide a recommendation [139]. Technically, in a few cases intent recognition and dialogue flow management are based on commercial tools, e.g., see [5,36].
In general, with the growing spread of chatbot applications, several commercial companies such as Google, Microsoft, Facebook and IBM have released frameworks or public APIs that implement some of the mentioned computational tasks and allow developers to create their own chatbots. These tools include Google's DialogFlow system, Facebook's Wit.ai, and IBM Watson Assistant, and they provide functionalities such as speech recognition, voice control, the identification of pre-defined intents from natural language utterances, dialogue flow management, response generation for specific intents, and the deployment of applications to commercial platforms. Examples of research works that used these services include [3,5,21,36,62,65,133]. Frameworks for the development of conversational systems are also provided by Microsoft through its Bot Framework and by Amazon for its Alexa assistant and smart speakers. A CRS for the travel domain that uses Amazon Echo smart speakers was, for example, presented in [4]. In general, however, these frameworks and services usually do not implement functionality that is specific to recommendation problems, but are designed to build general-purpose conversational systems.
Besides companies, also some researchers release their NLP-based CRS to the public. Examples include the VoteGoat movie recommender [36] and the ConveRSE framework for chatbot development [54,101].
EVALUATION OF CONVERSATIONAL RECOMMENDERS
In general, recommender systems can be evaluated along various dimensions, using different methodological approaches [131]. First, when a system is evaluated in its context of use, i.e., when it is deployed, we are usually interested mostly in specific key performance indicators (KPIs) that measure, through A/B testing, whether the system is achieving what it was designed for, e.g., increased sales numbers or user engagement [57]. Second, user studies (lab experiments) typically investigate questions related to the perceived quality of a system. Common quality dimensions are the suitability of the recommendations, the perceived transparency of the process, or the ease-of-use, see also [113]. Offline experiments, finally, do not involve users in the evaluation, but assess the quality based on objective metrics, e.g., the accuracy of predicting held-out ratings in a test set, by measuring the diversity of recommendations, or by computing running times.
The same set of quality dimensions and research methods can also be applied for CRS. However, when comparing algorithm-oriented research and research on conversational systems, we find that the main focus of the evaluations is often a different one. Since CRS are highly interactive systems, questions related to human-computer interaction aspects are more often investigated for these systems. Furthermore, regarding the measurement approaches, CRS evaluations not only focus on task fulfillment, i.e., if a recommendation was suitable or finally accepted, but also on questions related to the efficiency or quality of the conversation itself.
Overview of Quality Dimensions, Measurements, and Methods
Through our literature review, we identified the following main categories of quality dimensions investigated in CRS:

(1) Effectiveness of Task Support: This category refers to the ability of the CRS to support its main task, e.g., to help the users make a decision or find an item of interest.
(2) Efficiency of Task Support: In many cases, researchers are also interested to understand how quickly a user finds an item of interest or makes a decision.
(3) Quality of the Conversation and Usability: This category covers aspects of the dialogue itself, e.g., its fluency and understandability, as well as the usability of the system as a whole.
(4) Effectiveness of Subtasks: This category refers to the performance of individual system components, e.g., intent or entity recognition.

In each of these dimensions, a number of different measurements are considered in the literature. Task effectiveness, for example, can be measured both objectively (through accuracy measures, acceptance or rejection rates) and subjectively (through surveys related to choice satisfaction or perceived recommendation quality). Task efficiency is very often measured objectively through the number of required interaction steps, and shorter dialogues are usually considered favorable. The quality of the conversation is most often analyzed in terms of subjective assessments, e.g., with respect to fluency, understandability, or the quality of the responses. Finally, specific measurements for subtasks include intent recognition rates or the accuracy of the state recognition process.
From a methodological perspective, we found works that relied entirely on offline experiments, works that relied exclusively on user studies, and studies that combined offline experiments with user studies. Reports on fielded systems and A/B tests are rare. Examples of such works that discuss deployed systems include [20,30,32,55,60,104,114,164]. However, the level of detail provided for these tests is often limited, partially informal, or only considers certain aspects like processing times. Finally, we also found works without any evaluation or where the evaluation was mostly qualitative or anecdotal [4,73,160].
In the experimental evaluations, all sorts of materials (in particular prototype applications) and datasets were used. As discussed in Section 4, at least an item database is needed. Depending on the technical approach, also additional types of knowledge and data are used, such as logged conversations between humans, explicit dialogue-related knowledge such as supported intents, etc.
In Figure 5, we provide an overview of the most common evaluation dimensions and evaluation approaches, and give examples for typical measurements and datasets. In the following sections, we will discuss some of the more typical evaluation approaches in more detail.
Review of Evaluation Approaches
6.2.1 Effectiveness of Task Support. In traditional recommender systems, the most common evaluation approach is to determine-through offline evaluations-how accurate an algorithm is at predicting some known, but withheld user preferences. The underlying assumption is that systems with higher accuracy are more effective, e.g., in helping users find what they need. Objective accuracy measures such as the RMSE or the Hit Rate are sometimes also used to evaluate CRS. However, there are typically no long-term preferences available for conversational systems and the system only learns about the user preferences in the ongoing usage session. Therefore, alternative evaluation protocols are typically applied that rely on simulated users or user studies. Furthermore, researchers sometimes use specific objective metrics besides accuracy, and they also frequently rely on subjective quality assessments from users [27,75,139]. The objective and subjective quality measures are discussed below in detail.
Objective Measures. Accuracy measures like Average Precision, the Hit Rate or RMSE were for example used as part of the evaluations in [18,29,101] or [146]. In [29], a framework for interactive preference elicitation was proposed that learns which questions should be asked to users in the cold-start phase. To evaluate different strategies, the authors use real and simulated user profiles and report the average precision of the recommendations after each question-answering round. Similarly, a user simulator was used for the evaluation of a dialogue-based facet-filling recommender system based on deep reinforcement learning and end-to-end memory networks in [146] and [139]. The simulator in [146] was based on real user utterances extracted from a dataset about restaurant reservations [63]. The objective measures included the recommendation accuracy (median of ranking and success rate), as well as the proportion of the simulated users who accepted the recommendations. In [139], the "online" experiments were based on a dataset collected through crowdworkers and the objective measures included Average Reward of the reinforcement learning strategy and the Success Rate (conversion rate), i.e., the fraction of successful dialogues. The authors of [101] present a domain-independent CRS framework, and they use the Hit Rate to assess the effectiveness of different system components such as the recommendation algorithm or the intent recognizer. To make the measurements, they use the above-mentioned bAbI dataset as a ground truth, where each example contains the user preferences, the recommendation request and the recommended item. A similar evaluation approach based on ground truth information derived from different real-world dialogues and accuracy measures (RMSE, Recall, Hit Rate) was adopted in [27,64,75]. In such approaches, the system typically analyzes (positive) mentions of items (movies) in the ongoing natural language dialogue and use these preferences for the prediction task.
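For illustration, the snippet below computes two of the objective measures mentioned here, the hit rate and the success (conversion) rate, from a set of simulated dialogue outcomes. The structure of a "session" record is an assumption made for this example, not the format used in the cited evaluations.

```python
# Each simulated session records the items recommended in the final turn,
# the item the (simulated) user was actually looking for, and whether the
# dialogue ended with an accepted recommendation.
sessions = [
    {"recommended": ["m1", "m7", "m4"], "target": "m4", "accepted": True},
    {"recommended": ["m2", "m9", "m5"], "target": "m8", "accepted": False},
    {"recommended": ["m3", "m8", "m6"], "target": "m8", "accepted": True},
]

def hit_rate(sessions: list[dict], k: int = 3) -> float:
    """Fraction of sessions in which the target item appears in the top-k recommendations."""
    hits = sum(1 for s in sessions if s["target"] in s["recommended"][:k])
    return hits / len(sessions)

def success_rate(sessions: list[dict]) -> float:
    """Fraction of dialogues that ended with an accepted recommendation (conversion rate)."""
    return sum(1 for s in sessions if s["accepted"]) / len(sessions)

print(f"hit rate@3: {hit_rate(sessions):.2f}, success rate: {success_rate(sessions):.2f}")
```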
The focus of [18] was on implicit feedback in CRS, where this feedback was obtained from non-verbal communication acts. To assess the effectiveness of using such signals, the accuracy of rating predictions by a content-based recommender was evaluated using MAE and RMSE. In their approach, the ground truth for the evaluation was previously collected in a user study. In some ways, this approach is similar to [101] in that the effects of the performance of a side task-here, the interpretation of non-verbal communication acts-on the system's overall recommendation quality are investigated.
Given the possible limitations of pure offline experiments in the context of CRS, user studies are also frequently applied to gauge the effectiveness of a system. In the context of a critiquing-based system [23,24], for example, decision accuracy was objectively measured by the fraction of users who changed their mind when they were presented with all available options after they had previously made their selection with the help of the CRS. In [86], in contrast, the authors used task completion rates and add-to-cart actions as proxies, which measure how often users had at least one item in their cart and how many items they added on average, respectively.
Subjective Measures. Differently from objective measures that, e.g., record the user's decision behavior when interacting with the system or determine prediction accuracy using simulations, subjective measures assess the user's quality perception of the system. Such measurements can be important because even common accuracy measures are often not predictive of the quality of the recommendations as perceived by the users. In the reviewed literature on CRS, various quality factors were examined that are also commonly used for non-conversational recommenders, e.g., those discussed in the evaluation frameworks in [68] and [113].
For the critiquing-based systems discussed in [23,24], the authors therefore not only used decision accuracy (as an objective measure) but also assessed subjective factors such as decision confidence and purchase and return intentions. User satisfaction, either with the system's recommendations or the system as a whole, was additionally investigated in earlier critiquing approaches such as [112,125] and in other comparative evaluations [101,165]. The perceived recommendation quality was assessed in the speech-controlled critiquing system in [47], and in [152] the authors looked at user acceptance rates. In [62,81] and [109], finally, the authors considered several dimensions in their questionnaires, such as the match of the recommendations with the preferences (interest fit), the confidence that the recommendations will be liked, and trust.
6.2.2 Efficiency of Task Support. Critiquing-based CRS approaches, in particular, are traditionally evaluated in terms of the efficiency of the recommendation process. Specifically, one goal of generating dynamic critiques is to minimize the number of required interactions until the user finds the needed item or accepts the recommendation. Such evaluations are often done offline with simulated user profiles. One assumption, also in approaches that are not based on critiquing, is that the simulated users act rationally and consistently, i.e., they will not revise their preferences during the process.
Examples of works that measure interaction cycles in critiquing approaches include [47,86,89,91,122,125,147,171]. The number of required interaction stages was also one of usually multiple evaluation criteria for chatbot-like applications, e.g., [53,62,101,154], and a shopping decision-aid in [152]. In the context of learning-based systems, the number of dialogue turns in a two-stage interaction model was measured in [139]. The usage of such measures is however rather uncommon for natural language, learning-based dialogue systems.
Besides the number of interaction stages, task completion time is sometimes used as an alternative or complementary way of objectively measuring efficiency, e.g., in [62,86]. In [54], the authors, among other aspects, compared the efficiency of different interaction modes with a chatbot: NLP-based, button-based, and mixed. They measured the number of questions, the interaction time, and the time per question in the dialogue. A main outcome of their work was that pure natural language interfaces led to less efficient recommendation sessions, in part due to problems of correctly interpreting the natural language utterances.
In the mentioned papers, shorter interaction or task completion times are generally considered favorable. Note however, that in some cases longer sessions are desirable. In particular, longer interaction times might reflect higher user engagement and, as in [62], correspond to a larger number of listened songs in a music application. In [28], the authors compared a voice-based and visual output system and measured the number of options that were explored by the users. In this context, note that the exploration of more items can, depending on the application, both be a sign that the user found more interesting options to inspect and a sign that the user did not find something immediately and had to explore more options. In [165], the effects of using a voice interface for a podcast recommender were analyzed. Their results showed that users were slower, explored fewer options, and chose fewer long-tail items, which can be detrimental for discovery.
In some works, finally, subjective measures regarding the efficiency of the process are used, typically as a part of usability assessments. In [23,81,109,152] and [86] the authors ask the study participants about their perceived cognitive effort.
Quality of the Conversation and Usability Aspects. In a number of works, the focus of the evaluation is put on certain aspects of the dialogue quality and on usability aspects regarding the system as a whole. The general ease-of-use of the system was, for example, examined in [47,62,112,122]; the more specific concept of task ease was part of the user questionnaire in [154].
Regarding quality aspects of the conversation itself, various aspects are investigated in the literature. From the perspective of the conversation initiative, the authors of [81] and [109] measured the perceived level of user control. Whether or not the desire for control is dependent on personal characteristics was investigated in [62]. In addition to user control, perceived transparency was considered as a quality factor in [109]. A common way to establish transparency is through the use of explanations. Questions of how to design explanations for a recommender chatbot were investigated in [108]. The quality factors used in [154] were based on an early framework for evaluating spoken dialogue systems in [78]. They, for example, include adaptation (i.e., how fast the system adapts to the user's preferences), expected behavior (i.e., how intuitive and natural the dialogue interaction is), or the entertainment value. Furthermore, in [109] coordination, mutual attentiveness, positivity, and rapport were considered as additional desired factors of a conversation.
Looking closer at the content and linguistic level of the dialogues, many recent proposals based on natural language rely on the BLEU [107] score to assess the system's responses, e.g., [64,75-77,105]. With the help of this score, which was developed in the context of machine translation, one can compare the responses generated by the system with ground-truth responses from real human conversations in an automated way. As an alternative, the NIST score can be used, e.g., in [105]. Additional objective linguistic aspects that are measured in the literature include lexical diversity [46], perplexity (corresponding to fluency), and distinct n-grams (to assess diversity) [27]. In addition to these objective linguistic measures, researchers sometimes consider subjective assessments of the quality of the system responses in their evaluations, e.g., with respect to fluency, appropriateness, consistency, engagingness, relevance, informativeness, and the overall dialogue quality and generation performance [27,46,64,76,77,105,154].
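As an illustration of two of these automatic measures, the snippet below computes corpus-level distinct-1/distinct-2 (the ratio of unique to total n-grams across the system's responses) and, where NLTK is installed, a smoothed sentence-level BLEU score against a single ground-truth response. The example utterances are invented for illustration only.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: number of unique n-grams divided by total n-grams."""
    ngrams = Counter()
    for r in responses:
        tokens = r.lower().split()
        ngrams.update(zip(*[tokens[i:] for i in range(n)]))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

system_responses = [
    "i think you would like this movie",
    "i think you would like this song",
    "maybe try a comedy from the nineties",
]
print("distinct-1:", round(distinct_n(system_responses, 1), 3))
print("distinct-2:", round(distinct_n(system_responses, 2), 3))

# Sentence-level BLEU against one reference response (requires nltk).
try:
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    reference = "you might like this movie".split()
    hypothesis = "i think you would like this movie".split()
    bleu = sentence_bleu([reference], hypothesis,
                         smoothing_function=SmoothingFunction().method1)
    print("BLEU:", round(bleu, 3))
except ImportError:
    pass
```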
Effectiveness of Sub-Tasks. In some works, finally, researchers focus on evaluating the performance of certain subtasks. Again, such measurements can be either objective or subjective. As an objective measurement, the reward is often computed in approaches that rely on reinforcement learning [86]. In a critiquing system, the number of times a proposed critique was applied was investigated in [122]. In NLP-based systems, in contrast, researchers often evaluate the performance of the entity and intent recognition modules [77,101]. In the particular multi-modal CRS in [105], Recall was used for assessing the image selection performance. In terms of subjective measures, the interpretation performance, i.e., how well the system understands the input, was, for example, considered in [154].
Discussion
Our review shows that a wide range of different evaluation methodologies and metrics are used to evaluate CRS. In principle, general user-centric evaluation frameworks for recommender systems as proposed in [68] and [113] can be applied for CRS as well. So far, however, while user-centric evaluation is common, these frameworks are not widely used and no standards or extensions to them were proposed in the literature. In terms of objective measurements, typical accuracy measures are used by several researchers. Still, the individual CRS proposals in the literature are quite diverse, e.g., in terms of the application domain, interaction strategy, and background knowledge, and a comparison between existing systems remains challenging.
In NLP-based systems, the BLEU score is widely used for automatic evaluation. However, according to [79], the BLEU score, at least at the sentence level, can correlate poorly with user perceptions, see also [46]. In general, the evaluation of language models is often considered difficult [88] and task-oriented systems like CRS might be even more challenging to assess. These observations therefore suggest that BLEU scores alone cannot inform us well about the quality of the generated system utterances and that in addition subjective evaluations should be applied.
Researchers therefore often resort to offline experiments with simulated users or to user studies, where study participants have to accomplish a certain task. In offline studies, often a target (preferred) item is randomly selected, and then a rationally-behaving user is simulated that interacts with the CRS by answering questions about preferences or by providing feedback on explanations. Such a design, however, assumes that users a priori have fixed preferences towards items or item features. In reality, users may also construct or change their preferences during the conversation when they learn about the space of options. Therefore, it is not always fully clear to what extent such simulations reflect real-world situations. In user studies, in contrast, often realistic decision situations are explored and participants have to accomplish tasks like selecting a product in a shop or finding musical tracks for a birthday party. While such studies to some extent remain artificial, as usually no real purchase is made, such evaluations seem more realistic than the offline experiments described above. In general, relying solely on offline experimentation seems too limited, except for certain subtasks, given that any CRS is a system that has to support complex user interactions.
Finally, more research seems needed to understand (i) how humans make recommendations to each other in a conversation, and (ii) how users interact with intelligent assistants, e.g., what kind of intelligence they attribute to them and what their expectations are. Some aspects related to these questions are discussed, e.g., in [29,65,108,165]. With respect to how humans talk with each other, some analyses were done in [13] and [29]. In [13], the authors based their research on insights from the field of Conversation Analysis and correspondingly implemented typical real-world conversation patterns, albeit in a somewhat restricted form, in their technical proposal. In general, more work also needs to be done to understand the effects on the perceived quality of a system when certain communication patterns, such as explanations for a system recommendation, are not supported, as is the case for many of the investigated systems.
OUTLOOK
Our study reveals a strong rise of interest in the area of CRS in the past few years, where the most recent approaches rely on machine learning techniques, in particular deep learning, and natural language based interactions. Despite these advances, a number of research questions remain open, as outlined in the discussion sections throughout the paper. In this final section, we briefly discuss four more general research directions.
A first question is "Which interaction modality supports the user best in a given task?". While voice and written natural language have become more popular recently, more research is required to understand which modality is suited for a given task and situation at hand, or whether alternative modalities should be offered to the user. An interesting direction of research also lies in the interpretation of non-verbal communication acts by users. Furthermore, entirely voice-based CRS have limitations when it comes to presenting an entire set of recommendations in one interaction cycle. In such a setting, a summarization of the recommendation set might be needed, as reading out several options to the user will in most cases not be practical. Second, we ask: "What are the challenges and requirements in non-standard application environments?" Today, most existing research focuses on interactive web or mobile applications, either with forms and buttons or with natural language input in chatbot applications. Some of the discussed works go beyond such scenarios and consider alternative environments where CRS can be used, e.g., within physical stores, in cars, on kiosk solutions, or as a feature of (humanoid) robots. However, little is known so far about the specific requirements, challenges, and opportunities that come with such application scenarios, or about the critical factors that determine the adoption and value of such systems. Regarding the usage scenarios, most research works discussed in our survey focus on one-to-one communication. However, there are additional scenarios which are not much explored yet, for example, ones where the CRS supports group decision processes [1,103].
A third question is "What can we learn from theories of conversation?", see also [141]. Regarding the underpinnings and adoption factors of CRS, only very few works are based on concepts and insights from Conversation Analysis, Communication Theory or related fields. In some works, at least certain communication patterns in real-world recommendation dialogues were discussed at a qualitative or anecdotal level. What seems to be mostly missing so far, however, is a clearer understanding of what makes a CRS truly helpful, what users expect from such a system, what makes them fail [95], and which intents we should or must support in a system. Explanations are often considered as a main feature for a convincing dialogue, but these aspects are not explored a lot. In addition, more research is required to understand the mechanisms that increase the adoption of CRS, e.g., by increasing the user's trust and developing intimacy [72], or by adapting the communication style (e.g., with respect to the initiative and language) to the individual user.
Finally, from a technical and methodological perspective, we ask: "How far do we get with pure end-to-end learning approaches?", i.e., with systems where, besides the item database, only a corpus of past conversations serves as input. Tremendous advances were made in NLP technology in recent years, but it remains questionable whether today's learning-based CRS are actually useful, see [59]. In part, the problem of assessing this aspect is tied to how we evaluate such systems. Computational metrics like BLEU can only answer certain aspects of the question. The human evaluations in the reviewed papers are also sometimes not very insightful, in particular when a newly proposed system is evaluated relative to a previous system by a few human judges. We should therefore revisit our evaluation practice and also investigate what users actually expect from a CRS, how tolerant they are with respect to misunderstandings or poor recommendations, how we can influence these expectations, and how useful the systems are considered on an absolute scale. Technically, combining learning techniques with other sorts of structured knowledge seems to be key to more usable, reliable and also predictable conversational recommender systems in the future. | 2020-04-03T19:08:30.436Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "1eb747ce0431f6f9c3a97fcea6f7b235191c3813",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2004.00646",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1eb747ce0431f6f9c3a97fcea6f7b235191c3813",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
237847319 | pes2o/s2orc | v3-fos-license | Changes in the Water Resources of Selected Lakes in Poland in the Period 1916–2020 as Information to Increase Their Availability
The historical effects of land development on water management currently require a new approach, in many cases involving attempts at the restoration of the quasi-natural state. This is evident in many regions in Poland, where the hydrographic network has been diminishing over the centuries, among others in favour of obtaining new agricultural land. Such activities overlap with natural processes causing transformations of the hydrosphere. The most serious problems currently include water deficits resulting from climate change and human activity. This paper employed archival bathymetric maps from the beginning of the 20th century to determine the scale of changes in the morphometric parameters of six lakes in western Poland. It was determined that over a period of more than a hundred years, the surface area (by 12.2%) and the original volume (by 13.9%) of the water bodies were considerably reduced. This situation was caused by both natural (overgrowing and shallowing) and anthropogenic (change in water level) factors. The obtained information points to the need for an inventory of historical bathymetric maps of lakes. In combination with modern research, this will allow for the determination of changes in the water resources of lakes and, in a longer-term perspective, the potential for their renaturisation. This knowledge is important in the context of the reconstruction of water resources in the territory of Poland, where deficits are recorded increasingly frequently. It should also be emphasised that the restoration of the natural capacity of water retention in lakes is a more economical solution and, most importantly, one that is not invasive for the environment.
Introduction
Changes in the natural environment constitute its permanent element determined by the exchange of energy and matter between its particular components. Due to the physicochemical properties of water (e.g., the dynamics of movement and the solubility of various substances), considerable changes have been observed in the environment, particularly with regard to its quality and quantity. The response of surface waters to the supply of pollutants or changes in the components of the water balance in a given catchment is observed relatively rapidly, with broad consequences for other water-related components (including hydrobiological conditions) [1-4]. The human perception of water in the natural environment has been subject to evolution determined by the civilisation needs of a given epoch. For example, analysis of water bodies in south-eastern Poland has shown that they have fulfilled different functions that have changed throughout the centuries, including defence, economic, industrial, and recreational functions [5]. Recent decades have revealed intensification of the problem of water deficits in many regions around the globe [6,7], determined by purely climatic factors (e.g., global warming), but also by improperly conducted water management. In the context of access to water resources, lakes are an important element of the hydrosphere. They show a high capacity for water retention, and due to their stability, they provide the possibility of easy access for use. Moreover, their presence contributes to the mitigation of extreme hydrological situations such as droughts and floods in the surrounding areas [8]. As they evolve, however, a vast majority of lakes are subject to successive disappearance. According to current research in this scope, this process is usually associated with changes in the surface area [9-11], whereas the lake basin as a three-dimensional concave landform is also subject to the process of accumulation (filling) of sediment, consequently leading to the disappearance of a lake. The disproportion between the analysis of changes in the surface area of lakes and their complex disappearance (identified as a reduction in water resources) is determined by insufficient comparative data, namely, bathymetric maps from two or more periods [12]. It should also be emphasised that the availability of bathymetric maps permits the determination of the primary morphometric properties of lakes, and is important for the calculation of the water, thermal, and chemical balances of these elements of the hydrosphere [13]. Therefore, knowledge concerning lake bottom relief is a frequently discussed issue in hydrological research [14-18]. In this context, studies concerning changes in isobaths reflecting the transformations of bottom topography should be considered scarce [19-21].
This paper presents a comparative analysis of bathymetric maps of six lakes in the Wielkopolskie Lakeland (western Poland; Figure 1) in the period 1916-2020. The objective of this article is the analysis of changes in the water resources of lakes, allowing for the determination of their direction, rate, and causes. In addition to its purely scientific aspect, namely documenting changes in selected elements of the hydrosphere, the objective also has an applicative character: it points to a new direction of activities aimed at the restoration of natural water resources, which requires detailed data on the scale of the related transformations to date.
Materials and Methods
The analysed lakes are of post-glacial origin, located on a moraine plateau, and their basins are primarily incised into glacial tills. Both the inflows and outflows through streams are negligible, and in certain periods of droughts, they completely disappear. The lake catchments are under intensive agricultural use, dominated by arable land, with a small share of forest areas.
Research on transformations of the natural environment requires detailed knowledge, which, next to the purely theoretical aspect, is necessary from the point of view of economic conditions. In the case of the study area, this can be referred to as the problem of water deficits. The average unitary runoff from the area is 2-3 dm³·s⁻¹·km⁻², constituting one of the lowest values in Poland. The implementation of the adopted research objective was based on the analysis of historical and modern bathymetric maps. The adopted starting point for determining changes in water resources and other lake parameters (i.e., area and depth) was information from the Geologische Karte von Preussen und benachbarten Bundesstaaten (1:25,000) from the beginning of the 20th century. In addition to rich, extensive geological content, the maps provided valuable information on other components of the environment. Considering the complete cartometricity of the documents, the information provided by the bathymetric maps of the lakes (Figure 2) is exceptionally valuable, because it refers to the shape of the lake basins from before intensified human pressure on the environment. The situation from the early 20th century was then compared with the current state through the preparation of modern bathymetric maps. On the date of the bathymetric measurements, the water level was measured by means of GPS RTK. Bathymetric measurements were conducted from a boat by means of a Garmin Fishfinder 100 echosounder. The data file included information on the location of the measurement point (x and y coordinates) and information on depth (z). During the measurements with the echosounder, point measurements were simultaneously conducted by means of a traditional probe, i.e., a weight suspended on a cord. Such an approach aimed at normalising the modern technique to that from a century ago. Moreover, the data were used for the verification of measurements performed by means of the echosounder. The results obtained from the echosounder were corrected in postprocessing based on the manual measurements. The measurements performed by means of the echosounder and the weight probe differed by an average of approximately 0.07 m. In the next stage, the data were fed into ArcMap software for the purpose of developing a digital model of the lake basin. This employed the topo-to-raster function, permitting the interpolation of point data to the continuous surface of the lake basin. Then, by means of the contour function, an isobath map was developed, with a depth contour corresponding to that presented in the historical maps. The obtained digital models of the lake basins were integrated with the digital terrain models available in Poland that were developed based on data from aerial laser scanning (LIDAR). The digital terrain model had a spatial resolution of 1 m and a mean error of up to 0.2 m. This way, a three-dimensional terrain model with the lake basin was obtained. The developed model permitted the derivation of modern curves of the surface area and volume of the lakes. In the case of the historical maps, in the first stage, they were converted to the ETRS 1989 UWPP 1992 coordinate system. This process employed the methodology proposed by Kubiak-Wójcicka et al. [22] to convert raster maps to digital form and to validate the obtained products.
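The interpolation step can also be illustrated outside of ArcMap; the sketch below is a simplified stand-in for the topo-to-raster workflow, using synthetic sounding data and SciPy's generic gridding routine rather than the exact algorithm used in the study.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic echosounder soundings: x, y in metres, z = depth in metres.
x = rng.uniform(0, 500, 400)
y = rng.uniform(0, 300, 400)
z = 8.0 * np.exp(-(((x - 250) / 150) ** 2 + ((y - 150) / 90) ** 2))  # bowl-shaped basin

# Regular 5 m grid covering the lake; linear interpolation of the point depths.
xi, yi = np.meshgrid(np.arange(0, 500, 5), np.arange(0, 300, 5))
depth_grid = griddata((x, y), z, (xi, yi), method="linear")

# Cells outside the convex hull of the soundings remain NaN.
print("max interpolated depth [m]:", np.nanmax(depth_grid).round(2))
```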
Moreover, taking into account the information presented by Manowska and Szuba [23] regarding the precision of height measurements at the end of the 19th century declining with distance from the reference points, the development of digital terrain models of the areas adjacent to the lake basins involved referring them to the PL-KRON86-NH geodetic height system used in Poland. In order to reduce the errors associated with the direct transfer of heights found in the archival bathymetric maps, the surface relief of the terrain surrounding the lakes was also vectorised based on the contour lines. This way, an archival digital terrain model with lake bathymetry was obtained. The model was compared to the actual digital terrain model in the ArcMap software using the raster calculator and difference function. The differences in elevation in the areas adjacent to the lakes resulting from the overlaying of digital terrain models did not exceed 0.2 m. The second stage of the validation of the archival data involved overlaying the lake water surface ordinates recorded on the archival maps on the actual digital terrain model. This permitted the determination of the surface and spatial extent of the lakes. In the case of height differences, the digital terrain models developed based on archival data were matched with the actual models. The extracted volume and surface area from the archival and actual digital terrain models were used to plot graphs of depth against volume and surface area. The obtained results allowed for a comparative analysis in reference to changes in the surface area and volume of the lakes.
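Once archival and modern basin models are available as depth grids with a common reference level, area and volume curves and their change can be derived by simple summation over raster cells. The sketch below is purely illustrative: the two depth grids are random placeholders, and the 5 m cell size is an assumption, not a value taken from the study.

```python
import numpy as np

def area_and_volume(depth_grid, cell_area, level=0.0):
    """Surface area [m^2] and water volume [m^3] for a given lowering of the
    water level (level = 0 corresponds to the mapped water surface)."""
    wet = depth_grid > level                     # cells still under water
    area = wet.sum() * cell_area
    volume = np.nansum(depth_grid[wet] - level) * cell_area
    return area, volume

# depth_1916 and depth_2020 would be the gridded archival and modern basins
# (e.g., the output of the interpolation step above); here random placeholders.
rng = np.random.default_rng(1)
depth_1916 = rng.uniform(0, 10, (60, 100))
depth_2020 = depth_1916 * 0.85                   # e.g., shallowing by sedimentation

cell_area = 5 * 5                                # assumed 5 m raster resolution
a0, v0 = area_and_volume(depth_1916, cell_area)
a1, v1 = area_and_volume(depth_2020, cell_area)
print(f"area change:   {100 * (a1 - a0) / a0:.1f} %")
print(f"volume change: {100 * (v1 - v0) / v0:.1f} %")
```

Evaluating area_and_volume over a range of levels gives the depth-against-area and depth-against-volume curves mentioned above.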
Results
This study revealed considerable changes in the morphometry of the analysed lakes over the last hundred years. For the six cases analysed in this paper, a total decrease of 12.2% in surface area was recorded (Table 1), as well as a reduction of 13.9% in water resources.
The changes in the particular lakes were variable, largely due to their morphometric parameters and the factors affecting the particular water bodies. The smallest transformation occurred in the case of Lake Zajączkowskie, whose surface area decreased by only 4.4% and whose volume decreased by only 4%. It should be emphasised that the recorded changes primarily concerned the northern and southern parts of the lake; in the former, the development of an island was recorded (with a surface area of 918.0 m²), whereas in the map from the early 20th century this zone was only occupied by a belt of rush vegetation. The southern part covers a shallow bay isolated by a narrow isthmus, subject to considerable shallowing over the last hundred years, as confirmed by a 5 m isobath shift towards the middle of the lake. Moreover, the conducted field research suggests that the section connecting the main part of the lake with the bay is subject to intensive overgrowing and shallowing processes, and its current depth is approximately 0.9 m. Due to this, it can be assumed that, from a short-term perspective, this part will become completely separated from the main lake basin (Figure 3). Intensive overgrowing processes also concerned Lake Wielkie, whose small depths, mostly below 1.5 m (80% of the lake area), are favourable for vegetation succession. Vegetation succession covered the entire length of the shoreline, consequently reducing the surface area by 25% in comparison to the state from the beginning of the previous century. A particularly intensive overgrowing process occurred in the western and northern parts of the lake, contributing to the enlargement of the peninsula and a considerable reduction in the size of the bay (Figure 3). The intensive process of disappearance of the lake was confirmed by a cartographic study from the early 19th century (Figure 4). Although the presented map fragment is not cartometric, it can be noticed that the outline of Lake Wielkie is considerably larger, with an elongated southern bay that became separated in the later period. Interestingly, substantial diminishment of the lake was recorded along with an increase in the water level in comparison to the starting point from the early 20th century. This is related to the obstructed outflow of the Ostroroga River feeding the lake, limited by the overgrowing and shallowing of the bay in the northern part of the lake. A decrease in the water level was a key factor causing changes in Lakes Buszewskie and Lubosińskie (in the latter case leading later to its division into two independent lakes). The lakes are of flow-through character, connected by a system of ditches [24]. According to earlier research, the area was subject to melioration works that transformed its water relations, resulting, among other things, in a network of ditches and melioration canals, the drainage of large areas, and a decrease in the level of shallow groundwaters [25]. An example of such activities is the dense network of ditches approximately 1.5 km below Lake Lubosińskie Małe, draining water into the Lubosiński Canal. As a consequence of these works, the water level in the aforementioned lakes was considerably reduced, resulting in a reduction in their surface area (by 7% in the case of Lake Buszewskie and by 12.5% in the case of Lake Lubosińskie) and in the division of the latter into two independent lakes (Figure 3).
Discussion
The study results point to an evident transformation of the analysed lakes, corresponding with the broader research trend. The surface area of lakes has been declining at a global scale [26-29], and the rate and scale of such transformations are determined by local factors and the properties of particular lakes. Based on a similar methodology supported by historical cartographic materials [30-33], a similar direction of changes in the surface area of lakes has also been determined for other regions of Poland. The scale of the process has been variable, depending on the volume of the analysed data set and the morphometric properties of the lakes. With reference to research employing cartometric archival maps (i.e., from the mid-19th century onwards), it can be stated that such an approach has no limitations, and the studies are of reference value for modern data. Limitations occur in the case of using maps older than the temporal scope specified above [34].
Referring the obtained study results to the other, scarce papers addressing changes in bathymetry over a period of more than a hundred years confirms the earlier findings [35,36], pointing to a progressing reduction of water resources at a rate dependent on local conditions. One of the key properties of lakes distinguishing them from other elements of the hydrosphere is their high capacity for water retention, whose volume depends on the size of the lake basin. The volume of lakes is not permanent and is subject to changes, usually a decrease, considering their natural course of evolution leading to their disappearance as concave landforms. The rate of the process is largely variable, depending on the size of the lakes and the effect of natural and anthropogenic factors in the catchment. The supply of nutrients is important in this context. According to Gradke [37], the high amounts supplied in the 20th century accelerated the overgrowing process. The amount of water resources accumulated in a lake affects the rate of water circulation in the catchment. This is of key importance for shaping environmental (biodiversity) as well as economic conditions (agricultural irrigation, industrial purposes, transport, etc.). Using historical bathymetric maps and conducting modern soundings of lakes has permitted the determination of changes in the amount of water accumulated in lakes. Such a comparison not only shows the direction and scale of the transformation, but also has an applicative character. Water deficits, increasingly frequent due to the clearly observed climate change, require activities permitting the mitigation of such situations. Due to this, projects are implemented aimed at increasing the water resources in Poland through the expansion of objects accumulating water. The highest increase in this scope has been obtained through the damming of natural lakes and the construction of retention reservoirs [38]. The construction of artificial lakes is not always positively received by society, and the appearance of a new object in the environment substantially changes its functioning, not only in the vicinity of the lake, but throughout the catchment. Due to the cascade responses of many components of the environment, the effects of such decisions are difficult to predict in the context of their further functioning. Therefore, in addition to invasive hydrotechnical infrastructure, other solutions should be considered, including works aimed at the reconstruction of the lost water resources. The melioration of wetlands and the lowering of the water level of rivers and lakes were driven by the acquisition of new agricultural areas. According to Kaniecki [39], these activities have caused large-scale effects evident in an increase in the rate of water circulation in Wielkopolska and a decrease in the level of groundwaters. As a result, over a relatively short period of time, the newly obtained arable land became excessively dry, and the maintenance of appropriate soil class properties required additional costs. It should be emphasised that in the period analysed in the paper, the agrarian culture was subject to a radical change, primarily through an increase in the intensity of production per unit of surface area. Due to such an approach, the perception of water changed. In the past, it limited the expansion of agricultural land; today, its deficits in particular vegetation phases of plants can affect production yields.
As a result of various hydrotechnical works, the water level of many lakes has been lowered [40,41] and, in extreme situations, lakes have been completely drained [42,43]. In the context of civilisation changes, many previously undertaken melioration works, originally aimed at the optimal use of environmental resources for economic purposes, can currently be considered inappropriate or undesirable. This has been confirmed by the refilling of previously dried lakes with water [44], caused by the abandonment of the relevant measures regulating hydrological relations, among others due to their cost inefficiency. Therefore, one of the optimal solutions in the context of an increase in water resources is lake renaturisation, which aims to restore their water retention capacity. Such an approach corresponds with the research undertaken in this paper, assessing the magnitude of changes in water resources over the last hundred years, i.e., both the progressing natural processes of lake evolution and the successively intensifying human pressure. The postulate concerning the reclamation and expansion of the retention function of lakes transformed as a result of human activity has been reported before in reference to north-eastern Poland [45], where the first activities related to the restoration of lakes to their original state have already commenced. In addition to water resources, other benefits of lake renaturisation can be illustrated by the example of Lake Ardung. In the long-term perspective, the lake will become less prone to degradation, the shore zone with high primary production, serving as a biofilter for pollutants supplied from the catchment, will increase, and soil moisture will improve, among other things [46].
The effects of climate change have been increasingly commonly observed in recent years. The close relationships of the atmosphere and hydrosphere point to a considerable transformation of the latter. This concerns, for example, thermal and ice conditions and water level fluctuations [47-50]. Climate conditions overlap with human activity, additionally intensifying their impact [51]. In this context, access to water is becoming of key importance in many regions of the world, and its amount may continue to decrease due to the forecasted further climate warming.
Conclusions
The conducted analysis of the water resources of selected lakes in Central Europe showed their evident decline over the last hundred years, varying in particular cases from 4% to 37%. This situation has resulted from their natural evolution (shallowing and overgrowing) and human activity (hydrotechnical works). In the context of environmental transformations (natural and artificial), it is important to provide conditions for an increase in water resources through new technical solutions aimed at an increase in retention, but also through the reconstruction of many elements of the hydrosphere that have become redundant at different stages of civilisation development or have restricted such development (according to the contemporary assumptions). The current and future situation faced by humanity appears completely different, and its further optimal functioning depends on access to water. The study results revealed the magnitude of the lost water resources and encourage undertaking broader research aimed at the development of a framework for their renaturisation.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-09-01T15:05:46.662Z | 2021-06-29T00:00:00.000 | {
"year": 2021,
"sha1": "b724176ec8e0ba84b9fcf6245edac663b4a6c595",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/13/7298/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "47d8589d952b63dfc8f07f170c2c723419fa5414",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
17219077 | pes2o/s2orc | v3-fos-license | Sensory and Microbiological Evaluation of Traditional Ovine Ricotta Cheese in Modified Atmosphere Packaging
Ovine ricotta cheese is a traditional Sicilian dairy product characterised by high humidity and a short shelf life (2-4 days when refrigerated). The increasing demand for fresh food has prompted manufacturers to develop special packaging techniques, such as modified atmosphere packaging (MAP), that can extend the shelf life and maintain the organoleptic characteristics of ovine ricotta cheese. The aim of the present study was to evaluate the shelf life of fresh MAP-packed ovine ricotta cheese by monitoring the microbiological, chemical, physical and organoleptic characteristics of the product. Samples of a single batch were packed in MAP or vacuum packed and stored at 4°C for 24 and 7 days, respectively. Water activity, pH, physicochemical parameters and microbiological characteristics were examined. A sensory panel rated the product’s main organoleptic characteristics (colour, odour, flavour and texture). Results showed that MAP controlled the development of any unwanted microflora, but did not affect the development of intrinsic lactic floras or chemical parameters. Sensory analysis revealed that overall the MAP-packed ricotta remained acceptable for up to 15 days of storage. The vacuum-packed ricotta cheese, however, showed a progressive deterioration in organoleptic characteristics from day 5 onward and therefore had a shorter shelf life. In conclusion, the ability of MAP to extend the shelf life of a traditional regional product (such as fresh ovine ricotta cheese) guarantees consumers a quality product and provides opportunities for manufacturers to expand their markets beyond national boundaries.
Introduction
Ricotta cheese is a traditional dairy product of some Italian regions, obtained from the thermo-acid coagulation of proteins in the residual whey of processed ovine or bovine cheese. Ricotta (literally 'twice cooked') is produced by first heating milk at 37°C to produce cheese and then treating the whey thermally at 80-90°C (Gammariello et al., 2009). This thermal treatment results in the denaturation of residual proteins, mainly albumin and globulin, which, when surfacing, retain the fat globules remaining from the cheese processing. During the production of Sicilian ovine ricotta cheese, the whey is enriched with whole raw ovine milk (5-15%) to increase the yield and improve the commercial characteristics of the product. It should be noted that, for the purposes of the present study, the term 'cheese' is used deliberately and without ambiguity for a dairy product obtained by the coagulation of whey proteins.
When properly refrigerated, ricotta generally has a shelf life of only a few days because of its high moisture level and pH (above 6). To meet the commercial needs of medium- and large-scale distribution while still maintaining the typical sensorial characteristics of the traditional product, it is necessary to extend the product's shelf life.
The aim of the present study was to evaluate the shelf life of MAP-packed ovine ricotta cheese by monitoring microbial, physicochemical, and sensory parameters of the cheese during its storage period.
Materials and Methods
This study was conducted on a batch of Sicilian ovine ricotta cheese produced on a dairy farm in the province of Enna. The ricotta cheese was produced according to the traditional process, using the whey of ovine milk from Pecorino cheese processing and adding 13.5% raw sheep's milk. The MAP-packed ricotta cheese (MAP ricotta) was portioned into 500-g packages using a gas mixture consisting of 30% CO₂ and 70% N₂. Some of the ricotta cheese was vacuum packed (VP ricotta) to be compared with the MAP ricotta. Packages consisted of two plastic cylindrical containers placed one inside the other; the inner container was pierced to allow the ricotta to drain. For the first 3 h after production the samples were left to drain at room temperature, then they were stored at 4°C overnight. After packing, they were refrigerated and sent to the Centro Latte e Lotta alle Mastiti laboratory of the Istituto Zooprofilattico Sperimentale della Sicilia, Palermo, Southern Italy. Laboratory analysis and sensory evaluations were performed on the MAP ricotta 24 h after production (T1) and at successive intervals up to 24 days (T3, T7, T11, T14, T17, T21 and T24). The VP ricotta was analysed at T1, T3 and T7 (the end of its shelf life). Furthermore, to mimic domestic storage conditions, part of the MAP ricotta was stored at 7±1°C.
Ricotta cheese samples were subjected to sensory evaluation by 12 judges (5 men and 7 women) aged 30 to 52. The panelists evaluated the main organoleptic characteristics: colour (white, ivory, yellowish), smell (lactic, acid, animal/stable), taste (sweet, acid, salty, bitter) and consistency (creamy, pasty, grainy, unctuous). After tasting the sample, each panelist rated the overall acceptability of the product (good, acceptable, not acceptable). The samples, which were identified by number in order to avoid affecting the panel's judgements, were tasted about 30 min after the package had been opened and were kept at room temperature.
Results
The results of the microbiological analysis of the MAP ricotta stored at 4±1°C (Table 1) showed an increase in TBC from 10³ to 10⁶ cfu/g from the third day of storage; this value remained constant for the duration of the observation period, settling around 10⁷ cfu/g. A single aliquot at T11 showed the presence of coliforms at a concentration of 10³ cfu/g. The isolated lactic microflora was mainly represented by mesophilic cocci, which increased to 10⁵ cfu/g in the first 7 days and then remained constant. Thermophilic cocci and mesophilic rods were absent in the first two samples (T1 and T3); from T7, their mean concentrations stabilised around 10³ cfu/g (Table 1). On Gram-positive, catalase- and oxidase-negative isolates, the genotyping PCR detected a predominance of Lactobacillus casei and Enterococcus faecalis and a low prevalence of Enterococcus gallinarum. pH and aw values are shown in Table 2; specifically, aw remained constant and then decreased slightly from T17, with values ranging from 0.97 to 0.94, whereas pH decreased from 6.54 to 5.96. The main commodity-related parameters (Table 2) reflect those commonly found in ovine ricotta cheese. The MAP ricotta samples stored at 7±1°C did not differ from those kept at 4°C in terms of microbiological characteristics (Table 3) and sensory evaluation.
The results of the microbiological analysis of the VP ricotta were similar to those of the MAP ricotta; TBC ranged from 6.7×10³ cfu/g (T1) to 8.8×10⁶ cfu/g (T7), and the final pH and aw values (T7) were 6.36 and 0.96, respectively. The only difference was that the lactic microflora was present in lower concentrations: mesophilic cocci were isolated only in concentrations from 1.0 (T1) to 7.2×10² cfu/g (T7).
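For comparing such counts across storage times it is common to work on a log10 scale; the short sketch below, using the values reported above purely as an arithmetic illustration, expresses the TBC changes as log10 increases.

```python
import math

def log_increase(cfu_start, cfu_end):
    """Increase of the total bacterial count in log10 units."""
    return math.log10(cfu_end) - math.log10(cfu_start)

# MAP ricotta: roughly 1e3 cfu/g at T1 rising to about 1e7 cfu/g later in storage.
print("MAP:", round(log_increase(1e3, 1e7), 2), "log10 units")

# VP ricotta: 6.7e3 cfu/g (T1) to 8.8e6 cfu/g (T7), i.e. over 6 days of storage.
vp = log_increase(6.7e3, 8.8e6)
print("VP:", round(vp, 2), "log10 units, ~", round(vp / 6, 2), "log10 per day")
```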
No pathogens (e.g. Listeria monocytogenes, Salmonella spp.), coagulase-positive staphylococci or E. coli were detected in any of the samples examined.
Sensory evaluation of the MAP ricotta showed that the sample was white at T1 only, ivory up to T17 and yellowish up to T24. The smell was classified as predominantly lactic up to T14; from T17, most of the evaluators perceived acidic notes that became more intense and were noted by everyone at T21 and T24. The taste was pleasant, sweet, slightly acidic or salty up to T14, then salty, salty-bitter and markedly acidic at T24. The consistency was creamy, grainy and slightly unctuous until T3, but became pasty and more unctuous from T7 to T24.
From T11 onwards, whey was found at the bottom of the perforated trays and the quantity increased in the following days as a result of incipient separation of the liquid and solid components. The VP ricotta was ivory up to T3, had a lactic odour, a sweet or slightly salty taste, and a creamy or slightly grainy consistency; almost all tasters considered it good. At T7 the majority of the panelists changed their taste rating from sweet to acidic and bitter; the consistency was rated as pasty and unctuous; and the pleasantness rating was good for three subjects, acceptable for four and not acceptable for five.
Sensory evaluation of the MAP ricotta cheese showed that pleasantness was judged as good by most of the tasters until T3 and consistently as acceptable overall up to T14, with the exception of two evaluators who rated the T11 ricotta cheese as not acceptable. At 17 days of storage, 7/12 tasters evaluated the MAP samples as acceptable, while at T24 the unanimous opinion was not acceptable. For the VP ricotta, the pleasantness assessment appeared better (good) up to T3, while at T7 5/12 evaluators rated it as not acceptable.
Discussion
The literature reports different findings about the effect of CO₂ on the growth of lactic acid bacteria, due to their microaerophilic nature. Our results showed increased concentrations of rod- and bacillus-shaped lactic flora after a lag phase of 3-7 days, together with a decrease in pH, as reported by Irkin (2011). According to Irkin (2011), vacuum packaging gives lower counts than MAP. Total bacterial counts up to T7 in the MAP and VP ricotta were similar; Irkin (2011) reports an increase of TBC in VP whey cheese after ten days of storage, a period that we did not investigate because the deterioration in the sensory characteristics made the VP ricotta cheese unacceptable.
The pH and aw values we observed favour the growth of lactic acid bacteria; the results of our physicochemical analysis were similar to those found in Sardinian ovine ricotta cheese (Scintu et al., 2001). Sensory evaluation of the MAP ricotta cheese produced overall judgments of acceptable up to T14. The 2/12 evaluators who considered the T11 ricotta cheese to be unacceptable were probably more sensitive to the presence of the coliforms detected in this sample, resulting from probable contamination of the aliquot during the packaging phase. The presence of whey in the bottom of the perforated trays from day 11 is considered to be a defect by Sicilian consumers, who are used to the fresh product, although it did not affect the sensory characteristics. The VP ricotta cheese at T3 was more popular, while at T7 most of the evaluators preferred the MAP product.
Conclusions
Overall, the ricotta cheese samples tested showed normal microbiological characteristics with a good lactic microflora content that remained almost constant for the entire observation period.
Modified atmosphere packaging of traditional Sicilian ovine ricotta cheese preserved the typical organoleptic characteristics of this product for up to 15 days, twice the shelf life of the vacuum-packed product, thus allowing producers to access markets which are further away and even abroad.
Since traditional ricotta cheese is highly perishable, if manufacturers want to guarantee a shelf life of 15 days they must ensure high standards of hygiene during the production process and maintain the cold chain during the distribution and marketing phases. The gas mixture used (30% CO₂ and 70% N₂) ensured that good sensory characteristics were maintained, but it was unable to inhibit the viability of any coliforms present. However, further work is necessary to establish the optimum gas concentration for packaging this product, in order to ensure an extended shelf life, the inhibition of undesirable microorganisms and the conservation of sensorial attributes.
"year": 2014,
"sha1": "f59bcdf0d29fd7712ba8a556467705ecd82607f4",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepressjournals.org/index.php/ijfs/article/download/ijfs.2014.1725/3922",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f59bcdf0d29fd7712ba8a556467705ecd82607f4",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
255497509 | pes2o/s2orc | v3-fos-license | No common factor for illusory percepts, but a link between pareidolia and delusion tendency: A test of predictive coding theory
Predictive coding theory is an influential view of perception and cognition. It proposes that subjective experience of the sensory information results from a comparison between the sensory input and the top-down prediction about this input, the latter being critical for shaping the final perceptual outcome. The theory is able to explain a wide range of phenomena extending from sensory experiences such as visual illusions to complex pathological states such as hallucinations and psychosis. In the current study we aimed at testing the proposed connection between different phenomena explained by the predictive coding theory by measuring the manifestation of top-down predictions at progressing levels of complexity, starting from bistable visual illusions (alternating subjective experience of the same sensory input) and pareidolias (alternative meaningful interpretation of the sensory input) to self-reports of hallucinations and delusional ideations in everyday life. Examining the correlation structure of these measures in 82 adult healthy subjects revealed a positive association between pareidolia proneness and a tendency for delusional ideations, yet without any relationship to bistable illusions. These results show that only a subset of the phenomena that are explained by the predictive coding theory can be attributed to one common underlying factor. Our findings thus support the hierarchical view of predictive processing with independent top-down effects at the sensory and cognitive levels.
Introduction
Our subjective impression of the outside world results from a complex interplay between the sensory information that our eyes send to our brain on the one hand, and the knowledge and experience that we collect throughout our life on the other hand. The influential predictive coding theory aims to explain this interplay by postulating that perception results from an active process of predicting the cause of the current sensory input (Clark, 2017; Friston, 2018). According to this theory, the brain forms a hypothesis about what caused a certain sensory impression. This hypothesis is then compared with the sensory input by sending a top-down prediction signal. If there is a match, i.e., the hypothesis is able to 'explain away' the sensory input, it is equated to our perception. If, however, there is a mismatch between the prediction and the input, termed the "prediction error," the information about it is sent in a bottom-up fashion for adjusting the prediction. The predictive coding view is often combined with the Bayesian inference approach, which considers the reliability of the two sources of information when prediction and sensory input are combined (Friston and Kiebel, 2009; Aitchison and Lengyel, 2017). When the top-down expectation (prior) is weak or unreliable, the sensory input (evidence) plays a major role in shaping perception. In contrast, when the sensory input is weak or ambiguous, the top-down prediction plays a major role in shaping the subjective outcome. The predictive coding theory is able to explain a wide range of perceptual and non-perceptual phenomena, ranging from the perception of visual illusions in healthy individuals (Hohwy et al., 2008; Kok and de Lange, 2015; Weilnhammer et al., 2017) to pathological states such as hallucinations (Powers et al., 2016) and psychosis, and even such complex phenomena as consciousness (Hohwy and Seth, 2020).
One type of visual illusion that is often interpreted within the predictive coding framework is the class of ambiguous (or "bistable") stimuli. Such stimuli contain visual information that can be interpreted in more than one way. When viewed continuously, such stimuli cause the subjective experience of the observer to alternate between perceiving either one or the other interpretation, with a change in perception occurring every couple of seconds (Long and Toppino, 2004; Brascamp et al., 2018; see also Figures 1A,B). The predictive coding theory yields a straightforward explanation for why the perception changes: after one of the possible interpretations has been selected as the likely cause of the sensory input, the feedback signal about this interpretation is sent back to the early processing stages. Since the top-down prediction contains only one of the interpretations, but the sensory input allows for two mutually exclusive ones, the second interpretation is sent forward as the prediction error, which is then used to update the prediction, favoring the second alternative. As long as the sensory input remains the same, there is a constant mismatch between the currently selected interpretation and the ambiguous sensory input, which causes constant prediction updating, and hence a constant change in perception (Weilnhammer et al., 2017; Brascamp et al., 2018).
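This verbal account can be caricatured in a few lines of code. The toy model below is not taken from the article and makes strong simplifying assumptions (a single prediction-error accumulator per interpretation, fixed gain, Gaussian noise), but it illustrates how constant prediction updating under an unchanging ambiguous input can produce spontaneous perceptual alternations.

```python
import random

random.seed(1)

evidence = {"A": 0.5, "B": 0.5}   # the ambiguous input supports both readings equally
error = {"A": 0.0, "B": 0.0}      # accumulated prediction error per interpretation
percept = "A"
switches = []

for t in range(2000):
    other = "B" if percept == "A" else "A"
    # the current hypothesis 'explains away' its share of the input ...
    error[percept] = max(0.0, error[percept] - 0.05)
    # ... while the unexplained alternative keeps producing prediction error
    error[other] += 0.01 * evidence[other] + random.gauss(0.0, 0.002)
    if error[other] > 1.0:        # updating the prediction flips the percept
        percept, error[other] = other, 0.0
        switches.append(t)

durations = [b - a for a, b in zip(switches, switches[1:])]
print("switches:", len(switches),
      "mean dominance duration:", round(sum(durations) / len(durations), 1), "steps")
```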
Among bistable stimuli, one subcategory is particularly intriguing, because one of the possible interpretations appears to be simpler and closer related to the sensory input, while the other represents a more complex illusory impression that is derived from the first, simpler one (Lorenceau and Shiffrar, 1992;Anstis and Kim, 2011;see also Figures 1C,D). Such illusions, termed "asymmetric bistable stimuli" (Grassi et al., 2018), are ideally suited for studying the individual proneness to illusory perception. In contrast to most visual illusions where illusory content is always perceived, here perception alternates between the non-illusory and the illusory interpretation, allowing the quantification of the individual tendency for the latter. The predictive coding explanation of the illusory interpretation is additionally supported by the patterns of brain activity that accompany the more complex illusory interpretation. When a more complex interpretation is perceived, a deactivation of early visual areas is observed, which is interpreted as the top-down prediction matching the sensory input (Murray et al., 2002;Fang et al., 2008;Zaretskaya et al., 2013;Grassi et al., 2018).
Pareidolia is a further form of illusory perception typically associated with predictive coding. Pareidolia is a tendency to recognize familiar forms, most commonly faces, in other meaningful or random objects or patterns (Zhou and Meng, 2020). Examples of pareidolia include recognizing animals in cloud formations, in old tree trunks, or even in radiological images (Alexander et al., 2021). The predictive coding framework offers the most straightforward account of this phenomenon. Specifically, a tendency to recognize familiar items can be seen as a manifestation of strong perceptual priors that overweigh the sensory information, especially in situations where the sensory input is weak (Salge et al., 2021). A typical experimental paradigm that induces pareidolia contains stimuli with degraded or ambiguous sensory input and a manipulation that enhances the participant's expectation (i.e., top-down prediction) about the presence of a certain stimulus (Liu et al., 2014; Pajani et al., 2015; Salge et al., 2021). Nevertheless, pareidolias can occur in everyday life even under clear visibility conditions and without a particular expectation of a certain stimulus (Voss et al., 2012). While the above phenomena are observed in healthy individuals, predictive coding theory is also capable of explaining clinically relevant perceptual and non-perceptual phenomena, such as hallucinations and delusions. Although a mild tendency for both phenomena is encountered in the general population, extreme forms may be symptoms of a clinical condition. Hallucinations are sensory impressions that are not related to the actual sensory input. In the context of predictive coding theory, hallucinations are thought to be caused either by a pathologically strong role of predictive mechanisms, or by a failure to accomplish a comparison between the prediction and the sensory input and to generate a more accurate prediction error (Powers et al., 2016). Interestingly, links between pareidolia tendency and the presence of visual hallucinations have been reported in some clinical populations that are known to experience hallucinations in the visual modality, such as Parkinson's disease or dementia with Lewy bodies (Shine et al., 2011; Onofrj et al., 2013). For example, it has been shown that patients suffering from dementia with Lewy bodies exhibit a higher pareidolia proneness compared to controls, both in images of natural scenes and in two-tone noise images (Uchiyama et al., 2012; Yokoi et al., 2014). Similar findings have been demonstrated for Parkinson's disease patients using ambiguous and unambiguous visual images (Shine et al., 2012). Patients who experienced visual hallucinations were more likely to erroneously identify alternative interpretations in unambiguous images (misperception) and to miss the alternative meaning in the truly ambiguous images. In line with the predictive coding explanation of hallucinations as a deficit of top-down influences on perception, recent neuroimaging findings demonstrate specific functional connectivity changes of the frontal areas that are associated with visual hallucinations in Parkinson's disease (Shine et al., 2015; Kajiyama et al., 2021; Revankar et al., 2021).
Figure 1. Bistable stimuli used in this study. Static stimuli (A,C) and dynamic stimuli (B,D). For dynamic stimuli (B,D), movement direction of items is indicated by red arrows.
In contrast to hallucinations, delusions are non-sensory phenomena and represent aberrant and rigid thoughts and beliefs that are not updated despite the contradicting evidence. As non-sensory phenomena, delusions require a predictive processing explanation at higher non-sensory levels. Crucially, however, dysfunctional sensory predictions are thought to lie at the core of higher-level delusional ideations, both in healthy individuals and as a manifestation of psychotic disease . According to this view, weak top-down sensory predictions (sensory priors) lead to excessive salience of bottom-up sensory events. The excessively salient and overweighed sensory events lead to the formation of aberrant higher-level beliefs that are based on distorted and biased evidence.
In the current study, we tested whether there is indeed a relationship between different perceptual and non-perceptual phenomena that are typically explained by the predictive coding theory in healthy adult individuals. A statistical relationship would indicate not only conceptual similarity, but also a common underlying mechanism. We tested a range of visual perceptual phenomena that contain a dissociation between the sensory input and the actual subjective experience of this input, including two classes of bistable illusions, and two types of pareidolia tasks, one with and one without explicitly induced expectations, and collected self-reports of subjects about their hallucinatory experiences and tendency for delusional ideations. We found a covariation between self-reported tendency for delusional ideations and pareidolia proneness, and a separate, independent covariation between different types of bistable stimuli. We conclude that bistable illusion perception on the one hand, and pareidolia as well as delusion tendency on the other hand, are driven by independent perceptual and cognitive mechanisms.
Participants
Eighty-two healthy adult volunteers participated in the experiment (mean age: 23.78, SD: 3.29, 55 female). The number of participants was determined using an a priori power analysis for detecting a correlation at p < 0.05 with a power of 80% or more, and considering effects found in previous studies with a similar sample size (Smailes et al., 2020). All participants had normal or corrected-to-normal vision (−0.5 diopters or better) and no history of neurological impairments or psychiatric disorders. Recruiting was performed through the university mailing list as well as through word of mouth. Since our study exploits individual differences in perception, we deliberately focused on a narrow age group of young healthy adults to reduce variability in perception that is related to age or other factors. Our inclusion criteria as advertised in the study announcements were: age between 18 and 35 years, normal or corrected-to-normal eyesight, no neurological or psychiatric illnesses, no regular medication intake. Subjects signed a written informed consent prior to participation. They received monetary reimbursement for their time and effort. Psychology students could alternatively receive course credit. The study was conducted according to the Declaration of Helsinki and was approved by the ethics committee of the University of Graz.
Stimulus and experimental procedures
Vision tests
Prior to the main data collection, we acquired an objective measure of participants' visual acuity by means of a visual acuity test and a stereoacuity test. Both tests were presented on a Samsung screen (1920 × 1080 pixels, diagonal display size, 22 inches, vertical refresh rate: 60 Hz, Samsung Group, Seoul, South Korea). First, the Freiburg Computerized Visual Acuity test (FrACT) based on Landolt C's with 24 trials (Bach, 1996) was conducted with both eyes open, and then separately for each eye at a distance of 230 cm from the participant. After this, participants were asked to put on red-blue polarized filters and to perform a random dot V stereotest (http://www.neuro-o.se/CritVis/cVis2.html#3DV) with 6 disparity levels, two trials per level in random order. The total stereoacuity score was determined by summing all difficulty levels of correct trials (maximum 42). Visual and stereoacuity data were used to ensure that our results cannot be explained by low-level visual factors.
All bistable illusions elicited two different interpretations while the physical input remained the same. In two of the illusions (Rubin Face-Vase illusion, Structure-from-Motion stimulus), the two perceptual alternatives were similar in content and complexity, making these illusions symmetric. The other two illusions (Coffer illusion, global-local motion illusion) were asymmetric, with one perceptual interpretation being simpler and the other more complex and illusory (Grassi et al., 2018). Illusions were selected such that there was one static (Figures 1A,C) and one moving dynamic (Figures 1B,D) illusion in each category. Every bistable illusion was presented on a gray background (0.5 of full luminance) with a red fixation dot (0.28° in diameter) in the center of the screen.
In the Rubin's Face-Vase illusion (Figure 1A), participants were presented with the ambiguous vase-face image (Rubin, 1915). In this image (6.94 × 6.94°), either two face profiles in black facing each other or a white vase could be perceived. In the Coffer Illusion (Figure 1C), the participants were presented with an image (6.94 × 6.94°) of what initially looks like a grid of squares (default percept). Upon longer observation, 16 circles (alternative percept) could appear in the image (Norcia, 2006). The dynamic structure-from-motion illusion ("SFM," Figure 1B) was produced by 350 black dots (dot diameter: 0.16°) that were randomly placed around the fixation point forming a cylinder (cylinder width and height 6.58 × 6.67°). The dots moved horizontally in opposite directions at a speed of 0.56°/s creating the effect of a 3D cylinder structure. Participants perceived the cylinder as rotating to the right or to the left. Finally, in the bistable global-local motion illusion ("Anstis," Figure 1D) four pairs of black dots (0.42° dot diameter, 1.23° center-to-center distance between two dots in a pair) were arranged in a square (side length: 5.89°). The pairs rotated in circular motion (0.5 revolutions/s), leading to the perception of either four pairs of dots moving locally (default percept) or of two large squares rotating on top of each other (Anstis and Kim, 2011). Short videos of the dynamic stimuli are available as Supplementary material.
Subjects were seated in a chair in front of the monitor with their head in a chin rest to minimize head movement. They were asked to view the stimuli and indicate their perception using the left and right arrow keys on a SteelSeries APEX M800 high-precision mechanical gaming keyboard (SteelSeries ApS, Copenhagen, Denmark). Participants had to keep pressing the key as long as the corresponding percept was experienced, pressing no key only if they perceived both at the same time or were unsure of what they saw. The left arrow was used for faces, leftward cylinder rotation, squares in the Coffer illusion and local motion in the Anstis illusion. The right arrow was used for the vase, rightward cylinder rotation, circles in the Coffer illusion and global illusory squares in the Anstis illusion. Before each illusion, the participants had a chance to familiarize themselves with the stimulus and to practice.
The four illusions were presented in randomized order. Each illusion was presented four times for 120 s with a 20 s break in between, making the total viewing time of each illusion 8 min long. After each bistable illusion block participants were given the possibility of a self-determined break.
Pareidolia tasks
Either following or preceding the bistable illusion block (in a counterbalanced order) subjects were presented with two different pareidolia tasks (Noise Pareidolia task, Picture Pareidolia task). Both tasks were generated using PsychoPy3 version 2021.1.4 (Peirce et al., 2019) and presented using the same setup in the order that was counterbalanced across subjects.
Noise Pareidolia task. The Noise Pareidolia task aimed at measuring the tendency of participants to perceive expected meaningful items in pure noise and followed the procedure described previously by Liu et al. (2014) using identical stimulation material, but a modified experimental paradigm (for details see below). The stimuli consisted of either faces or letters embedded in noise or of pure noise, yielding 6 experimental conditions: easy-to-detect faces, hard-to-detect faces, pure noise with face expectation, easy-to-detect letters, hard-to-detect letters and pure-noise with letter expectation (Figure 2). The pure-noise images were produced by randomly combining and uniformly spacing bivariate Gaussian blobs with different standard deviations. The same noise images were used in the face and letter task. The easy-to-detect faces and hard-to-detect faces were created from 20 grayscale face photographs (male and female). The faces in the photographs showed a front view and held a neutral face expression. Each face was placed in the center of the image. The face-noise images were created by blending a face photo with a pure-noise image. The letter-noise images consisted of nine Arial Roman/English letter images (a, s, c, e, m, n, o, r, u) and were created by placing a black, printed letter in the center of an image. Identical to the face-noise images, the letter-noise images were created by blending a letter image with a pure-noise image. A checkerboard image was used to neutralize any aftereffects of the images after each trial. For a detailed description of the stimuli see Liu et al. (2014).
Face and letter detection tasks were presented separately in a randomized and counterbalanced order. Each detection task consisted of three blocks and always started with the easy block where 20 easy-to-detect pictures and 20 pure noise images were presented in a randomized order. The next block contained 20 hard-to-detect images and 20 pure-noise images, and the last block included 40 pure noise images. The difficulty, type of stimulus to detect and the instructions were shown to the participants before each block. Each trial started with a fixation cross presented at the center of the screen for 200 ms followed by the stimuli (easy-to-detect, hard-to-detect, noise) for 150 ms, followed by a checkerboard image with a fixation crosshair for 200 ms. Afterwards participants were prompted to give their answer within 3 s by pressing a button.
Participants were asked to press the right arrow on the keyboard if they saw a face/letter in the image and the left arrow if they did not. They were informed that the difficulty of the task would increase from the first to the third block, and that the third block would be the most difficult one. Participants were not informed about the exact percentage of face-containing trials in the most difficult block. At the beginning of each condition participants were presented with five example trials consisting of easy-to-detect, hard-to-detect and pure noise images to familiarize them with the task. The progression from easy to pure noise blocks was intended to induce an expectation of a face/letter in the pure noise blocks. The rate of false positives (i.e., faces or letters identified in pure noise images) in the pure noise block was measured.
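As a rough illustration of the trial structure just described (200 ms fixation, 150 ms stimulus, 200 ms checkerboard mask, 3 s response window), a single trial could be sketched in PsychoPy as below. This is only a minimal sketch under assumptions, not the authors' actual experiment script: the window settings, stimulus arrays and response handling are placeholders, and a real experiment would lock timing to the screen refresh and loop over the three blocks.

```python
from psychopy import visual, core, event
import numpy as np

rng = np.random.default_rng(0)
win = visual.Window(size=(1024, 768), color='grey', units='pix')
fixation = visual.TextStim(win, text='+', height=30)
# Stand-ins for the real images: random noise and a checkerboard as numpy arrays in [-1, 1].
noise_img = rng.uniform(-1, 1, size=(256, 256))
checker = np.indices((8, 8)).sum(axis=0) % 2 * 2.0 - 1.0
stimulus = visual.ImageStim(win, image=noise_img, size=(256, 256))
mask = visual.ImageStim(win, image=checker, size=(256, 256))

def run_trial():
    fixation.draw(); win.flip(); core.wait(0.200)               # 200 ms fixation cross
    stimulus.draw(); win.flip(); core.wait(0.150)               # 150 ms stimulus (easy/hard/noise)
    mask.draw(); fixation.draw(); win.flip(); core.wait(0.200)  # 200 ms checkerboard mask
    win.flip()                                                  # blank screen for the response window
    keys = event.waitKeys(maxWait=3.0, keyList=['left', 'right'])
    return keys[0] if keys else None   # right = "saw a face/letter", left = "did not", None = no response

response = run_trial()
win.close(); core.quit()
```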
Picture pareidolia task. The picture pareidolia task was aimed at determining participants' tendency to produce a more complex interpretation of the sensory input. It was designed by the authors and consisted of color photographs of natural scenes with three types of context (woods, clouds and man-made objects) containing either a face, an animal or a human-like body (Figure 3). We deliberately included several object categories and not just faces to ensure that we were not investigating abilities related to face recognition, but a general ability to interpret visual information in a meaningful way. An object was hidden in 75% of the images of each context. Each object type was present at least twice in each context. The cloud context contained 3 animals, 3 human-like faces and 2 human-like bodies. The man-made context contained 2 animals, 3 human-like faces and 3 human-like bodies. The wood context contained 3 animals, 2 human-like faces and 3 human-like bodies. This yielded an equal number of images per object category. The images were identified by a web search (primarily on www.commons.wikimedia.org) and were preselected from an initial larger set of images (96) based on a short pilot online experiment with an independent group of participants (N = 10). The pilot study aimed to ensure that participants did not indicate any pareidolia in images intended for the "pareidolia-absent" image category but were able to identify the respective objects in the "pareidolia-present" image category. The pilot experiment led to a selection of 39 images in total. Three of these images were used for the practice trial and the remaining 36 images were made up of 24 "pareidolia-present" (8 images per context) and 12 "pareidolia-absent" (4 images per context) images.
Each trial started with a 1-s fixation cross, after which a picture was presented, and participants had 10 s to answer. Participants were instructed to view the pictures and use the mouse to left-click at the center of the object they saw if they saw an animal, a face or human-like body in the photo. If they did not see any figure in the picture they were asked to right-click into the center of the picture. The task started with a practice trial made up of three pictures. In this experiment we were primarily interested in the proportion of correctly identified objects (hit rate).
Questionnaires
The main experimental part was followed by 3 self-rating scales that measure hallucination and delusion proneness. An additional fourth questionnaire that assessed mindfulness (Baer et al., 2004) was also presented to the first 60 tested participants. These data were collected to address an entirely different research question and were therefore not a focus of the current study. The questionnaires were presented electronically using LimeSurvey, using the same setup as the stimuli. Participants were required to use the mouse for answering the questions. The order of the questionnaires was randomized and counterbalanced across individuals. Main quantitative information on the questionnaires is presented in the Supplementary material.
Figure 2. Example stimuli for the Noise Pareidolia face (A,B) and letter (C,D) detection task as well as pure noise (E) and a checkerboard mask (F). Reproduced with permission without changes from Liu et al. (2014).
The Cardiff Anomalous Perceptions Scale (CAPS) is a self-report scale that measures perceptual anomalies (Bell et al., 2006). The 32 items could be answered by the participants with "yes" or "no." For each item answered with a "yes," participants were required to rate this item on 5-point subscales on intrusiveness, frequency, and distress. An example item would be: "Do you ever see shapes, lights or colors even though there is nothing really there?" The translation of the CAPS into German was performed using the back-translation procedure (Brislin, 1970; Sperber, 2004). The initial translation was performed by the first author. Afterwards, a colleague with a very good to excellent command of English translated the German version back to English. Discrepancies between the original version and the retranslated version were resolved consensually.
The Launay-Slade Hallucination Scale-Revised (LSHS-R) is a self-report questionnaire assessing hallucination proneness in healthy individuals (Launay and Slade, 1981; Bentall and Slade, 1985). Participants were asked to rate each of the 12 items on a 5-point scale from "certainly does not apply to me" to "certainly applies to me." An example item would be: "In the past, I have had the experience of hearing a person's voice and then found that no-one was there." The existing German adaptation of the questionnaire was used (Lincoln et al., 2009).
The Peters et al. (1999) Delusions Inventory (PDI) measures delusional ideation in the general population. It contains a total of 40 items with a dichotomous response format (yes/no). For each item answered with a "yes," participants were required to rate this item on 5-point subscales on distress, preoccupation, and conviction. An example item would be: "Do you ever feel as if someone is deliberately trying to harm you?" The delusions inventory was added to determine to what extent abnormal perceptual effects quantified with CAPS and LSHS-R are restricted to the perceptual domain or are related to higher-level delusional tendencies. The existing German adaptation of the PDI was used (Lincoln et al., 2009).
Data analysis
Subject-level analysis
Subject-level analysis was performed using custom scripts written in GNU Octave version 5.2.0.
Bistable perception task. Subject's button presses during bistable perception blocks were used to determine duration of each perceptual phase. Because the distribution of individual-level dominance durations deviates from the normal distribution (Levelt, 1967;Brascamp et al., 2005), the geometric mean (defined as the n th root of the product of n values) instead of the arithmetic mean was used as a measure of average duration for each illusion of each subject. Additionally, for the asymmetric bistable stimuli we quantified the tendency to perceive the alternative interpretation ("global" for the Anstis stimulus, "circles" for the Coffer illusion), defined as the total time the alternative interpretation was perceived divided by the total time both percepts were perceived.
Pareidolia tasks. For the Noise Pareidolia task, in which we measured subjects' tendency to perceive expected stimuli in pure noise, subjects' reports for "stimulus present" in the noise-only blocks were summed across the face and letter tasks and divided by the total number of noise-only trials, yielding one false alarm rate value per subject. For the Picture Pareidolia task, where we measured subjects' tendency to overinterpret content that already had its standard meaning in the absence of explicitly induced expectations, subjects' correctly identified hidden items were summed over all contexts and all object types and divided by the total number of trials, yielding one hit rate value per subject.
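For illustration, the subject-level measures defined above (geometric mean dominance duration, alternative percept predominance, false alarm rate and hit rate) could be computed as in the following sketch. The original analysis was run with custom GNU Octave scripts; this numpy version only mirrors the verbal definitions, and the example data are hypothetical.

```python
import numpy as np

# Hypothetical percept phases for one bistable illusion, as (label, duration in s) pairs
# already extracted from the continuous key-press record.
phases = [('default', 4.2), ('alternative', 1.9), ('default', 6.1),
          ('alternative', 2.4), ('default', 3.8)]
durations = np.array([d for _, d in phases])

# Geometric mean dominance duration: the n-th root of the product of n values,
# computed via the mean of log-durations for numerical stability.
geometric_mean = np.exp(np.mean(np.log(durations)))

# Alternative percept predominance (asymmetric stimuli only): time the alternative
# interpretation was perceived divided by the total time either percept was perceived.
alt_time = sum(d for p, d in phases if p == 'alternative')
predominance = alt_time / durations.sum()

# Pareidolia summary scores (1 = "stimulus present" report / correctly localized object).
noise_only_reports = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 0])  # noise-only trials, face + letter tasks
picture_hits = np.array([1, 1, 0, 1, 1, 1, 0, 1])              # pareidolia-present picture trials
false_alarm_rate = noise_only_reports.mean()
hit_rate = picture_hits.mean()

print(geometric_mean, predominance, false_alarm_rate, hit_rate)
```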
Principal component analysis and pairwise correlations
The group-level analysis was performed in RStudio (2022.07.1+554). The subject-level analysis described above yielded a multidimensional dataset with 11 values per subject: dominance durations for each of the four bistable illusions (Rubin's Face-Vase Illusion, Structure-From-Motion Stimulus, Coffer Illusion, Anstis global-local illusion), complex percept predominance for each of the two asymmetric bistable illusions, false alarm rate in the pure noise blocks of the Noise Pareidolia task, hit rate for the Picture Pareidolia task and the scores of the three questionnaires (LSHS-R, CAPS and PDI). These 11 values were used to perform the principal component analysis with the prcomp function. Variables were standardized prior to PCA (i.e., subtracting the mean and dividing by the standard deviation of each variable). The number of extracted components was chosen based on Horn's parallel analysis with 10,000 iterations (Horn, 1965) as implemented in the paran library. To test which of the dependent variables covary, we conducted a varimax rotation on the extracted components (rotating the axes of principal components to maximize the separation of individual variables across components). Finally, since most variables were not normally distributed (Supplementary Figure 1), the correlation structure between individual variables was examined using Spearman's correlation coefficient.
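The same analysis chain could be sketched in Python as below. This is a hedged illustration only: the original analysis used R (prcomp, paran, varimax), the data matrix here is a random placeholder, the parallel-analysis criterion is a simplified version of Horn's procedure, and the varimax rotation assumes the third-party factor_analyzer package is available.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from factor_analyzer.rotator import Rotator   # assumed dependency for varimax
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 11))                 # placeholder for the 82 subjects x 11 variables

Xz = StandardScaler().fit_transform(X)        # standardize each variable
pca = PCA().fit(Xz)

# Simplified parallel analysis: keep components whose eigenvalues exceed the
# 95th percentile of eigenvalues obtained from random data of the same shape.
rand_eigs = np.array([PCA().fit(rng.normal(size=Xz.shape)).explained_variance_
                      for _ in range(1000)])
n_keep = int(np.sum(pca.explained_variance_ > np.percentile(rand_eigs, 95, axis=0)))

# Varimax rotation of the retained component loadings.
loadings = pca.components_[:n_keep].T * np.sqrt(pca.explained_variance_[:n_keep])
rotated_loadings = Rotator(method="varimax").fit_transform(loadings)

# Pairwise Spearman correlations between the original (non-normal) variables.
rho, p = spearmanr(X)
print(n_keep, rotated_loadings.shape, rho.shape)
```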
Results
Data for the Picture Pareidolia task was missing for two participants for technical reasons. For these two subjects the hit values were replaced by the group average. All remaining datasets were complete. The distributions, as well as means and standard deviations of all measured variables are presented in Figure 4. All but one variable, the false alarm rate in the Noise Pareidolia task, revealed sufficient variability. Despite our expectation and in contrast to a previous study that reported false alarm rates of 38% for letters and 34% for faces in this task (Liu et al., 2014), we observed extremely low false alarm rates, with 30.49% of all subjects having zero false alarms in the hardest task. We nevertheless included this variable into the subsequent PCA. Removing it entirely, or substituting it with the false alarm rate over the entire Noise Pareidolia experiment (i.e., including noise trials from easy and hard blocks) does not substantially change the distribution of variable loadings across the first two PCA components (see Supplementary Figure 2).
The outcome of the PCA analysis is shown in Figure 5. Following the parallel analysis ( Figure 5A), 3 principal components were kept for subsequent varimax rotation, which together explained 54% of the variance in the data.
Figure 4. Data distribution. Frequency histogram of each variable together with mean (also indicated as a vertical dashed line) and standard deviation values, color-coded according to the variable type: bistable stimuli (red), alternative percept predominance (green), Pareidolia tasks (blue) and Questionnaires (violet).
Following the rotation, the hit rate in the Picture Pareidolia task as well as the questionnaires showed similar loadings on the first rotated component (Figure 5B), while the bistable illusion measures loaded on the remaining components. This pattern is reflected in the correlation structure between variables, which is shown in Figure 6. As expected, there are positive moderate to strong correlations between the questionnaires, ranging from 0.46 to 0.61 (p < 0.01). There are also positive moderate to strong correlations between the reversal rates of bistable illusions, ranging from 0.30 to 0.66 (p < 0.01). Most importantly, there is a moderate positive correlation between the hit rate in the Picture Pareidolia task and the delusion score (PDI, R = 0.36, p < 0.01). Finally, there is no significant correlation of Noise Pareidolia with any other variable. Crucially, there is no positive association between any of the illusion measures on the one hand, and the pareidolia and questionnaire scores on the other hand.
Hit rate in the Picture Pareidolia task thus correlated most strongly with the delusion questionnaire ( Figure 7A), but delusion scores also correlated with hallucination scores. We therefore additionally wanted to determine the unique contribution of delusion scores to explaining the Picture Pareidolia proneness. We calculated Spearman's correlation coefficient between Picture Pareidolia hit rate and PDI score with CAPS scores regressed out. This analysis still showed a significant positive correlation ( Figure 7B). In contrast, repeating the same procedure for hallucination scores (CAPS, Figure 7C) versus Picture Pareidolia hit rate with delusion scores regressed out did not show a significant association ( Figure 7D). Similar results were obtained when using LSHS instead of CAPS as hallucination scores (Supplementary Figure 4). This shows that Picture Pareidolia proneness is best explained by the delusion tendency.
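The "regressed out" correlation described above could be reproduced schematically as follows: CAPS scores are removed from the PDI scores by ordinary least squares, and Spearman's correlation is then computed between the residuals and the Picture Pareidolia hit rate. The arrays are hypothetical stand-ins for the per-subject scores, and the exact residualization procedure used by the authors may differ in detail.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
caps = rng.normal(size=82)                      # hallucination proneness (CAPS)
pdi = 0.6 * caps + rng.normal(size=82)          # delusion proneness (PDI)
hit_rate = 0.3 * pdi + rng.normal(size=82)      # Picture Pareidolia hit rate

# Residualize PDI on CAPS (linear regression with an intercept).
design = np.column_stack([np.ones_like(caps), caps])
beta, *_ = np.linalg.lstsq(design, pdi, rcond=None)
pdi_residual = pdi - design @ beta

rho, p = spearmanr(hit_rate, pdi_residual)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```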
The visual acuity test revealed a mean visual acuity of −0.13 logMAR (SD = 0.21), with only 4 participants exceeding the range of normal visual acuity up to 0.65 logMAR. Neither visual acuity nor stereoacuity showed a significant relationship with the Picture Pareidolia hit rate (both p ≥ 0.23) or the PDI questionnaire score (both p ≥ 0.10).
There have been reports of gender differences in pareidolia perception (Pavlova et al., 2015; Proverbio and Galli, 2016). Although the effect was reported specifically for face pareidolia and may be related to women's general superiority in processing facial information (Zhou and Meng, 2020), given that our sample contained more female than male participants, we tested for potential gender differences in our main dependent variables. Both the Picture Pareidolia task and the PDI score revealed no statistically significant gender differences (Picture: t(80) = 0.13, p = 0.90; PDI: t(80) = −0.22, p = 0.83).
Discussion
In the current study, we investigated the connection between different visual and non-visual phenomena explained by the predictive coding theory by examining the individual differences in their parameters in healthy young adults. Our results revealed a known relationship between different types of bistable stimuli, but also a previously unreported link between pareidolia proneness and the tendency for delusional ideation. Crucially, there was no correlation between bistable perception and the remaining phenomena. Our data do not support the hypothesized link between different visual phenomena and suggest instead that they are governed by independent mechanisms. They further suggest that illusory perception in pareidolia is driven more by higher-level factors related to thought than by the sensory processing that drives bistable illusions.
No common factor for different types of illusory perception
Our study follows the line of research taken previously by other groups for finding common factors that may underlie potentially related perceptual phenomena. In one such study, Cappe et al. (2014) tested whether there is a common factor in visual perception akin to the g factor in intelligence. The authors analyzed correlations between a range of basic visual perception tasks such as visual acuity, backward masking, detection and discrimination, finding surprisingly few significant relationships between different aspects of visual perception. Such independence was also reported for different types of visual illusions (Cretenoud et al., 2019), and even between the different examples of the same visual illusion type (Cao et al., 2018). In the study by Cao et al. (2018), the authors examined a potential relationship between the alternation rates in different types of bistable stimuli, concluding that the degree of association varied from very weak to moderate and depended on the extent to which the stimuli engage similar perceptual mechanisms. We also observed weak to moderate correlations between dominance durations of different bistable stimuli in our study, with only one strong correlation between the Face-Vase illusion and the Coffer illusion, possibly because both involve figure-ground segregation mechanisms. Importantly, we observed no positive association between bistable illusions and pareidolias, which we hypothesized are both driven by perceptual top-down mechanisms. This and similar studies thus demonstrate that visual perception is less homogeneous than one would expect, and that even apparently similar visual phenomena may result from entirely different perceptual, and potentially also neural, processes.
Figure 7. Relationship between pareidolia proneness in the Picture Pareidolia task, tendency for delusional ideation and abnormal perceptual experiences.
Hierarchical nature of predictive coding
Overall, our main finding of independent variation in bistable perception properties on the one hand, and pareidolia proneness and delusion tendency on the other hand is broadly consistent with the idea that top-down mechanisms determine perception at different hierarchical levels, and that these levels may be largely independent from each other. And yet several specific results of our study contradict previous findings reported in the field. For example, one previous study showed a negative correlation between delusional ideation and perceptual stability of bistable stimuli and a positive correlation between delusional ideation and belief-induced perceptual bias in a group of healthy individuals, suggesting an opposite role of weak sensory and strong cognitive priors for the emergence of delusions (Schmack et al., 2013). We observed neither of the two relationships in our data. The absence of a negative relationship between perceptual stability and delusion tendency as measured with PDI scores could partially be explained by the specific measure used to quantify perceptual stability. In the study of Schmack et al., an intermittent stimulus presentation paradigm, in which a bistable stimulus is regularly removed and then shown again, was used to calculate "survival probability," i.e., the likelihood of the perceptual interpretation remaining the same across the stimulus removal periods. In the current study, we used a more classical bistable perception paradigm and measured perceptual stability as an average dominance duration. Average dominance duration is directly related to survival probability, as it measures the duration of perceptual stability before a spontaneous destabilization and a switch to the alternative percept occurs. The two measures appear to show a strong correlation across individuals (Figure 2; Leopold et al., 2002), and were shown to be driven by the same oscillatory mechanisms (Zhu et al., 2022). Nevertheless, we cannot exclude the possibility that survival probability in intermittent stimulus presentation may be a measure that better captures the perceptual stability mechanisms. Given the relatively weak association between bistable perception and PDI, even minor deviations in how perceptual stability is quantified could make a difference. Our data also show that correlations between perceptual stability measures derived from different bistable stimuli are moderate at best. It therefore remains an open question to what degree an association between PDI and survival probability in the structure-from-motion illusion would generalize to other bistable stimuli. Interestingly though, we found a weak negative correlation between the PDI score and the tendency for an alternative percept in the Anstis and Coffer illusions (and also for the dominance duration of the Anstis illusion, which is likely to be driven by a typically longer global percept). These correlations, although not reaching significance in our slightly smaller sample than that of Schmack et al., are similar in magnitude (for a comparison, see Pearson's correlation coefficients reported in Supplementary Figure 3). They do speak in favor of the hypothesis that delusional tendencies are related to weak top-down sensory predictions if the latter are expressed as the alternative percept predominance.
What drives pareidolia proneness
The key finding of our study is a relationship between the delusion tendency and pareidolia proneness as measured in the Picture Pareidolia task. Delusions are complex cognitive phenomena that involve interpretation, attribution, reasoning and causal inference. This relationship indicates that detecting pareidolias in natural scenes is not limited to perception and object recognition, but involves complex higher-level functions related to cognition and thought.
Our results complement findings of a previous study reporting that pareidolia proneness is related to hallucination tendency in healthy individuals (Smailes et al., 2020). In the current study, where we examined both hallucination and delusion tendency, we showed that the association with delusions is stronger and may be the actual primary driver behind the association with hallucinations. Interestingly, the same study measured schizotypy in healthy adults and could not find evidence for the latter contributing to the pareidolia-hallucination association (Smailes et al., 2020). Together with previous findings, our results suggest that the association between pareidolia and delusion tendency is not a reflection of an individual's general psychotic tendency, but is confined to one specific manifestation, namely delusions. It follows that increased pareidolia proneness previously reported in schizophrenia patients (Abo Hamza et al., 2021) may be explained solely by the core symptom of delusions.
Picture and noise pareidolia
An interesting aspect of our findings is that we observed effects only in the Picture Pareidolia task, but not in the Noise Pareidolia task. One rather technical reason for this could be the skewed distribution of false alarm rates for the Noise Pareidolia task in our sample, which could have led to floor effects (see Figure 4). However, this must have also been the case in the study of Smailes et al. (2020) (false alarm rate of 0.12 in their study versus 0.10 in this study), which nevertheless reported an association between a similar Noise Pareidolia task and hallucinations. More recently, a large-scale study with more than 1,000 participants revealed that the effect size for the association between hallucinatory experiences and false alarm rates in a detection task is rather small (Spearman's r = 0.14), even when the average false alarm rate is sufficiently high (Moseley et al., 2021). Effects of this size would not have been possible to detect with our sample size, which could be another reason for this negative finding.
Another, more substantial reason could be the different aspects of false perception that are measured in the Picture Pareidolia and the Noise Pareidolia tasks. The Noise Pareidolia task, which is also referred to as "reality discrimination" task, requires a high level of uncertainty about the sensory input and a suggestive instruction that induces expectations of a specific stimulus (Salge et al., 2021). Furthermore, the usage of Noise Pareidolia-type tasks for quantifying hallucinatory tendencies in general population requires high metacognitive confidence ratings that a participant saw something. False alarms under low confidence can be attributed to, e.g., a more liberal reporting criterion or social pressure of experimental situation (Salge et al., 2021). In contrast, the Picture Pareidolia task contains clear sensory input and less expectation to detect a specific stimulus (i.e., weaker sensory priors). It rather represents an individual's tendency to re-interpret or even overinterpret the existing input in another meaningful way. This aspect actually relates pareidolias in everyday life and in natural images to delusional ideations, which are thought to result from weak sensory priors which enable higher-level aberrant interpretation. While we provide the correlational evidence for this relationship in healthy individuals, further studies are needed to test whether it holds in, e.g., clinical samples.
Since the low false alarm rate in the Noise Pareidolia task in our study deviates significantly from the original report by Liu et al. (2014), which served as the basis for our paradigm, we would like to briefly discuss potential reasons for this discrepancy. In the hardest task block of Liu et al. (2014), participants were instructed that 50% of all images would contain the target item (face or letter depending on the block), thereby creating a strong expectation in participants that some images would contain targets. We slightly modified this instruction, only informing participants that in this block target items would be very hard to detect (i.e., without specifying the exact percentage of target-present trials). Our modification was aimed at reducing potential effects of socially desirable behavior in our participants, who would otherwise feel the pressure to report targets even though they did not perceive them. More technical factors, like monitor properties or surrounding experimental conditions, may have also played a role. Low false alarm rates were apparent to us in the pilot testing, and we intentionally reduced the duration of stimulus presentation compared to Liu et al. (2014) to make the stimulus processing harder for the participants, but this modification appeared to be insufficient.
Limitations
In this final section of our manuscript, we would like to address several limitations of our study. First, as discussed above, we observed low false alarm rates in the Noise Pareidolia task, which could have precluded finding significant associations with other dependent variables due to potential floor effects. Therefore, future studies using similar tasks should put more effort into careful calibration of experimental parameters and/or instruction to achieve higher false alarm rates. Second, a prevalence of women over men in our participant sample may have caused a potential bias if, for example, a certain dependent variable is more pronounced in individuals of one gender than of the other. We could rule out such biases for the significant associations by testing for the effects of gender post-hoc, but it remains unclear whether gender imbalance could have caused null effects for some associations. Therefore, future studies should also pay more attention to recruiting a gender-balanced participant sample. Finally, the results we report here are based on a sample of healthy individuals. It is not clear to what extent anomalous perception or delusional ideations reported by our participants are qualitatively similar to hallucinations and delusions that are encountered as symptoms in clinical populations. Future studies can be aimed at testing the relationship between pareidolia and delusional ideations in clinical populations with delusions, such as schizophrenia patients.
Conclusion
Overall, our results speak against a common mechanism behind different perceptual and non-perceptual phenomena explained by the predictive coding theory. However, they are consistent with the notion of the hierarchical predictive processing and suggest that lower-level perceptual and higher-level cognitive predictions operate independently. They also place the phenomenon of pareidolia at the higher cognitive level of the prediction hierarchy.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: Open Science Framework (OSF) https://osf.io/jnz4e/.
Ethics statement
The studies involving human participants were reviewed and approved by Ethics Committee of the University of Graz. The patients/participants provided their written informed consent to participate in this study.
Author contributions
ML: conceptualization, software, investigation, formal analysis, writing -original draft, writing -review and editing, and project administration. AI: validation, resources, supervision, and writing -review and editing. BH: conceptualization, software, and writing -review and editing. NZ: conceptualization, methodology, software, formal analysis, writing -original draft, visualization, supervision, and funding acquisition. All authors contributed to the article and approved the submitted version.
Funding
This work was funded by the BioTechMed-Graz, Austria (Young Research Group Grant to NZ) and by the University of Graz.
"year": 2022,
"sha1": "2ccd0014723e3bbe816405d2f77043728c972293",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2022.1067985/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "2ccd0014723e3bbe816405d2f77043728c972293",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Man or machine? Prospective comparison of the version 2018 EASL, LI-RADS criteria and a radiomics model to diagnose hepatocellular carcinoma
Background: The Liver Imaging Reporting and Data System (LI-RADS) and European Association for the Study of the Liver (EASL) criteria are widely used for diagnosing hepatocellular carcinoma (HCC). Radiomics allows further quantitative tumor heterogeneity profiling. This study aimed to compare the diagnostic accuracies of the version 2018 (v2018) EASL, LI-RADS criteria and radiomics models for HCC in high-risk patients.
Methods: Ethical approval by the institutional review board and informed consent were obtained for this study. From July 2015 to September 2018, consecutive high-risk patients were enrolled in our tertiary care hospital and underwent gadoxetic acid-enhanced magnetic resonance (MR) imaging and subsequent hepatic surgery. We constructed a multi-sequence-based three-dimensional whole-tumor radiomics signature by a least absolute shrinkage and selection operator model and multivariate logistic regression analysis. The diagnostic accuracy of the radiomics signature was validated in an independent cohort and compared with the EASL and LI-RADS criteria reviewed by two independent radiologists.
Results: Two hundred twenty-nine pathologically confirmed nodules (173 HCCs, mean size: 5.74 ± 3.17 cm) in 211 patients were included. Among them, 201 patients (95%) were infected with hepatitis B virus (HBV). The sensitivity and specificity were 73 and 71% for the radiomics signature, 91 and 71% for the EASL criteria, and 86 and 82% for the LI-RADS criteria, respectively. The areas under the receiver operating characteristic curves (AUCs) of the radiomics signature (0.810), LI-RADS (0.841) and EASL criteria (0.811) were comparable.
Conclusions: In HBV-predominant high-risk patients, the multi-sequence-based MR radiomics signature, v2018 EASL and LI-RADS criteria demonstrated comparable overall accuracies for HCC.
Background
Hepatocellular carcinoma (HCC) is the fifth most common malignancy and the second leading cause of cancer-related death worldwide [1]. Currently, all major clinical guidelines [2][3][4] recommend the noninvasive diagnosis of HCC based on characteristic imaging findings on computed tomography, magnetic resonance (MR) imaging and/or contrast-enhanced ultrasound.
With the advent of novel imaging techniques, HCC diagnostic criteria have been continuously updated to incorporate several new imaging features on various modalities, among which the European Association for the Study of the Liver (EASL) criteria have been widely considered as a reliable scheme [2]. However, many of these criteria lack clear lexicons regarding modality-specific imaging features [2,3]. Fortunately, the introduction of the Liver Imaging Reporting and Data System (LI-RADS) offered the opportunity to standardize the interpretation, reporting and data collection of imaging results in patients at risk for HCC [5]. However, the assessment of several LI-RADS features can be subjective due to variations in radiologists' experience and familiarity with the system [6,7]. In addition, LI-RADS has been developed and modified based predominantly on Western data [2,4], and validation of the system in Asian cohorts therefore remains essential.
Radiomics, which allows quantitative tumor behavior and heterogeneity profiling by extracting high-throughput data with advanced image processing techniques [8], may be a possible approach to improve the accuracy and reproducibility of HCC diagnosis. Previous studies have demonstrated the potential of radiomics in the diagnosis of focal liver lesions [9] and several other solid tumors [10][11][12]. However, evidence regarding the comparison between the accuracies of radiomics models and existing HCC diagnostic criteria remains limited, and few studies have optimized the radiomics model with the multidisciplinary approach.
Thus, the aim of this prospective single-center study was to develop a diagnostic radiomics model for HCC and to compare its accuracy with the version 2018 (v2018) of the LI-RADS [5] and European Association for the Study of the Liver (EASL) criteria [2] in high-risk patients with surgical histopathologic examination as the reference standard. We also explored the diagnostic benefit of the refined radiomics-clinical model incorporating both radiomics features and predictive clinical markers.
Study cohort
Ethical approval by the institutional review board and informed consent from all patients were obtained for this prospective study before the start of patient enrollment. From July 2015 to September 2018, we enrolled consecutive adult patients with hepatitis B virus infection and/or cirrhosis to undergo gadoxetic acid (Gd-EOB-DTPA)-enhanced MR imaging from our tertiary care hospital. The exclusion criteria were patients i) with Child-Pugh class C disease; ii) with any previous antitumoral treatment (e.g. locoregional, surgical, systemic, etc.); iii) with any contraindication to Gd-EOB-DTPA-enhanced MR imaging; iv) with inadequate image quality (e.g. substantial to severe arterial phase motion artifact); v) who did not receive or were not eligible for liver resection or transplantation in our center; vi) with inconclusive histopathologic diagnosis.
Imaging protocols
All MR examinations were performed on a MAGNETOM Skyra 3.0 T MR scanner (Siemens Healthcare, Erlangen, Germany). 0.025 mmol/kg of Gd-EOB-DTPA (Primovist®; Bayer Schering Pharma AG, Berlin, Germany) was injected at a rate of 2 ml/s. The detailed acquisition parameters were shown in the Additional file 1: Supplementary material and Table S1.
Image analysis
Qualitative analysis
All MR imaging analyses were performed independently by two abdominal radiologists (with 10 years and 4 years of experience in liver imaging, respectively) who were blinded to the other imaging results, any clinical information and the final pathological diagnoses. Before start of the image analysis, both reviewers were given at least 2 months of intensive hands-on instructions in the practice of EASL v2018 and LI-RADS v2018 on Gd-EOB-DTPA-enhanced MR imaging.
Observations were diagnosed as HCC if they displayed a combination of arterial phase hyperenhancement and washout on portal venous phase exclusively by the EASL v2018 criteria [2]. Using all major, ancillary and LR-M features, each observation was assigned to an LR category according to the LI-RADS v2018 criteria by navigating the diagnostic algorithm in a stepwise fashion [5]. LR-4V, LR-5V or LR-MV was defined as LR-TIV contiguous with LR-4, LR-5 or LR-M lesions, respectively. All patient images were provided to the reviewers in random order, and both reviewers were asked to leave an interval of at least 1 month between the LI-RADS v2018 and the EASL v2018 evaluations. Disagreements regarding the LR categorization and HCC diagnosis were resolved by consensus with a senior abdominal radiologist with over 30 years of liver imaging experience.
Radiomics analysis
3D regions of interest were placed manually by delineating along the entire tumor margin on T2-weighted, T1-weighted in-/opposed-phase, unenhanced, arterial phase, portal venous phase, and hepatobiliary phase images, avoiding major vessels and any marked necrotic areas, with the 3D segmentation software ITK-SNAP [13] (version 3.6.0-RC1; http://www.itk-snap.org). The free-hand outlines were independently drawn by the two radiologists who conducted the qualitative image analyses.
Radiomics analysis was performed with in-house texture analysis algorithms using the nonpublic scientific research 3D analysis software Analysis Kit (version 3.0.1.A, GE Healthcare, China). To standardize the imaging data of all MR images, the signal intensity was aligned to the same level by changing the formula of the original radiomics feature. For pixel-size processing, we applied the wavelet transformation and recalculated all features. Using bin size as the variable point, one of the key processes in the standardization of feature extraction was feature discretization, which had a substantial impact on the value of the radiomics features. A total of 396 radiomics features from the categories of histogram, gray-level co-occurrence matrix, run-length matrix, gray-level size zone matrix, form factor and Haralick were extracted from each MR image.
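The Analysis Kit software used here is not publicly available; purely as an illustration, a comparable extraction of histogram, co-occurrence, run-length and size-zone features from a 3D tumor mask could be configured with the open-source pyradiomics package as sketched below. The image and mask file names, the bin width and the resampling settings are assumptions, not the study's actual parameters.

```python
from radiomics import featureextractor

settings = {
    "binWidth": 25,                  # bin size for feature discretization (assumed value)
    "interpolator": "sitkBSpline",   # resample voxels to a common grid
    "resampledPixelSpacing": [1, 1, 1],
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # histogram-type features
extractor.enableFeatureClassByName("glcm")        # gray-level co-occurrence (Haralick-type) features
extractor.enableFeatureClassByName("glrlm")       # run-length matrix features
extractor.enableFeatureClassByName("glszm")       # gray-level size zone matrix features
extractor.enableFeatureClassByName("shape")       # form-factor-like shape features
extractor.enableImageTypeByName("Wavelet")        # wavelet-filtered copies of the features

# Hypothetical NIfTI paths for one sequence and its whole-tumor 3D mask.
features = extractor.execute("arterial_phase.nii.gz", "tumor_mask.nii.gz")
print(len(features), "feature values extracted")
```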
Construction and validation of the radiomics models
All nodules were randomly divided into a training cohort (137 nodules [60%] in 133 patients) and a validation cohort (92 nodules [40%] in 78 patients) using a repeated stratified splitting method to reduce the selection bias of a single validation dataset. In a multivariate analysis, the number of events should be no less than 10 times the number of included covariates [14]. Therefore, we applied the least absolute shrinkage and selection operator (LASSO) model [15] with 10-fold cross-validation to select radiomics features with the strongest diagnostic powers in the training data set. Radiomics features with an intraclass correlation coefficient over 0.80 between the two reviewers were considered stable and entered into further radiomics model construction [16]. A radiomics score (Rad-score) of each MR sequence was calculated by a linear combination of the selected radiomics features weighted by the corresponding LASSO regression coefficients as: Rad-score = b + a_1X_1 + a_2X_2 + ... + a_nX_n, where a_n is the LASSO regression coefficient of variable n, X_n is the value of variable n determined from the input MR image, and b is the intercept. A summarized Rad-score of all sequences was generated by a linear combination of the Rad-score of each sequence weighted by its logistic regression coefficient to construct the diagnostic radiomics signature. The radiomics signature was further integrated with clinical markers that were independently predictive for HCC diagnosis in the training cohort to formulate a radiomics-clinical nomogram with multivariate logistic regression analysis. The performances of the radiomics signature and radiomics-clinical nomogram were evaluated in the validation cohort (Fig. 2).
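As a hedged sketch of the Rad-score construction, an L1-penalized (LASSO-type) logistic regression with 10-fold cross-validation can select features and supply the weights of the linear combination. The original analysis was performed in R; the feature matrix, labels and split below are synthetic placeholders, and the cross-validated logistic LASSO is a stand-in for the authors' exact LASSO implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(229, 396))      # nodules x radiomics features for one MR sequence (placeholder)
y = rng.integers(0, 2, size=229)     # 1 = HCC, 0 = non-HCC (placeholder labels)

# Stratified 60/40 split into training and validation cohorts.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.4, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
lasso = LogisticRegressionCV(Cs=20, cv=10, penalty="l1", solver="liblinear",
                             scoring="roc_auc").fit(scaler.transform(X_tr), y_tr)

coef = lasso.coef_.ravel()
selected = np.flatnonzero(coef)      # features surviving with non-zero coefficients

# Rad-score = b + sum_n a_n * X_n over the selected features, here for the validation cohort.
rad_score_va = lasso.intercept_[0] + scaler.transform(X_va)[:, selected] @ coef[selected]
print(len(selected), "features selected;", rad_score_va.shape[0], "validation Rad-scores")
```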
Reference standard
Histopathologic examination of the resected or explanted liver was used as the reference standard for all lesions. Two experienced pathologists (with 8 years and over 20 years of experience in liver oncology, respectively), who were aware of the clinical data and imaging results for co-localization of the target lesions, independently performed gross and histologic analyses of all resected or explanted specimens. All disagreements were resolved by consensus. Histopathologic diagnoses of the hepatic lesions were established according to the World Health Organization classification [17].
Per-lesion diagnostic performances were assessed by sensitivities, specificities, positive predictive values (PPVs), negative predictive values (NPVs) and receiver operating characteristic (ROC) analysis. Diagnostic measures were compared with the McNemar test or the method described by DeLong et al [18], where applicable. Comparisons of diagnostic accuracies between the EASL and LI-RADS criteria were conducted in the combined cohort comprising all patients, while all comparisons were made in the validation cohort between the radiomics signature and EASL or LI-RADS criteria.
All statistical analyses were performed with R software, version 3.3.1 (The R Foundation for Statistical Computing, Vienna, Austria). P values for multiple comparisons were adjusted by the Bonferroni method, and p < 0.05 was considered statistically significant.
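The per-lesion diagnostic measures and the McNemar comparison can be illustrated as follows. The labels and predictions are hypothetical, the analysis was originally done in R, and DeLong's AUC comparison and the exact Bonferroni bookkeeping are only noted here rather than reimplemented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])   # 1 = HCC at histopathology (placeholder)
pred_a = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 1])   # e.g. EASL v2018 calls (placeholder)
pred_b = np.array([1, 0, 0, 1, 0, 1, 0, 1, 1, 1])   # e.g. LI-RADS LR-5 calls (placeholder)

tn, fp, fn, tp = confusion_matrix(y_true, pred_a).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
auc_a = roc_auc_score(y_true, pred_a)                # AUC from binary calls

# McNemar test on the paired agreement/disagreement of the two criteria with pathology.
both = np.sum((pred_a == y_true) & (pred_b == y_true))
a_only = np.sum((pred_a == y_true) & (pred_b != y_true))
b_only = np.sum((pred_a != y_true) & (pred_b == y_true))
neither = np.sum((pred_a != y_true) & (pred_b != y_true))
result = mcnemar([[both, a_only], [b_only, neither]], exact=True)

p_bonferroni = min(1.0, result.pvalue * 3)           # e.g. adjusting for 3 pairwise comparisons
print(sensitivity, specificity, ppv, npv, auc_a, p_bonferroni)
```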
Among the included patients, 201 (95%) were infected with HBV. No difference in the nodule type proportions (HCC, non-HCC malignancy and non-HCC benign lesion) or in any demographic, clinical or biological characteristic was detected between the training and validation cohorts (p > 0.05 for all).
Interrater agreement assessment
Table 2 summarizes the interrater reliability results of the EASL v2018 and the different LI-RADS categories for all 229 nodules. Agreement was substantial between the two reviewers for each LI-RADS category (κ = 0.7437), the combination of LR-5/LR-5V (κ = 0.6542), LR-4/LR-4V/LR-5/LR-5V (κ = 0.7109) and the EASL v2018 results (κ = 0.6809). Agreement was substantial to almost perfect for all LI-RADS major features and most ancillary and tie-breaking features (Additional file 2: Table S2). Agreement was not evaluated for nodule size or growth, which were provided to the reviewers.
Construction and validation of the radiomics models
After LASSO regression analysis in the training data set, a total of 18 features with nonzero regression coefficients were extracted from T1-weighted in-phase, opposed-phase, arterial phase, portal venous phase images and T2-weighted images (Additional file 3: Table S3). After multivariate logistic regression analysis, the summarized Rad-score revealing the radiomics information of all predictive sequences was generated as a linear combination of the per-sequence Rad-scores weighted by their logistic regression coefficients (Fig. 3a). Serum AFP (p < 0.001), HBsAg (p = 0.01), AST (p = 0.046), IBIL (p < 0.001) and ALB (p = 0.049) were significantly predictive of HCC after multivariate logistic regression analysis in the training data set and were incorporated with the Rad-score to formulate a radiomics-clinical nomogram (Fig. 3c).
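A minimal sketch of combining the Rad-score with the retained clinical markers into the radiomics-clinical model is given below. The data are synthetic, the variable names simply mirror those in the text, and the original modeling was done in R; this is an illustration of the multivariate logistic regression step, not the study's fitted nomogram.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 137  # size of the training cohort
df = pd.DataFrame({
    "rad_score": rng.normal(size=n),
    "AFP": rng.lognormal(3, 1, size=n),
    "HBsAg": rng.integers(0, 2, size=n),
    "AST": rng.normal(40, 10, size=n),
    "IBIL": rng.normal(10, 3, size=n),
    "ALB": rng.normal(40, 4, size=n),
})
# Hypothetical outcome loosely driven by the Rad-score, for illustration only.
p = 1 / (1 + np.exp(-0.8 * df["rad_score"]))
df["is_hcc"] = rng.binomial(1, p)

X = sm.add_constant(df[["rad_score", "AFP", "HBsAg", "AST", "IBIL", "ALB"]])
model = sm.Logit(df["is_hcc"], X).fit()
print(model.summary())            # per-predictor coefficients and p values
df["p_hcc"] = model.predict(X)    # predicted HCC probability (basis of the nomogram)
```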
Diagnostic accuracy of the radiomics models, EASL and LI-RADS criteria
Table 3 summarizes the diagnostic performances of the radiomics model, EASL and LI-RADS v2018 criteria by consensus.
The radiomics models
The AUCs of the radiomics signature were 0.861 and 0.810 in the training and validation cohort, respectively (Fig. 3b). These measures were 0.982 and 0.866 for the radiomics-clinical nomogram in the training and validation cohort, respectively. In the validation cohort, the sensitivity, specificity, PPV and NPV were 73%, 77%, 91% and 47% for the radiomics signature, and 77%, 68%, 89% and 48% for the radiomics-clinical model, respectively. No difference was detected between any paired diagnostic measure for the radiomics signature and radiomics-clinical model in the validation cohort (Fig. 3d) or for the radiomics signature in the training and validation cohorts (Fig. 3b).
Comparisons between the radiomics signature, the EASL and LI-RADS criteria
Diagnostic results by LR-5/LR-5V were used to represent the LI-RADS v2018 performances. After p value adjustment for multiple comparisons, the v2018 EASL and LI-RADS criteria yielded comparable diagnostic accuracies for HCC irrespective of underlying cirrhosis or lesion size. In the validation cohort, the EASL v2018 demonstrated significantly higher sensitivity than the radiomics signature in all nodules (p = 0.01), cirrhotic livers (p = 0.01) and nodules ≤2 cm (p = 0.03). The radiomics signature was more specific than the EASL (p = 0.01) and LI-RADS (p = 0.045) criteria in non-cirrhotic livers. The AUCs of all three diagnostic models were comparable in the validation data set.
Discussion
Both updated in 2018, the EASL and LI-RADS criteria are currently the most widely used diagnostic criteria for HCC. However, concerns have been raised for both criteria regarding their applicability in Asian cohort and with hepatobiliary-specific contrast agents. Advances in radiomics have led to improved tumor-heterogeneity quantification and may assist in liver lesion characterization [9]. In this prospective study, we found that the multi-sequence-based MR radiomics signature, the LI-RADS v2018 and the EASL v2018 demonstrated comparable diagnostic accuracies for HCC in high-risk patients. First, we constructed a multi-sequence-based MR radiomics signature in the training cohort and compared its diagnostic accuracy with EASL and LI-RADS criteria exclusively in the validation cohort to eliminate the effect of overfitting. We found that the AUCs of the radiomics signature were similar to EASL and LI-RADS criteria irrespective of lesion size and the presence of underlying cirrhosis. Notably, in non-cirrhotic patients, the radiomics signature demonstrated 100% specificity, which was significantly higher than both EASL (p = 0.008) and LI-RADS (p = 0.045) criteria, with an excellent AUC of 0.923. Since HBV chronic infection is currently the leading risk factor for HCC in Asian countries [3] and in this context many HCCs can develop without cirrhosis, the radiomics signature may play a pivotal role in increasing the diagnostic specificity and overall accuracy for these patients. However, the radiomics signature was less sensitive than EASL criteria, particularly in cirrhotic livers and for lesions≤2 cm, and these might have been explained by the fact that radiomics signatures constructed in small lesions could not usually provide sufficient biological information in a reliable fashion, as many such small lesions have not developed in the full spectrum [19].
Extracted from clinical radiologic images, radiomics features can indicate the gene expression profiles of HCC [20] and reveal key phenotypic characteristics including tumor growth and vascular invasion [21][22][23]. In our multi-sequence-based radiomics signature, most extracted imaging features belonged to the gray-level cooccurrence matrix (61%, 11/18) and run-length matrix (28%, 5/18) categories. Gray-level co-occurrence matrix parameters can depict tumor texture described by pixel spatial relationships [24]. Run-length matrix features enable evaluation of the complex 3D structures labelled with the same grey level values and have been reported to indicate HCC aggressiveness on Gd-EOB-DTPA-enhanced MR imaging [19]. However, the one-to-one correlations between numerous radiomics features and complex tumor biology processes are still unclear and need to be explored in further studies. Interestingly, we found that the radiomics-clinical model incorporating predictive clinical markers showed no diagnostic benefit compared with the sole radiomics signature. This finding highlighted the central role of imaging examinations in HCC diagnostic workflow and indicated that clinical markers may provide limited information for liver lesion characterization in high-risk patients.
Afterwards, we compared the performances between the EASL and LI-RADS criteria in the combined cohort comprising all patients. Both criteria demonstrated similar diagnostic accuracies irrespective of lesion size and the underlying cirrhosis status, which was in line with the study of Ronot et al. [25]. However, although both EASL and LI-RADS were developed and modified in order to be nearly 100% specific, we reported relatively low specificities of both criteria. These results were not in accordance with previous studies [25][26][27][28], in which the specificities of previous EASL and LI-RADS criteria reached up to 87.6-98.6% [25,26] and 83.6-100% [25][26][27][28], respectively. Therefore, we explored the origins of the restricted specificities on a per-lesion level. Among all false-positive cases, 9 (Fig. 4) were misclassified by both EASL and LI-RADS criteria (cHCC-CCA: n = 3; ICCA: n = 2; neuroendocrine tumor: n = 2; inflammatory pseudotumor: n = 1; angioleiomyolipoma: n = 1), 7 exclusively by EASL criteria (ICCA: n = 5; cHCC-CCA: n = 1; dysplastic nodule: n = 1) and 1 exclusively by LI-RADS criteria (ICCA). 85% (6/7) of the false-positive lesions misdiagnosed exclusively by EASL criteria presented the "targetoid appearance", a target-like imaging morphology resulting from the highly cellular peripheral area surrounding the central fibrotic/ischemic stroma according to LI-RADS criteria [5]. This feature is highly indicative of ICCA, cHCC-CCA and other non-HCC malignancies. In our study, the "targetoid appearance" was significantly more common in non-HCC malignancies (75.0%) than in HCCs (7.5-9.8%) (both p < 0.001), as previously reported [7,29]. Thus, a possible approach to improve the specificity of EASL criteria for HCC is to eliminate the effect of the "targetoid appearance" from the diagnostic algorithm.
Fig. 4 Gd-EOB-DTPA-enhanced MR images of a 47-year-old man with chronic HBV infection and pathologically proven cirrhosis. Unenhanced phase images (a) show a hypointense mass predominantly in segment VI. The mass demonstrates typical arterial phase (b) hyperenhancement (not rim), portal venous phase (c) washout and moderate T2 hyperintensity (e). No targetoid appearance is identified on hepatobiliary phase (d) or diffusion-weighted (f, b = 1200 s/mm2) images. Note the peritumoral corona enhancement pattern in the arterial phase (b, white arrowheads) due to venous drainage from the tumor. The mass was histopathologically proven to be intrahepatic cholangiocarcinoma with hematoxylin-eosin staining at 200× magnification (g). Cytokeratin 19 is positive at 200× magnification with immunohistochemical staining (h). The serum alpha-fetoprotein (4.91 ng/ml) and carbohydrate antigen 19-9 (17.44 U/ml) levels were within the normal range.
However, neither EASL nor LI-RADS criteria demonstrated satisfactory specificities even after eliminating the effect of the "targetoid appearance", particularly in differentiating between HCC and non-HCC malignancies in cirrhotic patients. One possible explanation was that 49% (112/229) of the included lesions were >5 cm. As larger lesions are more likely to demonstrate significant intratumoral heterogeneity and atypical imaging features, differential diagnosis of these tumors can be particularly challenging due to considerable clinical and imaging overlaps. By subgroup analysis, we reported the lowest specificities for both EASL and LI-RADS criteria in nodules >5 cm, which might have affected the overall diagnostic results substantially. Another likely explanation for the limited specificities was that 64% (134/211) of the included patients were cirrhotic, and small duct type ICCAs and cHCC-CCAs can mimic HCCs in cirrhotic patients [30][31][32]. Similarly, Choi et al reported a relatively low specificity (87%) for LI-RADS v2017 in differentiating between HCC, ICCA and cHCC-CCA in HBV-predominant patients [32]. As both EASL and LI-RADS were developed in Western countries, where hepatitis C virus infection is the most important risk factor for HCC [2,4], the diagnostic dilemma caused by these mimickers in chronic HBV patients may not be well addressed by either EASL or LI-RADS criteria.
In summary, the radiomics signature demonstrated an AUC for HCC comparable to those of the v2018 EASL and LI-RADS criteria but significantly higher specificity in non-cirrhotic patients, which may be clinically beneficial for patients with chronic HBV infection. However, its sensitivity was limited and its diagnostic results were difficult to interpret. In addition, radiomics results are prone to overfitting and to the influence of variations in image acquisition and modality [33,34]. Thus, one of the key aspects of applying radiomics in daily clinical practice is the optimal acquisition and integration of curated data in a standardized and reproducible manner.
The EASL criteria are currently the most widely used diagnostic criteria for HCC. They are sensitive for small lesions, easy to apply and do not require the use of advanced imaging techniques. However, their accuracy might be restricted by relatively low specificity. LI-RADS empowers HCC probability assessment by integrating various imaging features with standardized interpretation and reporting. However, the diagnostic performance of LI-RADS was suboptimal in our HBV-predominant cohort. Apart from the geographical discrepancies of HCC between Western and Eastern cohorts, another possible explanation for the suboptimal performance of LI-RADS in this study might be the fact that LI-RADS was predominantly designed for MR using extracellular contrast agents rather than Gd-EOB-DTPA. Therefore, further tailoring of the system in Asian cohorts using Gd-EOB-DTPA is necessary to optimize patient management. In addition, all LI-RADS ancillary features are weighted equally and are optional, but some features (e.g., hepatobiliary phase hypointensity and restricted diffusion) may merit more emphasis or weighting [35]. Notably, combining LR-4 with LR-5 [26,27] might be a possible approach to improve the sensitivity of LI-RADS in Eastern cohorts.
This study has several limitations. First, the consecutive prospective cohort contained limited numbers of non-HCC and small HCC lesions. The small sample sizes of these specific categories of hepatic nodules might introduce significant selection bias into our diagnostic results. This was because only patients with reliable pathological results were included, and many patients with small HCCs or non-HCC lesions were excluded because they were not candidates for surgery (e.g., some non-HCC benign lesions), received alternative therapies (e.g., ablation for small HCCs) or did not have conclusive histopathologic results. A different study design, such as using either histopathologic diagnosis or imaging follow-up as the reference standard, might provide a larger number of these lesions. Second, we did not conduct multicenter external validation for the radiomics models due to substantial variations in MR imaging protocols and surgical procedures across different centers. To mitigate this limitation, we assessed the performance of the radiomics-clinical model in an independent validation cohort at our center. However, further prospective studies with multicenter, large-scale external validation are warranted to assess the reproducibility and generalizability of the reported findings.
Conclusions
In HBV-predominant high-risk patients, the multi-sequence-based MR radiomics signature was significantly more specific for HCC than the v2018 EASL and LI-RADS criteria among non-cirrhotic patients. However, the radiomics signature was less sensitive than v2018 EASL. The overall accuracies of these three diagnostic approaches were comparable. | 2019-12-06T16:30:28.645Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "146d41c654c749ae5c25d4cc1d28a34fbd721992",
"oa_license": "CCBY",
"oa_url": "https://cancerimagingjournal.biomedcentral.com/track/pdf/10.1186/s40644-019-0266-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "146d41c654c749ae5c25d4cc1d28a34fbd721992",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53052030 | pes2o/s2orc | v3-fos-license | LIPIDS AND ISCHEMIA-MODIFIED ALBUMIN IN MILD SUBCLINICAL HYPOTHYROIDISM: RESPONSE TO LEVOTHYROXINE REPLACEMENT
Objective: Subclinical hypothyroidism (SCH) with thyroid-stimulating hormone (TSH) less than 10 μIU/ml is a common finding discovered during routine thyroid function testing. Thyroxine substitution and its benefits in alleviating dyslipidemia and oxidative stress (OXs) markers at this stage are a matter of debate. Methods: This study aimed to investigate the influence of thyroxine substitution on lipid profile and OXs markers in newly diagnosed SCH subjects. The study included a total of 50 newly diagnosed SCH subjects (20 treated and 30 untreated) aged 20-50 years with TSH <10 μIU/ml and free thyroxine (FT4) levels in the normal range. Patients on medications that could cause thyroid hormone dysfunction, diabetes mellitus, and current or previous pregnancy during the last 2 years were excluded from the study. Serum TSH, T3, T4, FT4, anti-thyroid peroxidase antibodies, total cholesterol (TC), high-density lipoprotein cholesterol (HDL), triglycerides (TG), low-density lipoprotein cholesterol (LDL), and ischemia modified albumin (IMA) were determined in all subjects at baseline and after 9 months. Results: After thyroxine replacement, a significant decrease in TSH, LDL, and IMA and an increase in FT4 were observed. The decrease in TC was not statistically evident. There was no significant change in T3, T4, TG, or HDL after treatment. The untreated group showed an insignificant increase only in TSH. Conclusion: Thyroid substitution therapy has a favorable influence on lipid profile and OXs, where it particularly reduced LDL and IMA.
INTRODUCTION
Subclinical hypothyroidism (SCH) is a common finding discovered during routine thyroid function testing with a prevalence reaching up to 10-20% worldwide [1][2][3]. SCH is a well-established clinical entity with biochemical evidence of cardiovascular risk similar to that of overt hypothyroidism in relation to atherogenic lipids and oxidative stress (OXs) markers when thyroid-stimulating hormone (TSH) levels are >10 µIU/ml [4][5][6].
Recent prevalence studies [1][2][3] show that 80-90% of patients with SCH have TSH <10 µIU/ml. Most studies conducted in the past decade did not categorize SCH subjects by degree of TSH elevation when examining cardiovascular impact and management protocols. The data on this subgroup are scanty and evidence in favor of thyroxine therapy is not well established; hence, studies assessing the cardiovascular risk in these newly defined SCH patients are needed. Therefore, this study aimed to investigate the influence of thyroxine substitution on lipids and OXs, based on ischemia-modified albumin (IMA), in newly diagnosed SCH subjects with TSH <10 µIU/ml.
METHODS
A total of 50 newly diagnosed SCH subjects aged 20-50 years with TSH<10 µIU/ml and free thyroxine (FT4) levels in the normal range for a minimum period of 3 months (20 treated and 30 untreated) were followed prospectively for 9 months. Patients on medications that could cause thyroid hormone dysfunction, diabetes mellitus, and current or previous pregnancy in the last 2 years were excluded from the study. L-thyroxine (LT4) was administered at doses ranging from 25-100 µg/day. Written informed consent was taken from all subjects. The study was approved by the Institutional Ethics Committee. Fasting serum TSH, T3, T4, FT4, anti-thyroid peroxidase (anti-TPO) antibodies, total cholesterol (TC), high-density lipoprotein cholesterol (HDL), triglycerides (TG), low-density lipoprotein cholesterol (LDL), IMA, and atherogenic index of plasma (AIP) were determined in all subjects at baseline and after 9 months.
Laboratory parameters
Thyroid function tests were performed by electrochemiluminescence assay. Anti-TPO antibody was measured using enzyme-linked immunosorbent assay (ELISA) kits with a normal range of 0.1-34.0 IU/L. TC, TG, HDL, and direct LDL-C levels were estimated using Roche kits on a fully automated biochemistry analyzer.
IMA estimation
IMA was estimated by colorimetry using the method developed by Bar-Or et al. [7]. The absorbance of the assay mixture was read at 450 nm using ELISA reader. IMA was reported in absorbance units.
AIP was calculated using the formula log (TG/HDL cholesterol), and a value over 0.5 has been proposed as the cutoff point indicating atherogenic risk [8]. The association between TG and HDL cholesterol reflected by this ratio depicts the balance between atherogenic and protective lipoproteins.
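As a small illustration of the calculation described above, the snippet below computes AIP from triglyceride and HDL cholesterol values and flags the proposed 0.5 cutoff. The example values and the use of mmol/L units are assumptions for demonstration only; AIP is conventionally computed from molar concentrations, and the paper does not state the units used.

```python
import math

def atherogenic_index(tg_mmol_l: float, hdl_mmol_l: float) -> float:
    """AIP = log10(TG / HDL-C), with both lipids in the same (molar) units."""
    return math.log10(tg_mmol_l / hdl_mmol_l)

aip = atherogenic_index(tg_mmol_l=2.0, hdl_mmol_l=1.1)
risk = "above" if aip > 0.5 else "below"
print(f"AIP = {aip:.2f} ({risk} the 0.5 atherogenic-risk cutoff)")
```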
Statistical analysis
Comparisons between the two time points were performed using the paired t-test for normally distributed data and the Wilcoxon signed-rank test for non-normally distributed data. A p<0.05 was considered statistically significant.
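A minimal sketch of this testing strategy is shown below, using SciPy. The normality check on the paired differences and the illustrative LDL values are assumptions added here for demonstration; they are not data from the study.

```python
import numpy as np
from scipy import stats

def compare_paired(baseline, follow_up, alpha=0.05):
    """Paired t-test if the differences look normal, otherwise Wilcoxon signed-rank."""
    diff = np.asarray(follow_up, float) - np.asarray(baseline, float)
    if stats.shapiro(diff).pvalue > alpha:
        name, res = "paired t-test", stats.ttest_rel(follow_up, baseline)
    else:
        name, res = "Wilcoxon signed-rank", stats.wilcoxon(follow_up, baseline)
    return name, float(res.statistic), float(res.pvalue)

# Hypothetical LDL values (mg/dL) at baseline and after 9 months of LT4 in 10 subjects.
ldl_pre = [138, 150, 142, 160, 129, 155, 147, 133, 158, 141]
ldl_post = [128, 141, 136, 149, 125, 146, 140, 130, 147, 133]
print(compare_paired(ldl_pre, ldl_post))
```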
RESULTS
Thyroid function tests of both the treated and untreated groups at baseline and after 9 months of follow-up were compared (Table 1). Following thyroxine replacement, there was a significant decrease in TSH and an increase in FT4 and T4, whereas no significant changes were observed in T3 or anti-TPO values. Thyroid autoimmunity was evident in 17 (85%) of the subjects in the treated group. The untreated group showed an increase in TSH and FT4 which was not statistically significant. Lipid and OXs markers were compared in both the treated and untreated groups (Table 2). In the treated group, a significant decrease in LDL was observed. There was no significant change in TC, TG, HDL, or AIP after treatment. IMA, an indicator of OXs, was also reduced after LT4 replacement. The untreated group had no significant alteration in any of the estimated parameters.
DISCUSSION
This study explores the effects of LT4 replacement therapy on OXs levels and lipid profile in patients with SCH. LT4 substitution showed a favorable effect on the lipid profile of the SCH subjects in the present study. Earlier studies have shown inconsistent results. Some studies have reported no change in TC, TG, LDL, and HDL [9,10], whereas others showed a significant decrease in TC and LDL after LT4 replacement [11,12]. The majority of studies [13] report no significant effect on serum HDL and TG, except for a few [14,15]. A significant decrease in LDL and a non-significant decrease in TC after the restoration of the euthyroid state were observed in the present study.
In untreated SCH subjects, there was no significant variation in any of the biochemical parameters except for a further elevation in TSH. The percentage alteration in TSH and FT4 was 23% and 12%, respectively, after 9 months. Karmisholt et al. [16] reported that a 40% increase in TSH and a 15% decrease in FT4 from the initial values can be considered significant in untreated stable SCH with TSH initially up to 12 mU/L in a 1-year follow-up. IMA, as measured using the albumin cobalt binding (ACB) test, is currently the most promising biomarker for early detection of ischemic stress [17]. Recent studies have reported a strong association of IMA with oxidative stress (OXs), and its generation depends on the extent of OXs [18]. Studies have suggested that elevated IMA levels can be a clinically useful marker of oxidative damage to proteins and OXs in hypothyroidism [19]. However, results of IMA in SCH are inconsistent and inconclusive [20][21][22]. In the present study, LT4 replacement in SCH patients caused a significant decrease in IMA levels. Our results are contrary to the findings of Erem et al. [23], wherein serum IMA levels did not decrease significantly after replacement. Ma et al. [19] reported a significant positive association between IMA levels and TPOAb in overt hypothyroid subjects and its reduction after LT4 replacement. Elevated anti-TPO in Hashimoto's thyroiditis is found to be associated with OXs; similarly, hyperlipidemia of any cause is also reported to be associated with an increase in OXs and IMA levels [18,24,25]. In the present study, the coexistence of elevated anti-TPO and high cholesterol levels (total and LDL) was found to be associated with high IMA levels at baseline, which reduced on LT4 replacement.
The mechanism of OXs in hypothyroidism seems to be multifactorial because thyroid hormone (T3) is associated with the regulation of prooxidant and antioxidant balance [26]. Direct effects of thyroid hormones on the regulation of antioxidant enzymes, proteins, and vitamins are the proposed mechanisms associated with increased OXs [27,28]. The plausible explanations for altered OXs markers in SCH are attributed to the direct effects of TSH on OXs and inflammatory processes [29]. In contrast to this hypothesis, other studies supported the concept that OXs itself can alter circulating thyroid function parameters and can trigger the autoimmune process, resulting in an underactive thyroid condition [30,31].
The current study differs from most earlier studies with respect to the TSH cutoff considered and the recruitment of relatively young subjects without pre-existing alterations and other comorbidities at baseline. The small sample size and the lack of placebo control for replacement therapy are the major limitations of this study. Estimation of albumin-adjusted IMA would have provided further insight into the level of IMA. As our study group did not have any other complications except for a slight alteration in TSH, this limitation is not likely to affect the conclusions drawn.
Dyslipidemia in SCH is often associated with altered LDL. OXs in SCH, if not given due attention, can cause oxidation of LDL, resulting in oxidatively modified LDL, a potent proatherosclerotic mediator. The current results call for large-scale prospective studies with more potent markers to elucidate the role of thyroid autoimmunity in lipids and OXs and to define the role of L-T4 therapy on atherogenic lipids and OXs in SCH subjects with mildly elevated TSH.
CONCLUSION
Thyroxine substitution therapy had a favorable influence on lipid profile and OXs, where it significantly reduced both LDL and IMA.
ACKNOWLEDGMENT
We thank all the patients and hospital staff for their cooperation during the study. | 2019-03-17T13:02:23.992Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "37aca3e86fa9aa437e6c8f48cf87bc9a964ea8b3",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/17373/10834",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b5738118c5dee9155fa1c41841a42c6958073d9d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257152883 | pes2o/s2orc | v3-fos-license | Associations between welding fume exposure and neurological function in Japanese male welders and non‐welders
Abstract Objectives There are some studies reporting an association between exposure to manganese (Mn) in welding fumes and neurological dysfunction. This study examined the relationship between Mn exposure and neurological behavior in Japanese male welders and non-welders using biological samples, which to date has not been assessed in Japan. Methods A total of 94 male welders and 95 male non-welders who worked in the same factories were recruited. Blood and urine samples were obtained from all the participants to measure Mn exposure levels. Neurological function tests were also conducted with all participants. Breathing-zone air sampling with a personal sampler was performed for welders only. Results The odds ratios (ORs) for the Working Memory Index (WMI) scores were significantly higher among all participants in the low blood Mn concentration group than those in the high blood Mn concentration group (OR, 2.77; 95% confidence interval [CI], 1.24, 6.19; P = .013). The association of WMI scores and blood Mn levels in welders had the highest OR (OR, 3.73; 95% CI, 1.04, 13.38; P = .043). Although not statistically significant, a mild relationship between WMI scores and blood Mn levels was observed in non-welders (OR, 2.09; 95% CI, 0.63, 6.94; P = .227). Conclusions The results revealed a significant positive relationship between blood Mn and neurological dysfunction in welders. Furthermore, non-welders at the same factories may be secondarily exposed to welding fumes. Further research is needed to clarify this possibility.
| INTRODUCTION
Welding is the joining of metallic components by melting the metals using heat or pressure. 3,4 In Japan, there are approximately 180 000 metal welding and fusion cutting workers, and they account for 0.3% of the total working population. 5 The Japanese government announced a partial revision of the relevant laws and regulations in 2020 with the aim of strengthening measures to prevent welders' health hazards related to Mn and welding fumes. 6 Many reports on the toxicity of Mn relate to respiratory toxicity and neurotoxicity. Poor ventilation in the workplace was associated with decreased lung function among shipyard welders, although there was no relationship between Mn concentrations in the respiratory zone of the workplace and acute decreases in lung function. 7 Respiratory symptoms such as nasal congestion and dry cough have been observed in welders. 8 Studies on neurotoxicity among those exposed to Mn are much more numerous than those on respiratory toxicity. In a study of male workers at Korean shipbuilding companies, there was no relationship between the development of Parkinson's disease and airborne Mn levels. 9 [10][11][12] Park et al. 13 reported that a higher blood level of Mn reduced neurological functions in welders, such as the Working Memory Index (WMI) and verbal intelligence quotient. Among alloy manufacturing plant workers exposed to Mn, the workers in the high-exposure group exhibited poorer performance in the addition, symbol digit, finger tapping, and digit span tests.
In Japan, a few studies have been conducted on neurotoxicity or respiratory toxicity among welders exposed to Mn. A 56-year-old welder who had worked for 30 years and whose serum and urine Mn levels were high developed postural instability and writing clumsiness. 14 One study investigated the relationship between welding fume exposure and lung function among 143 male welders. 15 Another study investigated the relationship between respirable dust exposure and pneumoconiosis by examining 1006 chest X-ray films of workers including shipyard welders. 16 Unfortunately, two of these three studies did not measure metal concentrations in workers' biological samples but examined the strength of the residual magnetic field of externally magnetized lungs or environmental chemical concentrations and biological effects.
Therefore, this study examined the relationship between Mn exposure and neurological behavior in welders and non-welders using biological samples which to date has not been assessed in Japan.
| Study participants
A total of 94 male welders from 7 factories in Japan were included in this study. These included one shipbuilding plant, one automobile manufacturing plant, two construction-materials factories, and three steel plants. Forty-eight workers treated high-strength steel, 29 treated mild steel, 15 treated carbon steel, and two
treated stainless steel as the base material. Three workers were engaged in Tungsten Inert Gas welding using argon (Ar) shielding gas, and 91 in Metal Active Gas (MAG) welding using CO2 shielding gas. Sixteen workers occasionally engaged in MAG welding using CO2 and Ar shielding gas. The welding wire used was Japanese Industrial Standards Z 3312 YGW11, YGW12, and YGW18.
Ninety-five male non-welders who worked in the same factories were recruited as control participants.Recruited non-welders were not engaged in welding work at the time of this study, even if they had previously engaged in welding work.We recruited until the number of participants was almost the same as that of the welders.The nonwelders included 77 clerical workers, 6 manufacturing line workers, 3 product designers, 3 product inspectors, and 6 manufacturing managers.The welders and non-welders were aged 20 years or older and were recruited from April 2021 to June 2022.
| Questionnaire survey
Data on age, smoking and drinking habits, welding exposure-years, current neurological findings (drooling, muscle twitching, numbness and tingling in hands and feet, and excessive sweating), and current respiratory symptoms (cough, shortness of breath, rhinorrhea, nasal congestion, wheezing, and sputum) were obtained through a self-administered questionnaire. Regarding welding exposure-years, we ascertained both current and past welding experience in addition to the self-administered questionnaire. Fatigue symptom self-awareness scores were determined using the Workers' Fatigue Accumulation Self-Assessment Checklist. 17 Hand grip strength was measured in both the dominant and non-dominant hands with a digital grip strength dynamometer (TKK5401; Takei Scientific Instruments Co., Ltd.). After the participant held the grip strength meter in an upright position and adjusted the second joint of the index finger to 90°, the measurement was repeated twice, alternating between the dominant and non-dominant hands. The dominant hand was determined by asking participants if they were right- or left-handed. The mean value was recorded in kilograms. A rest period of at least 10 min was provided between the grip strength and finger tapping measurements to prevent fatigue from affecting the grip strength results. The absence of abnormalities related to the skeletal muscles of the hands and arms was confirmed verbally before measuring grip strength. Two welders responded that there was an abnormality in their non-dominant hands; thus, we assessed the non-dominant hand grip strength of 92 welders.
| Finger tapping
Finger tapping measures the maximum speed of repetitive finger movement.The fingers used are the index and middle fingers of the dominant and non-dominant hands, respectively.Performance is evaluated as the mean number of taps during three 10-s trails for each hand. 18
| Working Memory Index
The Wechsler Adult Intelligence Scale-IV (WAIS-IV) includes the WMI subtest, which comprises digit span forward, digit span backward, digit span sequencing, and arithmetic sections; the index is calculated taking into account the influence of age on these scores. The arithmetic section requires a participant to mentally solve arithmetic word problems, presented orally, within a specific time limit. 19,20 Ninety-two welders underwent the WMI because two welders declined to participate due to lack of time.
Grip strength, finger tapping, and WMI were performed before work to avoid fatigue.
| Blood and urine sampling
The participants provided 8 mL blood and 10 mL urine samples to medical doctors at the end of their working shifts for measurement of metal concentrations. The collected blood and urine were given to the staff of SRL (SRL, Inc.) within 2 h after sampling. Blood cadmium (Cd), nickel (Ni), Mn, chromium (Cr), and lead (Pb) and urine Cd, Mn, and Cr concentrations were determined at SRL. The detection limits in blood were 0.2, 0.2, 0.2, 0.03, and 1.1 μg/dL for Cd, Ni, Mn, Cr, and Pb, respectively, and those in urine were 0.5, 1.1, and 0.3 μg/L for Cd, Mn, and Cr, respectively.
| Breathing air zone sampling using a personal sampler for welders
Breathing-zone air sampling of the welders using a personal sampler was performed by a professional measurer from an external organization (Japan Industrial Safety & Health Association) in basic accordance with the guidelines for personal exposure measurements of chemical substances established by the Japan Society for Occupational Health. 21 The Air Check 2000 sampler (SKC Inc.), NWPS-254 sampler (Shibata), and TF98R PTFE binding filter (Shibata) with a 2.5 L/min air flow rate were used to measure respirable dust concentration and total dust concentration during work. To determine the Mn concentrations of welding fumes, the samples collected on the filters were analyzed after extraction using an Agilent 7800 Quadrupole ICP-MS (Agilent Technologies). Using the air sampling data, the 8-h time-weighted average (8 h-TWA) of respirable dust, the TWA of respirable Mn, and the 8 h-TWA of respirable Mn were calculated.
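For readers unfamiliar with the weighted-average calculations mentioned here, the sketch below shows one common way to compute a task-weighted average and an 8-hour TWA from consecutive personal-sampler measurements. The example concentrations and the convention of treating unsampled shift time as zero exposure are assumptions for illustration; they are not the authors' data and not necessarily their exact computation.

```python
def twa(conc_mg_m3, minutes):
    """Concentration averaged over the sampled time only."""
    weighted = sum(c * t for c, t in zip(conc_mg_m3, minutes))
    return weighted / sum(minutes)

def twa_8h(conc_mg_m3, minutes, shift_min=480):
    """8-hour TWA: exposure averaged over a full 480-minute shift."""
    weighted = sum(c * t for c, t in zip(conc_mg_m3, minutes))
    return weighted / shift_min

# Hypothetical respirable Mn samples from one welder's shift: (mg/m3, minutes sampled).
conc = [0.25, 0.10, 0.40]
mins = [120, 90, 150]
print(f"TWA = {twa(conc, mins):.3f} mg/m3, 8 h-TWA = {twa_8h(conc, mins):.3f} mg/m3")
```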
| Statistical analyses
Two-group comparisons were performed using the Mann-Whitney U test, Fisher's exact test, or multivariable logistic regression analyses. When a metal concentration was not detected, we imputed a value of one-tenth of the corresponding detection limit. All participants were divided into three groups (each containing about a third of the participants) according to the metal concentrations in their blood and urine using statistical software. The tertile 1 (T1) group contained participants with low metal concentrations, the tertile 2 (T2) group contained participants with intermediate metal concentrations, and the tertile 3 (T3) group contained participants with high metal concentrations. The odds ratios (ORs) for neurological dysfunction risk and the corresponding 95% confidence intervals (CIs) were estimated after adjusting for the effects of age, body mass index (BMI), smoking habits, drinking habits, factory, and welding exposure-years. All statistical analyses were performed in STATA (StataCorp LLC); statistical significance was set at P < .05 (two-sided).
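The sketch below illustrates the kind of analysis described in this section: tertile grouping of blood Mn followed by a covariate-adjusted logistic regression, using pandas and statsmodels rather than STATA. The file name, column names, and the dichotomization of WMI at the sample mean are assumptions introduced for illustration; the paper only states that outcomes were analyzed with the listed covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant with these hypothetical columns.
df = pd.read_csv("participants.csv")  # blood_mn, wmi, age, bmi, smoker, drinker, factory, welding_years

# Split blood Mn into tertiles: T1 = low, T2 = intermediate, T3 = high.
df["mn_tertile"] = pd.qcut(df["blood_mn"], q=3, labels=["T1", "T2", "T3"])

# Binary outcome: WMI below the sample mean (this dichotomization is assumed here).
df["low_wmi"] = (df["wmi"] < df["wmi"].mean()).astype(int)

# Multivariable logistic regression adjusted for the covariates named in the text.
model = smf.logit(
    "low_wmi ~ C(mn_tertile, Treatment('T1')) + age + bmi + C(smoker) + C(drinker)"
    " + C(factory) + welding_years",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # odds ratios relative to the T1 (low) reference group
```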
| RESULTS
Table 1 presents the study population characteristics. The values are mean (standard deviation) or number (%). The welders had stronger grips, fewer finger taps, and lower WMI scores than non-welders. Sixteen of the non-welders had previous welding experience.
Table 2 shows the distribution of metal concentrations in non-welders and welders. Urine Cd, blood and urine Mn, urine Cr, and blood Pb concentrations of the welders were high in our study (urine Cd, P < .001; blood and urine Mn, P < .001 and P < .001, respectively; urine Cr, P < .001; blood Pb, P = .016). The percentage of participants with concentrations below the detection limits in the biological samples was 89% for blood Cd, 58% for urine Cd, 79% for blood Ni, 0% for blood Mn, 91% for urine Mn, 93% for blood Cr, 3% for urine Cr, and 24% for blood Pb.
Tables 3-5 show the results of the multivariable logistic analyses estimating the risk of neurological dysfunction in terms of grip strength reduction, number of finger taps, and WMI scores. There was no significant relationship between blood Mn concentrations and either grip strength or finger tapping for the dominant and non-dominant hands (Tables 3 and 4). The OR for lower WMI scores was significantly higher among all participants in the high blood Mn group (T3) than in the low blood Mn group (T1) (OR, 2.77; 95% CI, 1.24, 6.19; P = .013).
Although not statistically significant, a mild relationship was observed between low WMI scores and high blood Mn levels in non-welders (OR, 2.09; 95% CI, 0.63-6.94; P = .227). The association of WMI scores and blood Mn levels in welders had the highest OR, and the relationship was statistically significant (OR, 3.73; 95% CI, 1.04, 13.38; P = .043) (Table 5). In addition to blood Mn, urine Cr and blood Pb were detectable in many participants, and a logistic analysis was conducted for these two metal concentrations. However, there was no statistically significant relationship between WMI and urine Cr or blood Pb concentrations (Tables S1 and S2).
The median (min, max) individual sampler results (mg/m3) were as follows: 8 h-TWA of respirable dust, 1.02 (0.01, 10.24); TWA of respirable Mn, 0.189 (0.0001, 2.818); and 8 h-TWA of respirable Mn, 0.094 (0.00007, 1.538). Figure 1 shows the distribution of the 8 h-TWA of respirable dust, the TWA of respirable Mn, and the 8 h-TWA of respirable Mn by blood Mn concentration among welders. Across all individual sampler results, the respirable dust and Mn concentrations were higher in the group with high blood Mn concentrations (8 h-TWA of respirable dust, P < .001; TWA of respirable Mn, P < .001; 8 h-TWA of respirable Mn, P < .001, by covariance analysis after adjusting for the effects of age, BMI, smoking habits, drinking habits, factory, and welding exposure-years).
| DISCUSSION
Compared to non-welders, welders had higher concentrations of urine Cd, blood Mn, urine Mn, urine Cr, and blood Pb.Lower WMI scores were observed in the high Mn blood concentration group (T3) than in the low Mn blood concentration group (T1) in welders.Although not statistically significant, a mild relationship between WMI scores and blood Mn concentrations was observed in non-welders.
Cd, Ni, Mn, Cr, and Pb are well-known metals contained in welding fumes. 4,22,23 These metals are found in the blood and urine of welders due to occupational exposure. 23,24 In this study, the blood Ni concentration was not greater in welders than in non-welders, and there was no difference between them. Ni is a metal often encountered in welding that uses stainless steel as a base material. 24 Since the present study was conducted in factories using high-strength, mild, or carbon steel, in which iron is the base metal, it is possible that Ni was not detected at appreciable levels in the welders of this study.
In this study, we found a relationship between a high blood Mn concentration and a lower WMI score in welders. Similar to previous studies, there was an association between blood or urine Mn concentrations and WAIS-related tests in welders. 13,18 One study reported a mean blood Mn of 9.6 μg/L (range 5.1-15.3) in welders, whose WMI scores were reduced by exposure after considering duration of employment. 13 Another study demonstrated that the blood Mn and urine Mn of workers exposed to Mn ranged from 4 to 18 μg/L and from 0.7 to 7 μg/L, respectively. 18 The high-concentration exposure group had reduced finger tapping and digit span scores. 18 Our results showed similar or slightly higher values compared with these Mn concentrations. Therefore, it is reasonable to consider whether there is a relationship between blood Mn levels and neurological dysfunction in the welders of our study.
Although not statistically significant, a mild relationship between WMI scores and blood Mn concentrations was observed in non-welders, who were presumably not directly occupationally exposed to high Mn levels. The blood Mn concentration in adult males in Japan was 1.3 μg/dL (median) in a previous report using general population data; 25 therefore, the blood Mn concentration of the non-welders was not high in comparison. However, the effect of Mn exposure on neuronal function cannot be clarified using its relationship with blood Mn concentrations at any single time point. 13 Mn concentrations in the environment, duration of Mn exposure, and use of personal protective equipment (PPE) to prevent Mn exposure are important factors in determining whether Mn exposure affects neurological function. 13 In an Italian study targeting residents exposed to Mn, the Mn dust concentration near ferroalloy factories was high, and the residents living near the factories had a high incidence of Parkinson's disease. 26 In a study of residents near an Mn manufacturing plant, the group with higher blood Mn levels (median 7.5 μg/L) showed decreased neurological function, such as poorer learning and recall. 27 In our study, the non-welders had been working at their current factory for 15 years (median) and did not wear PPE during work. Depending on the job, non-welders also move in and out of the factory. Considering these facts, the non-welders, like the residents near factories in previous reports, were in an environment where they were likely to be secondarily exposed to Mn from the factories for a long period of time. Our findings suggest that secondary exposure to Mn from the factories may have decreased the WMI of the non-welders.
In addition to Mn, the working memory of the participants in our study may have been affected by chemical factors, such as metals and chemical substances [28][29][30], and by other factors, such as occupation, task difficulty, fatigue, stress, and sleep quality [31][32][33][34][35]. In this study, the participants worked with high-strength, mild, or carbon steel, with iron as the base material. [37][38] The aluminum contained in the wire flux affects memory by modifying hippocampal calcium signal pathways. 28 Long-term exposure to carbon monoxide generated by carbon dioxide gas arc welding may cause health problems, such as deterioration of memory in welders. 29,30 However, our study did not examine the concentrations of these chemical factors in biological or environmental samples. We analyzed the relationship between fatigue symptom self-awareness scores, occupations, and WMI. There was no statistically significant difference between them and WMI in our study. However, even among welders in the same factory, work processes and task difficulty differ widely at the individual level. These differences might possibly be related to WMI. In addition, we did not obtain data on stress levels, sleep quality, or working patterns, such as night shifts. In the future, before concluding that there is
a relationship between Mn exposure and WMI, it is necessary to examine the various related factors and to verify carefully whether such a relationship exists. Blood Mn concentration increased by approximately 1 μg/L for each mg/m3 × month of (unprotected) cumulative exposure in welders. 13 In the general population, blood Mn levels are affected by diet, especially tea, nuts, and vegetables. 39 According to Figure 1, there was a relationship between the blood Mn concentration and the Mn in the breathing zone of the welders. Therefore, the source of the welders' blood Mn was likely to be Mn exposure in the factories. However, there was no significant difference in blood Mn concentration according to whether participants liked or disliked vegetables (P = .934) (data not shown). Although this remains unclear because we did not conduct a detailed dietary survey, it is possible that diet was not the main source of Mn exposure in the participants.
Under the Ordinance on Prevention of Hazards Due to Specified Chemical Substances, doctors can order the measurement of Mn concentration in urine or other biological samples of welders for secondary health checkups. However, the main excretion route of Mn is through the liver and feces, and excretion into the urine is small. 40 Our study also found that few participants had detectable urine Mn, and participants with high blood Mn did not necessarily have detectable high urine Mn levels. Blood Mn has been reported to correlate more sensitively with neurological findings than urine Mn; 18 thus, when examining the relationship between Mn and biological effects, the blood concentration should be measured rather than the urine concentration.
This study has several limitations.First, in this study, we focused solely on Mn as the metal in welding fumes that affected WMI.There is a need to measure other factors, which could be possibly related to WMI.Second, we did not measure the breathing air zone samples by personal sampler for non-welders.Factory workers who work near a welding site may be secondarily exposed to welding fumes from the welding site.In the future, it will be necessary to conduct studies in which personal samplers are obtained for welders and non-welders working near the welding site.Third, we did not conduct detailed dietary surveys, thus we could not accurately determine whether there was an effect of Mn oral exposure from food.Information on dietary Mn concentrations is necessary to determine whether occupational exposure is involved.Fourth, our study involved a small sample size.Although not statistically significant, higher Mn concentrations may be associated with the lower grip strength of non-welders.Since skeletal muscle abnormalities affect grip strength measurements, we verbally confirmed the absence of skeletal muscle abnormalities among the study subjects before measuring grip strength in this study.
However, it is desirable to confirm the absence of skeletal muscle abnormalities with a specialist.In the future, it is necessary to increase the sample size and investigate the effects of secondary exposure in detail.
| CONCLUSION
There was a significant relationship between blood Mn concentrations and lower WMI scores in welders.Furthermore, non-welders at the same factories may be secondarily exposed to welding fumes.Further research is needed to clarify this possibility.
DISCLOSURE
Approval of the research protocol: This study was approved by the Institutional Ethics Committee at the University of Occupational and Environmental Health in 2020 (R2-011).Informed consent: Written informed consent was obtained from all participants.Registry and the registration no. of the study/trial: N/A.Animal studies: N/A.Conflict of interest statement: The authors declare that there is no conflict of interest.
TABLE 1 Study population characteristics. Note: Values are mean (standard deviation) or number (%). Neurological findings comprised drooling, muscle twitching, numbness and tingling in hands and feet, and excessive sweating. P values were obtained using the Mann-Whitney U test or Fisher's exact test; the number of welders was 92 for some measures.
TABLE 2 Distribution of metal concentrations by non-welders and welders. Note: When a metal concentration was not detected, 1/10 of the detection limit was recorded. Abbreviations: Cd, cadmium; Cr, chromium; Mn, manganese; Ni, nickel; Pb, lead. P values were obtained using the Mann-Whitney U test.
TABLE 3 Results of the multivariable analysis for the relationships between hand grip strength and blood manganese (Mn) concentrations. P values were obtained using multivariable logistic regression analysis adjusted for age, body mass index, smoking habits, drinking habits, factory, and welding exposure-years.
TABLE 4 Results of the multivariable analysis for the relationships between finger tapping and blood Mn concentrations, adjusted for the same covariates.
TABLE 5 Results of the multivariable analysis for the relationships between WMI and blood Mn concentrations, adjusted for the same covariates. Abbreviations: CI, confidence interval; OR, odds ratio; T, tertile; WMI, Working Memory Index. The mean WMI is 92.9; bold values indicate P < 0.05.
FIGURE 1 Distribution of individual sampler results of welders by blood manganese (Mn) concentration sub-groups (tertiles). Values are median (25th and 75th percentiles) in mg/m3. | 2023-02-25T06:16:23.346Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "71498ccc50aca0beddbef8cf5a90111e2e168bb3",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/1348-9585.12393",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "323abdc6333ad78afbadb21f18177fd8b1593eef",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271127000 | pes2o/s2orc | v3-fos-license | The Effects of a School-Based Physical Activity Program on Physical Fitness in Egyptian Children: A Pilot Study from the DELICIOUS Project
Background: Ensuring the physical fitness of Egyptian children is of paramount importance to their overall well-being, given the unique socio-cultural and educational barriers they face that may hinder their active participation. As part of the DELICIOUS project, the “Be Fit Program” aims to increase the level of physical fitness among Egyptian school-aged children. This study explores the effectiveness of a structured, six-week physical activity (PA) program in improving various facets of physical fitness in children, including body composition, speed, coordination, muscular strength, and cardiovascular endurance. With the increasing prevalence of sedentary lifestyles, such efforts are imperative to improve overall health outcomes. Methods: A cohort of 125 children, aged 8.50 to 12.25 y (mean age 10.19 ± 1.03 y), participated in the study. Their body composition, speed, coordination, strength, and aerobic fitness were assessed before and after the Be Fit Program using the revised International Physical Performance Test Profile. Paired t-tests were used to detect changes between the pre- and post-tests. Results: Following the six-week intervention, statistical analyses revealed significant improvements in coordination and lower body strength (p < 0.01). Aerobic endurance showed marginal improvements, approaching statistical significance (p = 0.06). Conversely, there were no statistically significant changes in body composition, speed, or upper body strength (p > 0.05). Conclusions: The study confirms that tailored, non-competitive physical activities can positively influence specific fitness components in Egyptian children. However, achieving holistic improvements across all targeted fitness domains may require further strategic adjustments or a longer program duration. This pilot study underscores the importance of culturally tailored, school-based PA programs and highlights the continued need for research and program refinement to comprehensively improve children’s fitness in the Egyptian context.
Introduction
The physical fitness in children is increasingly recognized as a cornerstone of public health, with profound implications for long-term well-being and development [1,2].Physical activity (PA) in childhood not only provides immediate health benefits but also sets the stage for a healthier lifestyle in adulthood [3,4].From a physical perspective, regular PA improves cardiovascular health [1], enhances muscle and bone strength [5], betters body composition [6], and elevates overall fitness levels [7].Socially, participation in PA promotes teamwork [8], boosts social skills [9], and strengthens peer relationships [10].Psychologically, it is associated with improvements in mood, a reduction in symptoms of depression and anxiety, increased self-esteem, and better cognitive function [11].The diverse benefits highlight the critical importance of integrating regular PA into the daily routines of children.However, there is a worrying global trend of declining PA and fitness hypertrophic changes in muscle fibers, processes that are generally more gradual [29,30].Furthermore, the rate of muscular development varies considerably between individuals, particularly in children who are at different stages of physical maturation and development [31].
This study aims to evaluate the influence of the Be Fit Program, a six-week PA intervention, on the body composition (BMI) and physical fitness levels of Egyptian children. Several fitness components were assessed using the revised International Physical Performance Test Profile (IPPTP) 6-18. The primary hypothesis guiding this research is that participation in the Be Fit Program, a tailored and non-competitive PA intervention, will lead to notable improvements in various components of physical fitness in children, with certain aspects showing more pronounced improvement than others. This hypothesis is based on the premise that tailored, engaging physical activities can have a greater impact on specific fitness domains. The methodological framework, which includes pre- and post-intervention measurements, allows for a comprehensive analysis of the program's influence on different facets of children's physical fitness and body mass index (BMI). This study promises to provide valuable insights for public health policy and to serve as a blueprint for future interventions aimed at increasing physical fitness in the pediatric population, thus charting a course towards a healthier future for Egyptian children.
Participants
A total of 137 children were initially recruited from local schools in Assiut governorate, Egypt. One hundred twenty-five participants (mean age = 10.19 y, SD = 1.03) completed all required measures. Inclusion criteria required the enrolment of children between 9 and 11 y of age who had no health problems that could hinder or jeopardize their participation in a PA program. In addition, participants with acute or chronic medical conditions that limit PA were excluded to ensure the safety and suitability of the Be Fit Program for all involved. To ensure a balanced representation, both boys and girls were included to reflect the gender diversity of the school population. Prior to commencement, informed consent was obtained from each child's parents or guardian in accordance with strict ethical protocols. All participants and their guardians duly completed and signed informed assent/consent forms in accordance with the Institutional Review Board of Assiut University. In particular, the children selected were typically developing, with no reported history of chronic disease or physical disability that might affect their ability to participate in the prescribed physical activities outlined in the Be Fit Program.
Analyses were conducted on a refined sample of 125 participants after excluding individuals who (1) lacked data for both pre- and post-measures (n = 8) and (2) attended fewer than 75% of the PA program sessions (n = 4). Prior to data analysis, a comprehensive sample size calculation was performed using G*Power 3.1.9.7 [32]. This calculation determined that a minimum of 122 participants would provide sufficient statistical power to detect small to moderate effect sizes (d = 0.3) [33], assuming a desired statistical power = 0.95 and alpha = 0.05. Therefore, our final sample size of 125 was considered adequate to achieve the required power. The demographic characteristics of the final sample are shown in Table 1.
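The a priori calculation reported above can be approximated in code. The sketch below uses the paired/one-sample t-test power solver in statsmodels with d = 0.3, alpha = 0.05, and power = 0.95; the one-sided alternative is an assumption made here because it yields a required n of roughly 122, whereas a two-sided test would require about 147 participants. The exact G*Power settings used by the authors are not stated, so this is only an approximation.

```python
from statsmodels.stats.power import TTestPower

# Paired design: power analysis on the pre/post difference scores.
solver = TTestPower()
n_one_sided = solver.solve_power(effect_size=0.3, alpha=0.05, power=0.95,
                                 alternative="larger")
n_two_sided = solver.solve_power(effect_size=0.3, alpha=0.05, power=0.95,
                                 alternative="two-sided")
print(f"one-sided: n = {n_one_sided:.0f}, two-sided: n = {n_two_sided:.0f}")
```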
Be Fit Program
The Be Fit Program, derived from the BOKS Elementary PA Plans [21], was a carefully structured six-week PA program.This initiative included a comprehensive range of activity plans tailored to improve different functional movement skills.Each session consisted of a carefully designed sequence, including warm-up routines, skill introductions, movementrelated activities and engaging games designed to reinforce the skills introduced.Sessions were held three times a week, with each session lasting between 40 and 45 min.To ensure seamless delivery and supervision, at least one researcher was present during all sessions.Two senior undergraduate students specializing in physical education carefully curated the activities and actively engaged with the children, providing constant encouragement and motivation.The program's diverse repertoire of exercises aimed to improve motor endurance, strength, speed, and coordination, and included activities such as running, jumping, and various strength-building exercises (for more detailed information on the BOKS Elementary PA Plans, see https://trainerhub.activekids.org/s/,accessed on 25 May 2024).Central to the Be Fit Program was its tailored approach.Initial assessments of each child's baseline fitness level facilitated the tailoring of activities to their individual fitness abilities and skills.This individualized approach ensured that the exercises provided an appropriate level of challenge while remaining achievable, thereby encouraging sustained engagement and a sense of achievement among participants.In addition, the Be Fit Program supported the non-competitive ethos inherent in the BOKS model.The program prioritized the enjoyment of PA over peer competition, with a deliberate emphasis on individual progress and goals.This philosophy was underpinned by a culture of celebrating personal achievement, with trained coaches, including postgraduate students from the Faculty of Physical Education, playing a key role.By recognizing and celebrating individual milestones, the program cultivated a supportive and nurturing environment in which children were encouraged to participate enthusiastically, regardless of their initial fitness levels.
Physical Fitness
IPPTP is a robust and validated instrument tailored to the assessment of physical fitness, carefully designed for practical use [34].Based on the methodologies of Bös and Mechling [35] and the German Motor Test 6-18 [36], this tool comprises eight test items that comprehensively cover the five fundamental dimensions of physical fitness: endurance, strength, speed, coordination, and flexibility.In our study, we used six fitness tests that were carefully designed to assess different aspects of physical fitness.The 20 m dash tests speed by measuring the time (in milliseconds) taken to sprint 20 m using a stopwatch.The sideways jumping test assesses agility and coordination by recording the number of sideways jumps completed within 15 s.Upper body strength and endurance are assessed by the push-up test, which records the total number of push-ups performed in 40 s.Similarly, the sit-ups test measures core strength and endurance by counting the number of sit-ups performed at the same time.The standing long jump test assesses lower body strength by measuring the distance jumped from a standing position in centimeters.Finally, the 6 min run test measures cardiovascular endurance by recording the total distance covered in six minutes, measured in meters.Each test is conducted under standardized conditions to ensure consistency and reliability in assessing the main dimensions of physical fitness, which include endurance, strength, speed, coordination, and flexibility.In addition to these test items, key constitutional data such as height, weight, and BMI were carefully recorded using a FullMedi scale (Full Medical Co., Ltd., Hefei, China).Table 1 provides a brief overview of the test items used in our study.For further explanation of these test items, the reader is referred to the available manuals [34,36].
Procedure
Informed consent was carefully obtained from the parents and children before the study began. Potential participants were thoroughly informed of the aims and procedures of the research prior to enrolment in the Be Fit Program. Each participant received a comprehensive information packet delineating the research protocols, inclusive of parental consent and child assent forms. The program was conducted after school hours on school premises and was supervised by two experienced postgraduate physical education students. At the start of the program, baseline data were carefully collected, including age, height, and weight, and BMI was calculated (body weight [kg]/body height [m]²). Both before and after the Be Fit Program, participants underwent the IPPTP 6-18, with explicit encouragement to exert maximum effort during these assessments to ensure an accurate assessment of their physical fitness gains.
Statistical Analysis
Summary statistics were used to report fitness and body composition. Normality of the data distribution was confirmed using the Shapiro-Wilk test. Mean differences in the fitness tests (including the 20 m dash, standing long jump, jumping sideways, push-ups, sit-ups, and 6 min run) and body composition variables between pre- and post-intervention were assessed using two-tailed paired t-tests. The significance level was set at p < 0.05. All statistical analyses were performed using SPSS version 25.0 (IBM Corp., Armonk, NY, USA).
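A compact sketch of the pre/post comparison described here is given below, with Cohen's d for paired data (mean difference divided by the SD of the differences) added alongside the t-test. The numeric values are hypothetical and the effect-size step is an illustrative addition rather than part of the reported analysis.

```python
import numpy as np
from scipy import stats

def paired_test_with_effect(pre, post):
    """Two-tailed paired t-test plus Cohen's d_z for paired data."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    diff = post - pre
    t_stat, p_value = stats.ttest_rel(post, pre)
    d_z = diff.mean() / diff.std(ddof=1)
    return t_stat, p_value, d_z

# Hypothetical standing long jump distances (cm) before and after the six weeks.
pre = [104, 110, 98, 121, 107, 95, 113, 102]
post = [106, 111, 99, 124, 108, 97, 114, 104]
t_stat, p_value, d_z = paired_test_with_effect(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d_z = {d_z:.2f}")
```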
Fitness Test Performance
The results of the fitness tests conducted before and after the Be Fit Program are summarized in Table 3. Paired t-tests showed no significant changes in BMI, 20 m dash, push-ups, and sit-ups (p > 0.05). However, statistically significant improvements were observed in jumping sideways, with scores increasing from 21.34 ± 3.36 to 21.71 ± 3.23 counts (p = 0.02), and in the standing long jump, with distances increasing from 107.42 ± 13.46 to 108.70 ± 12.85 cm (p = 0.03). There was also a marginally significant improvement in the 6 min run test, with the distance covered increasing from 943.88 ± 195.19 to 959.73 ± 234.02 m (p = 0.06).
Discussion
This pilot study aimed to evaluate the effectiveness of the Be Fit Program, an individualized school-based PA initiative, in improving the physical fitness of Egyptian children.Our results show notable improvements in specific fitness components, particularly coordination (as evidenced by improved performance in the lateral jump test), and lower body strength (as evidenced by improvements in the standing long jump).However, the program did not have significant effects on other fitness parameters such as speed, upper body strength, and overall body composition (as indicated by BMI).These results suggest that while the program was effective in targeting certain facets of physical fitness, further interventions of longer duration or greater intensity may be required to achieve substantial improvements in other areas.
Our results confirmed our initial expectations, showing varying degrees of responsiveness between different fitness components following a 6-week exercise intervention. In particular, we observed no significant changes in the dash, push-up, and sit-up tests. However, there were marginal improvements in aerobic fitness and statistically significant improvements in both the side jump and standing long jump tests. The data from our study, which highlight the differential response of different fitness components to the 6-week PA intervention, provide valuable insights into the dynamics of physical fitness development in Egyptian children. The significant improvements observed in coordination and lower body strength are likely due to the Be Fit Program's emphasis on functional movement and motor skill development. These findings are consistent with previous research suggesting that motor skill-related fitness components can be effectively improved through targeted PA in a relatively short period of time, due to the rapid adaptability of the neuromuscular system in children [37]. Given the ongoing neurological maturation of children and the plasticity of their developing neuromuscular systems [38], activities that focus on coordination and agility can rapidly improve motor skills, balance, and coordination, crucial facets of childhood development [39]. The increased performance in both the side jump and standing long jump tests underscores the success of the intervention in these specific areas, highlighting the responsiveness of neuromuscular coordination and agility to the Be Fit Program. These improvements potentially contribute to improved overall motor performance and physical well-being. The observed improvements in coordination and explosive strength underline the effectiveness of the Be Fit Program over 6 weeks in promoting these specific areas of physical fitness. This may be due to the program's targeted activities to improve motor skills such as jumping and dynamic movements, which play a key role in children's physical development.
The slight nonsignificant enhancement observed in aerobic endurance, as assessed by the 6 min run test, suggests that a six-week period may not be sufficient to induce substantial changes in aerobic capacity, a parameter that typically requires sustained and progressive overload to improve [40,41].However, this marginal improvement may also indicate the relatively rapid adaptability of children's cardiovascular and respiratory systems to aerobic exercise.Aerobic fitness, which is characterized by the body's efficiency in using oxygen, can improve through the increased heart and lung capacity and increased blood flow, even within a relatively short intervention period such as six weeks [42].This is particularly true in children, whose bodies are more adaptable and respond more quickly to aerobic stimuli [24,[43][44][45].Although the improvement in aerobic endurance did not reach statistical significance, this observation highlights the need for either a longer intervention period or an increased aerobic component within the program to achieve more pronounced aerobic improvements.
The 6-week PA intervention did not result in changes in body composition, consistent with observations that such interventions often have minimal impact on reducing childhood obesity [46].Similarly, the association between physical fitness and BMI in children has been inconsistent [47].Conversely, sustained improvements in cardiovascular fitness and BMI have been reported following long-term PA interventions [48][49][50][51][52].In addition, one study documented a reduction in BMI following a 14-week intervention combining dietary, behavioral, and PA components [53].The lack of observed changes in BMI in our study may be due to baseline weight status, as PA interventions tend to be more effective in reducing BMI in obese children compared to normal weight or overweight children [52].In addition, factors such as age, gender, and pre-intervention PA levels have been identified as important determinants of intervention effectiveness [54].Younger children, in particular, have a greater degree of metabolic flexibility, leading to more pronounced and rapid changes in response to PA.This phenomenon is particularly pronounced in prepubertal children, whose bodies and metabolisms adapt to exercise more efficiently than those who are in or have completed puberty [55].In girls, the onset of puberty, which occurs earlier than in boys, leads to hormonal changes that affect fat distribution, muscle growth, and metabolism.In particular, significant improvements in BMI with PA have been documented in prepubertal girls due to their increased responsiveness to exercise before the onset of major hormonal changes [56,57].In conclusion, the effectiveness of PA interventions in modifying body composition, particularly BMI, is influenced by a complex interplay of factors, including intervention duration, participants' baseline weight status, and individual characteristics such as age, sex, and initial PA levels.Long-term, comprehensive interventions that take these variables into account hold promise for improving child health and combating obesity.
The lack of significant changes in speed and upper body strength within the Be Fit Program highlights the need for longer intervention periods and targeted training modalities.These fitness components rely on gradual physiological adaptations, such as muscle hypertrophy and metabolic improvements, which require longer durations and specific training stimuli to manifest noticeable improvements.In essence, the duration and intensity of the program may have been insufficient to induce changes in these specific areas of fitness.This observation is consistent with existing literature suggesting that short-term interventions may not adequately address certain fitness parameters in children [58].The development of muscular strength and endurance typically requires progressive overload and sustained training over time [59,60], particularly in pediatric populations where muscular growth is influenced by growth and maturation stages [61].Children's muscles respond differently to strength training compared to adults [62], so significant gains in strength and endurance may require durations longer than six weeks, especially without specific emphasis on these areas.
In addition, unique cultural and environmental factors prevalent among Egyptian children, such as high academic pressure and limited opportunities for PA outside of school, are likely to have influenced the results of the Be Fit Program.In Egypt, the emphasis on academic achievement often takes precedence over physical education, which may encourage sedentary lifestyles among children.Additionally, the lack of accessible and safe recreational spaces limits the range of physical activities available, restricting opportunities to engage in the variety of exercises essential for the development of different fitness components [63,64].To effectively address these challenges, interventions such as the Be Fit Program need to include extending the duration and intensity of sessions, incorporating targeted exercises to address key fitness components, and better integrating PA into community spaces and school curricula to alleviate academic constraints.For instance, introducing after-school sports programs or weekend community fitness events can provide additional PA opportunities.Collaborating with local schools to integrate short, frequent exercise breaks during the school day can help balance academic and physical activity demands.Furthermore, engaging community leaders and parents to raise awareness of the benefits of physical fitness and advocating for policy changes that prioritize physical education in schools are essential steps [65].Building partnerships with local community centers or sports clubs can also enhance the accessibility and appeal of PA for children.By adopting this strategy, such programs can be more effective in promoting children's sustainable physical development and fitness, ultimately cultivating a healthier and more active generation.
In summary, this pilot study evaluated the effectiveness of the "Be Fit Program", a school-based PA initiative aimed at improving the fitness levels of Egyptian children.The study found significant improvements in coordination and lower body strength, as evidenced by improved performance in the jumping sideways and standing long jump tests.However, other fitness components such as speed, upper body strength, and BMI did not show significant changes.While the program marginally improved aerobic endurance, it did not produce significant improvements in aerobic capacity, suggesting a potential need for a longer or more intensive intervention period.Cultural and environmental factors unique to Egyptian children, including high academic pressure and limited opportunities for PA, may have influenced these results.The lack of change in speed, upper body, and core strength suggests that short-term interventions may not be sufficiently influential in these areas.This highlights the importance of tailored interventions to effectively target specific fitness components in children.However, it is important to recognize the limitations of the study, particularly the lack of a control group.Therefore, the results should be interpreted with caution and future studies should consider including a control group for a more robust analysis.Despite its limitations, this pilot study highlights the potential of school-based programs such as the Be Fit Program to promote meaningful improvements in children's physical fitness within a limited timeframe.To achieve more comprehensive fitness improvements, future iterations of the program could integrate a wider range of activities focusing on upper body and core strength, while also implementing strategies to encourage greater and sustained participation in PA outside of structured sessions.Further research should explore the longitudinal effects of continued participation in such programs, possibly incorporating modifications such as longer duration or more frequent sessions per week.Moreover, qualitative assessments of participants' motivation, enjoyment, and overall engagement with the program could provide deeper insights into how these factors influence the effectiveness of school-based PA initiatives.Additionally, future studies should comprehensively consider other critical elements like sleep and nutrition to better understand their combined impact on fostering a healthy future for children and adolescents.Lastly, although socioeconomic background could potentially influence participation in PA, our study did not include a measure of socioeconomic status.However, given that all participating children were enrolled in a national private school and predominantly belonged to the middle class, it was difficult to analyze its impact.Nevertheless, future research is encouraged to incorporate considerations of socioeconomic background when examining the effects of PA programs in developing countries, as it may provide valuable insights into how these factors influence program outcomes.
Conclusions
In conclusion, the DELICIOUS pilot study on the implementation of the Be Fit Program sheds light on the potential effectiveness of school-based PA interventions in improving specific dimensions of physical fitness in children.The program resulted in significant improvements in coordination and strength, accompanied by marginal improvements in aerobic endurance among Egyptian children.However, the lack of significant changes in speed of action, strength endurance, and body composition suggests the need for program adaptations or longer intervention periods to achieve comprehensive fitness improvements.Furthermore, the limited impact on specific fitness components may reflect broader cultural and environmental factors that influence children's overall PA levels and health, including sedentary lifestyles, academic pressures, and limited access to recreational facilities.This study highlights the importance of tailored, non-competitive PA in school settings and provides a basis for future research and program development aimed at promoting holistic physical fitness in children.The findings provide a valuable contribution to public health initiatives targeting children's health and well-being and advocate the introduction of personalized PA programs in educational settings.By emphasizing individualized approaches to physical fitness, such initiatives can better address the diverse needs and abilities of children, ultimately promoting a healthier and more active generation.
Table 1 .
Test items of the IPPTP-R.
Table 3 .
Paired t-test analysis of mean differences in fitness tests between pre-and post-intervention assessments, including means (SDs). | 2024-07-14T15:06:07.699Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "cd65db15e4131714fcd5500e5c17061356717968",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/children11070842",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "988fbf22c1fef4e79076d51b694f40334c89e25a",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271878776 | pes2o/s2orc | v3-fos-license | The cerebellum computes frequency dynamics for motions with numerical precision and cross-individual uniformity
Cross-individual variability is considered the essence of biology, preventing precise mathematical descriptions of biological motion1–7 like the physics law of motion. Here we report that the cerebellum shapes motor kinematics by encoding dynamic motor frequencies with remarkable numerical precision and cross-individual uniformity. Using in-vivo electrophysiology and optogenetics in mice, we confirmed that deep cerebellar neurons encoded frequencies via populational tuning of neuronal firing probabilities, creating cerebellar oscillations and motions with matched frequencies. The mechanism was consistently presented in self-generated rhythmic and non-rhythmic motions triggered by a vibrational platform, or skilled tongue movements of licking in all tested mice with cross-individual uniformity. The precision and uniformity allowed us to engineer complex motor kinematics with designed frequencies. We further validated the frequency-coding function of the human cerebellum using cerebellar electroencephalography recordings and alternating-current stimulation during voluntary tapping tasks. Our findings reveal a cerebellar algorithm for motor kinematics with precision and uniformity, the mathematical foundation for brain-computer interface for motor control.
Introduction
Individual variability has long been a defining yet challenging aspect of biological sciences, distinguishing it from the exact sciences like physics or chemistry.While biological mechanisms are qualitatively valid, calibrating parameters on an individual basis is often necessary.A few notable exceptions, such as the trinucleotide RNA codes for amino-acid translation, have led to significant breakthroughs in biology and medicine.These codes, which are quantitatively precise and universally applicable across cells, have ushered in the era of genetic engineering, gene therapies, and RNA vaccines.However, similar precision and generalizability in the neural dynamics of motor control have yet to be achieved.
Our brains are capable of generating diverse motor behaviors, covering a highly complex set of spatiotemporal kinematic patterns.Although the mechanisms of motor control are often nonlinear and multidimensional, recent studies suggest that the cerebellum plays a crucial role in linearly coding the kinematics.The cerebellum regulates the end-point precision of reach movement 1,2 , motor-state changes of skilled movement [3][4][5] , eye saccades 6,7 , tongue 8 and harmaline-induced movements 9 .The cerebellum is also adept at maintaining temporal accuracy 10 , establishing specialized cortical connections 11 and forming rapid olivocerebellar circuits 12 for handling fast kinematics.The evidence suggests that our central nervous system may use the cerebellum as a linear encoder to build complex motor kinematics.However, individual variability remains an intrinsic feature of these time-domain observations.Fortunately, insights into human cerebellar disorders have shed light on the cerebellum's role in motor kinematics coding.Cerebellar dysfunctions lead to the breakdown of motor kinematic control in a unique feature linked to motor frequencies.Essential tremor, the most common movement disorder, is characterized by involuntary rhythmic movements with a consistent motor frequency, linked to excessive cerebellar oscillations [13][14][15] .Conversely, cerebellar ataxia features arrhythmic involuntary movements, that are associated with Purkinje cell loss 16,17 .These abnormalities strongly suggest that cerebellar diseases have neuronal coding dysfunctions in forming motor frequencies.
This study investigates the potential of cerebellar frequency coding in shaping motor kinematics. We explored the frequency building blocks at both cellular and population levels and established that motor frequency coding is not only biologically robust but also mathematically precise and generalizable. This suggests a cerebellar algorithm capable of creating complex motor kinematics with designed frequency dynamics.
Results
Frequency-dependent cerebellar oscillations precisely report motor rhythms.
Our initial investigations focused on whether the cerebellum encodes the motor frequencies of self-generated rhythmic movements in mice.To trick the mice into generating motor behaviors at a pre-determined frequency, we applied a horizontal vibrating platform that can vibrate at a specific fixed frequency or frequency as a function of time (Fig. 1a and Video S1).Wild-type mice were trained to develop active motor compensation to the vibrations and could walk and stand freely on the platform (Video S2).Self-generating motion can be calculated by subtracting the pre-designed sinusoidal platform vibrations from the head-mounted accelerometer signals, including both vibration and active motion (Fig. 1b).Both platform and head signals were detected simultaneously with accelerometers of the same design.When the mouse was at rest, the head moved with the platform, leading to similar waveforms of accelerometer signals from the head or the platform (Fig. 1b, gray part).When the mouse performed compensatory movement to cancel out the platform vibrations, the head signals were dampened by motor compensation (Fig. 1b, orange part and Video S2).The vibrations also allowed multiple muscles and joints to react at the same rhythm, which enhanced the frequency information across cerebellar topography.During 16-Hz platform vibrations, simultaneous local field potential (LFP) recordings from the cerebellar cortex revealed corresponding 16-Hz cerebellar oscillations (Fig. 1c-f).Based on this initial observation, we trained the mice with a protocol including multiple vibratory frequencies (Fig. 1g), covering the physiological frequency range of spontaneous motor behaviors 18 .
We first performed a cross-correlation analysis between cerebellar LFPs and mouse motions (Fig. 1h). Consistent with previous knowledge, cerebellar signals were positively and significantly correlated with motions (Fig. 1i-j). However, it is possible that the cerebellar LFPs predominantly reflect sensory inputs. We therefore cross-correlated cerebellar LFPs with accelerometer signals, which reflected overall motions and therefore the corresponding overall sensory inputs. While the accelerometer signals also had strong frequency-dependency (Fig. 1d), they were poorly correlated with cerebellar signals (Fig. 1k), suggesting a motor-predominant contribution to the cerebellar LFPs. While the cerebellar LFPs significantly represent motor kinematics, the cross-correlograms were highly variable across time and across individual mice (Fig. 1i-j), indicating a qualitatively valid but quantitatively imprecise scenario.
Next, we processed the same signals in the frequency domain (Fig. 2a-d). The trained mice consistently produced movements at the corresponding motor frequencies, with notably enhanced cerebellar LFP amplitudes (Fig. 2c). However, the enhanced LFP amplitudes were highly variable and did not follow the frequency-dependent increment of motion powers (Fig. 2d), and they could not be precisely correlated with motor amplitudes (Fig. 2e). In contrast, peak cerebellar oscillatory frequencies accurately encoded motor frequencies, demonstrating minimal individual variability and underscoring the cerebellum's potential role in quantitative motor-rhythm coding (Fig. 2f).
The extracted frequency in Fig. 2f is the section-based average of frequency-dependent motions. If the cerebellum truly engages in the rhythm control of motor kinematics, the frequency coding should precisely reflect kinematic details. We therefore performed a second-by-second analysis of all recordings, examining frequencies and amplitudes at each second (Fig. 2g-h).
The cerebellar frequency consistently matched the motor frequency across all mice and throughout most of the 2,160 data points, highlighting a robust, quantitatively precise coding mechanism (Fig. 2g-j).By comparing the time and frequency domains, the imprecision of cerebellar kinematic coding is mainly contributed by the amplitudes mismatches between cerebellar and motion signals (Fig. 2k).Next, we evaluated the interposed nucleus of the deep cerebellar nuclei (DCN), the output structure of the motor cerebellum.The DCN LFPs were significantly but variably correlated with the motor kinematics in the time domain (Supplementary Fig. 1), whereas LFP frequencies consistently matched motor frequencies across all examined mice and all 2,880 data points (Supplementary Fig. 2).
In summary, the cerebellum accurately encodes motor frequencies during self-generated rhythmic movements in mice, with minimal observable individual variability.
DCN neurons calculate motor frequencies throughout populational coding.
LFPs are spatiotemporal summations of neuronal signals, so the frequency-coding building blocks must be understood at the single-cell level. To this end, we simultaneously recorded single-unit activities and LFPs from the interposed nuclei of the DCN and analyzed corresponding motor kinematics in freely moving mice (Fig. 3a-b and Supplementary Fig. 3). We first evaluated whether DCN neuronal firing rates can represent motor frequencies. The motor frequencies were poorly correlated with neuronal firing rates, burst rates, or their mean firing rates (Fig. 3c), arguing against a simple rate-coding algorithm. We next evaluated whether the changes in firing probability, instead of the firing rate itself, could have a tuning periodicity that represents motor frequencies. We leveraged vector strength spectrum analysis [19][20][21][22][23] , a mathematical method using frequency vectors to unbiasedly extract probability tuning strength across frequencies (Fig. 3d). The vector strength frequencies were highly variable at the single-cell level (Fig. 3e). However, a specific frequency emerged with increasing prominence as more and more neurons were recruited (Fig. 3f). This populationally encoded frequency converged toward the matched DCN oscillatory frequency and motor frequency with the same numerical value (Fig. 3g-h), with an increasing signal-to-noise ratio during the recruitment (Fig. 3i). This populational coding mechanism remained valid across all tested frequencies (Supplementary Fig. 4). Next, we applied autocorrelation to explore the intrinsic tuning of neuronal firing probabilities (Fig. 3j-n). Similar to the results of the vector strength analysis, the autocorrelogram did not generate a consistent tuning frequency at the single-cell level but faithfully reported the motor frequencies at the populational level (Fig. 3j-n and Supplementary Fig. 5).
If the DCN neurons contribute to the generation of motor frequencies, the neuronal firing times should not be random but periodically tuned to the phases of the frequency-dependent motor kinematics. To validate the prediction, we extracted the instantaneous phases of motor kinematics based on the neuronal firing times (Fig. 3o) and quantified the phasic bias by the polarity index, a numerical index ranging from 0 (purely random firings) to 1 (completely phase-locked firings) 13 .
While some units exhibited higher polarity when compared to the shuffled data (Fig. 3p-q), all units had relatively low polarity indexes (< 0.4) (Fig. 3r); therefore, no single neuron could explain the precise frequency coding of motor kinematics. Notably, the neuronal firings showed stronger recruitment of more biased units and greater polarity indexes at the populational level (Fig. 3r-s, Supplementary Figs. 6-7). Direct visualization of simultaneously recorded single units also supported the prediction of the abovementioned frequency and phase analyses with populational recruitment (Supplementary Fig. 8).
Despite the findings of various tuning frequencies, the average firing rates of DCN neurons remained similar (Fig. 3c). We also performed computational modeling of noisy DCN neurons with baseline firing rates of 20-22 Hz. When receiving 16-Hz inhibitory inputs from PCs, the populational tuning frequency converged to 16 Hz, while the mean firing rates stayed the same (Supplementary Fig. 9). This supports the experimental data, indicating that DCN neurons can adapt their population tuning frequencies to encode motor frequencies without significantly changing their intrinsic firing properties.
Taken together, the DCN neurons encode the frequencies of motor kinematics through populational recruitment. While each neuron generates noisy or stochastic signals, the neuronal population achieves a high signal-to-noise ratio and precise frequency coding. This confirms that LFPs, as spatiotemporal summations of these population activities, accurately reflect the synchronized frequencies between neuronal codes, LFPs, and motor kinematics.
Rhythmic DCN stimulation induces motor rhythms.
To establish the causality of the frequency-coding mechanism in motor kinematics, we optogenetically stimulated DCN neurons in Thy1:ChR2-EYFP mice and recorded the resultant motor kinematics using a pressure-sensing force plate 13,18 (Fig. 4a). Rhythmic stimulation led to a periodic increase in neuronal firings (Fig. 4b). Consistently, the single-unit firing rates were well above the motor frequencies (Fig. 4c), again arguing against a rate-coding algorithm. Instead, the rhythmic optogenetic stimulation generated motor rhythms at the stimulating frequencies, and the populational tuning frequencies precisely converged to the motor frequencies in all tested scenarios (Figs. 4d-h). Phase analysis further verified the consistent feature of populational recruitment at all stimulating frequencies (Supplementary Fig. 10).
We also evaluated cerebellar LFPs simultaneously recorded with the motor kinematics (Fig. 4i-j).The optogenetic stimulation led to increased but varied amplitudes of cerebellar oscillatory strengths and motor rhythms (Fig. 4k-l).However, cerebellar and motor frequencies were always matched (Fig. 4m).The second-by-second analysis revealed amplitude variations across time, while the oscillatory and induced motor frequencies were always matched (Fig. 4n).Comparison between time and frequency domains confirmed that amplitude variability contributed to the imprecise cerebellar coding of rhythmic movements, while frequency information remained numerically precise (Fig. 4o).
Furthermore, a strong phase relationship between DCN firings and cerebellar LFPs indicated potential interactions between the cerebellar cortex and DCN (Supplementary Fig. 11).We also explored the role of axonal projections from Purkinje cells (PCs) to DCN in this frequency-coding process (Supplementary Fig. 12).Rhythmic stimulation of PC axonal terminals generated rhythmic motions at the stimulating frequencies with matched populational coding mechanism, phase recruitment, and cerebellar oscillations across all tested mice (Supplementary Figs.12-14).
Computational modeling also echoed the same results (Supplementary Fig. 9).
Taken together, DCN neurons generate populational tuning of firing probabilities through PC-to-DCN modulation.
The cerebellum generates dynamic frequency evolution for non-rhythmic movements.
While the previous results detailed the cerebellum's encoding of rhythmic movements, most everyday movements are non-rhythmic. Theoretically, any finite signal, whether rhythmic or not, can be fully represented and reconstructed in the frequency domain. Non-rhythmic signals can be constructed using dynamically changing instantaneous phases/frequencies and amplitudes (via the Hilbert transform) or multiple sets of these components in linear combinations (via the Hilbert-Huang transform). Therefore, if the cerebellum can generate highly dynamic frequencies across time, it has the potential to create non-rhythmic complex motor kinematics with the same frequency-coding mechanism.
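As a concrete illustration of this point, the sketch below (Python, not the in-house MATLAB pipeline used in this study) builds a linear chirp like the one used in the vibration protocol and recovers its time-varying instantaneous frequency with the Hilbert transform; the sampling rate is an assumed value.

```python
# Sketch: a 4-25 Hz linear chirp and its Hilbert-based instantaneous frequency.
import numpy as np
from scipy.signal import chirp, hilbert

fs = 1000.0                                        # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                       # 30-s chirp, as in the vibration protocol
x = chirp(t, f0=4, t1=30, f1=25, method="linear")

phase = np.unwrap(np.angle(hilbert(x)))            # instantaneous phase (rad)
inst_freq = np.diff(phase) * fs / (2 * np.pi)      # instantaneous frequency (Hz)
print(f"{inst_freq[1000]:.1f} Hz at t=1 s, {inst_freq[-1000]:.1f} Hz near t=29 s")  # ~4.7 and ~24.3
```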
To explore this hypothesis, we introduced floor vibrations with a linear chirp waveform to mice, a complex, non-rhythmic waveform characterized by constantly changing frequencies in a designed linear trend (Fig. 5a-c and Video S3). This waveform is a strictly non-rhythmic pattern in which the instantaneous frequencies at any two moments are different. Using a linear chirp vibration from 4-25 Hz in 30 seconds, the mouse cerebellum generated dynamic cerebellar oscillations and compensatory motions with frequency dynamics matching the designed protocol (Fig. 5d-e). These self-generated cerebellar oscillations correlated strongly with compensatory motions but showed minimal correlation with residual body movements recorded by an accelerometer (Fig. 5f-g). Consistently, while the frequency-dependent amplitudes of both cerebellar oscillations and motions were significantly increased (Fig. 5d), the magnitudes of increment remained poorly correlated in the second-by-second analysis, therefore prohibiting the precise amplitude coding of motor kinematics (Fig. 5h).
Next, we optogenetically illuminated the DCN with the same linear chirp in Thy1:ChR2-EYFP mice. Cerebellar oscillations could be reliably generated with precisely matched time-frequency dynamics. More importantly, the mice developed complex motor kinematics with motor frequencies that matched the cerebellar oscillatory frequencies at nearly every time point (Fig. 6a-d and Video S4). Analysis of DCN single-unit activities during chirp stimulation revealed a unique pattern of neuronal recruitment consistent with the prediction from the stimulation dynamics (Fig. 6f-i). The neurons' ability to follow these complex temporal dynamics supports their role in forming rapidly changing frequency dynamics. Consistently, the populational DCN firing probabilities were faithfully tuned to the Hilbert-based instantaneous phases/frequencies, cerebellar LFPs, and motion kinematics (Supplementary Fig. 15).
Besides linear chirp, we further pushed the complexity of frequency dynamics by optogenetically illuminating DCN with complex chirp waveforms (Fig. 6j-k).Like the simpler linear chirps, complex chirp illumination evoked corresponding dynamics of cerebellar oscillations (Fig. 6l-n) and neuronal firings (Fig. 6o-q), thus generating matched frequency dynamics of mouse motor kinematics.While we achieved frequency precision for simple or complex motor kinematics, the motor amplitudes remained imprecisely correlated (Figs.6c & 6n).
Therefore, this approach has yet to generate functional or skilled movements, which requires precise coding for both motor frequencies and amplitudes across all time points.Taken together, the cerebellum reports complex frequency dynamics and matched motor kinematic frequencies in self-generated, non-rhythmic movements.Optogenetic stimulation confirmed that the cerebellum can causatively construct non-rhythmic motor kinematics by dynamically encoding motor frequencies.With the preserved algorithm and numerical precision of frequency coding across all tested mice, we can optogenetically create complicated motor kinematics with designed motor frequencies.
Cerebellar frequency coding predicts skilled tongue movements.
The vibration platform and force plate targeted global body motions with multi-joint synchrony. We therefore sought to determine whether the cerebellar frequency-coding algorithm could predict more localized, skilled movements, and investigated tongue movements during licking behaviors with simultaneous electrophysiological recordings from the dentate nucleus of the DCN (Supplementary Fig. 16a-c). Consistently, the frequencies of dentate LFPs were highly correlated with the licking rates (Supplementary Fig. 16d), and the single-unit activities were tuned with the dentate LFPs at the populational level (Supplementary Fig. 16e-h).
Taken together, the cerebellum encodes frequency dynamics for complex motor kinematics, which is evident in global body movements and skilled tongue movements.
The human cerebellum engages in rhythm control of volitional movements.
To examine whether the human cerebellum also engages in frequency control of volitional movements, we analyzed cerebellar EEG and corresponding surface electromyographic (EMG) signals of healthy subjects performing rhythmic tapping at 4, 5, and 6 Hz (Fig. 7a-b, and Table S1).Mirroring our findings in mice, cerebellar oscillations were detected during finger tapping, closely matching the EMG signal frequencies in a second-by-second analysis (Fig. 7c-f).
To probe the causal role of frequency coding in the human cerebellum, we employed transcranial alternating current stimulation (tACS) to modulate cerebellar oscillations.Using strong currents to modify the frequency of cerebellar oscillations may be dangerous.Therefore, we evaluated the frequency stability of motions by applying 4-Hz tACS to healthy subjects during 4-Hz finger tapping (Fig. 7g-i and Table S1).Similar to the effects of bidirectional modulations of tremor amplitudes by cerebellar tACS 24 , in-phase or anti-phase stimulation may bidirectionally change the stability of motor rhythms.We utilized a 4-Hz click sound to aid subjects in adjusting their tapping frequencies and recorded accelerometer-based kinematics during both sound-on and sound-off periods.The amplitude-independent kinematics were extracted to evaluate frequency stability (see methods).During the sound-off periods, tACS was found to either increase or decrease tapping frequency stability (Fig. 7j), demonstrating effective frequency modulation.
During the sound-on period, the tapping kinematics were tightly guided by the sound, therefore revealing a better correlation to the 4-Hz waveforms without a difference to tACS manipulation (Fig. 7k-l).
Taken together, the cerebellar circuit of the healthy subjects also actively engages in frequency coding of volitional movements.Manipulation of cerebellar oscillations could enhance or suppress the frequency stability of motor rhythms.
Discussion
In this study, we provided mouse evidence and supporting human evidence that the cerebellum encodes motor frequencies for physiological motor kinematics.The frequency is encoded by the integrative phasic-tuning of neuronal firing probabilities at the populational level.
While the motor amplitudes are highly variable and contribute to the variability of cerebellar kinematic coding in the time domain, the cerebellum encodes motor frequency with quantitative precision and generalizability across individuals without the need for additional calibration.This level of precision allows us to engineer frequency dynamics for complex motor kinematics in mice.Among many cerebellar functions, cerebellar rhythm coding emerges as a numerically precise and generalizable algorithm, potentially serving as a mathematical backbone for future quantitative studies of neural dynamics.The key features of frequency coding are summarized in Supplementary Fig. 17.
There are limitations in this study.First, the study design did not include topographical information about different muscle groups, which have been described in the cerebellum 1,25 .We applied a vibration platform for physiological global movements with multiple muscle groups activated at the same frequency.This approach enhanced the frequency-related information against the background but lost the topographical information of muscle groups.The skilled licking movements only involved tongue muscles and minimized topographical concerns.Future studies are required to demonstrate topography-based frequency coding for detailed motor kinematics.
Second, we presented the human evidence that supports the causative roles of frequency modulation of the cerebellum by tACS interventions.However, we did not have the single-cell level of evidence to describe whether mouse and human cerebellar oscillations are generated based on the same mechanism of populational neuronal recruitment.Addressing this gap will likely require intra-surgical recordings or other methods capable of capturing detailed neuronal activity.
The impact of this work is to reveal the cerebellum's use of a simple yet mathematically precise algorithm to manage the diversity and complexity of various movements. However, this is just one aspect of many cerebellar functions, such as motor learning and cognition. Notably, frequency precision cannot be directly transferred to motor precision. For example, skilled movements like reaching, which typically involve simple trajectories and are characterized by low-frequency kinematic components, may suffer from temporal imprecision due to the inherent mathematical trade-off between spectral and temporal resolution. This issue could compromise the precision of such skilled movements. Notably, the cerebellum employs a different strategy for enhancing motor precision, specifically by fine-tuning the deceleration of movements 1,2 . Moreover, while the olivocerebellar pathway can precisely construct dynamic motor frequencies, the counterpart mechanism for motor amplitude coding remains elusive and more complex. Our current findings (Figs. 2, 4-7) and previous studies 13,18,26 in both mice and humans suggest that the amplitudes of frequency-dependent cerebellar oscillations significantly influence frequency-dependent motor amplitudes. Yet, variations in motor amplitudes under consistent levels of optogenetic stimulation or cerebellar oscillations indicate the presence of additional mechanisms beyond cerebellar oscillations and populational coding. Future research needs to elucidate the mechanisms responsible for encoding instantaneous amplitudes, which are crucial for constructing functional motor kinematics.
Motion recordings in freely moving mice
Motion signals of mice were amplified and detected using a 15x22 cm force-sensitive platform (Convuls-1, Columbus Instruments), allowing the mice to move freely.The platform linearly converted the applied weight into voltage for recording, with a conversion rate of 0.45 Volts per Newton (or 141 millivolts per 32 grams of mass per gravity), enabling the platform to sense subtle weight changes caused by the mice's motion.The data were then low-pass filtered at 250 Hz and then digitized at 1,000 Hz using a DAQ device (Cerebus, BlackRock microsystem).Detailed information regarding the systems and settings can be found in our previous paper 13 .
Optetrode implantation and electrophysiology recording
Optetrode 27 , a combination of tetrode and optical fiber, was applied to record single unit activity, deep LFPs and perform optogenetic manipulation simultaneously.The construction of the optetrode involved threading tungsten tetrodes (California Fine Wire Company) and an optical fiber (ThorLabs, FT200UMT) through a microdrive screw (Renishaw) in a 3D-printed tower to stabilize and secure them.Each individual tungsten wire of the tetrode was threaded through the channel holes of the electrode interface board and anchored them using gold pins.Additionally, we utilized small screws (Antrin Miniature Specialties, 0.089 inches in diameter, 0.0625 inches in length) as electrodes to record the LFPs of brain surface of mice.
During the surgery, 3-month-old mice were fixed on the stereotaxic frame under anesthesia with isoflurane. The optetrodes were implanted at the DCN (AP, -6.24 mm; ML, ±2.1 mm; DV, -1.9 mm from dura), and the screws were implanted on the bilateral cerebellar surface (AP, -6.24 mm; ML, ±2.1 mm). To identify the implantation trajectory of the optetrode, NeuroTrace TM DiI (ThermoFisher, N-22800), a tissue-labeling paste, was applied to coat the surface of the optetrode.
After the implantation, we applied dental cements (Superbound, Sun Medical Co., LTD) on the skull to secure the electrodes in place at the end of the surgery.
Electrophysiology signals were sampled at a rate of 30,000 Hz using a DAQ device (Cerebus, BlackRock microsystem or Open Ephys) for subsequent offline analysis, which will be described in detail in the following sections.
Optogenetic stimulation in the cerebellum
We utilized a custom-written LabView code to trigger the output of a diode laser (Cobolt, 473 nm) through a multifunction I/O device (NI 782258-01).This setup allowed us to precisely and linearly tune the output power at a frequency of 2 MHz.The laser power was adjusted individually for each mouse and ranged from 0.5 mW to 5 mW.To ensure accurate light power levels, daily calibrations were performed using power meters (Thorlabs) before the experiments.
In the experiments using multiple stimulating frequencies, trains of blue light (25% duty cycle) at 8, 12, 16, 20, 15 and 10 Hz were sequentially given for 90 seconds, separated by 300-second light-off periods. In the chirp stimulation experiment, linear chirp waves (30 seconds, from 4 Hz to 25 Hz) and complex chirp waves were delivered.
Vibration platform
We applied a customized vibration platform with optical grating to ensure precise control of the vibration frequency and its sinusoidal vibrating waveform up to 120 Hz at an amplitude of 3-mm horizontal vibrations. Two cameras were set to capture the front view and top view of the vibration platform. In the experiments using multiple vibration frequencies, the platform vibrated at 8, 12, 16, 20, 15, and 10 Hz sequentially, with a duration of 90 seconds at each frequency, separated by 2 minutes of non-vibrating periods. In the chirp vibration experiment, 10 chirp vibration periods (30 seconds, from 4 Hz to 25 Hz) were separated by 30 seconds of non-vibrating periods, and we repeated the protocol 10 times in each experimental section. We used an Open Ephys acquisition board to record neural electrophysiology signals, mouse acceleration signals, and vibration signals. Mouse acceleration and vibration signals were captured through a headstage containing an accelerometer and an accelerometer attached to the vibration platform, respectively. The signals were recorded and digitized at a sampling rate of 30,000 Hz. To obtain the compensated motion signals, we applied a band-pass filter within the frequency range of 3-30 Hz to the vibration signals and the mouse acceleration signals. We subtracted the mouse acceleration signals from the vibration signals, resulting in the compensated motion signals.
Single-unit spike sorting and burst detection
Spikes were sorted with one of two sorting tools, Offline Sorter TM (OFS) software or Kilosort3 software 28 . Electrophysiology data acquired through the optetrode were high-pass filtered at 250 Hz, and noise was reduced through digital referencing. Offline Sorter TM focuses on waveforms with higher amplitude and extracts them as spikes. Subsequently, it performs K-means clustering to assign each extracted spike to specific single units. Kilosort3 models the electrophysiology data as a sum of template waveforms triggered on the spike times, enabling the identification and resolution of overlapping spikes. The detection criteria for DCN bursts followed previous studies (74, 75). The inter-spike interval within a burst had to be equal to or smaller than 15 ms. The minimum spike count within a burst was 4.
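A minimal sketch of the stated burst criteria is given below (Python); the spike times are hypothetical, and the actual detection was performed offline after spike sorting.

```python
# Sketch of the burst criteria stated above (ISI <= 15 ms, >= 4 spikes per burst).
import numpy as np

def detect_bursts(spike_times_s, max_isi=0.015, min_spikes=4):
    """Return (burst start time, spike count) for each detected burst."""
    bursts, run = [], [spike_times_s[0]]
    for prev, cur in zip(spike_times_s[:-1], spike_times_s[1:]):
        if cur - prev <= max_isi:
            run.append(cur)
        else:
            if len(run) >= min_spikes:
                bursts.append((run[0], len(run)))
            run = [cur]
    if len(run) >= min_spikes:
        bursts.append((run[0], len(run)))
    return bursts

spikes = np.array([0.010, 0.020, 0.030, 0.041, 0.300, 0.800, 0.810, 0.822])
print(detect_bursts(spikes))   # [(0.01, 4)]: only the first four spikes form a burst
```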
Spectrum analysis of motion and LFP data
The LFP data underwent spectrum analysis following procedures consistent with our previous works 13,29,30 . The digitized data were down-sampled to 1,000 Hz for analysis. Frequency-domain analysis was performed using in-house MATLAB scripts. Welch's method with a Hanning window (each segment is 1 second long and overlaps half of the samples) was utilized to estimate the power spectral density (PSD, μV²/Hz for LFP data and mV²/Hz for motion data). For fixed-frequency stimulation, each PSD data point was calculated from a 20-second window with a 1-second shift. For chirp wave stimulation, a 1-second window without overlap was applied.
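For illustration, the following Python sketch mirrors the described Welch settings (1-second Hanning segments with 50% overlap on 1 kHz data) on a synthetic 16-Hz oscillation; it is not the authors' MATLAB script.

```python
# Sketch of the Welch settings described above, applied to a synthetic
# 16-Hz LFP-like trace sampled at 1 kHz.
import numpy as np
from scipy.signal import welch

fs = 1000
rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / fs)
lfp = np.sin(2 * np.pi * 16 * t) + 0.5 * rng.standard_normal(t.size)

# 1-s Hanning segments, 50 % overlap -> 1-Hz frequency resolution
f, psd = welch(lfp, fs=fs, window="hann", nperseg=fs, noverlap=fs // 2)
print(f"PSD peak at {f[np.argmax(psd)]:.1f} Hz")   # expected ~16 Hz
```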
Vector Strength Analysis
The analysis of single-unit spike timing modulation was carried out using vector strength analysis. It was introduced by Goldberg and Brown in 1969 and has been widely utilized to quantify the phase-locking and synchronization of a spike train, indicating whether a single unit fires at specific phases of a particular modulation frequency [19][20][21][22][23] . The spike timings of individual units were obtained using the methods discussed earlier, and these spike times, represented as a vector (t), were converted into phase angles (p) using the following formula: p = 2πft (mod 2π), where f represents frequency. Phase angles were adjusted to range from −π to π. The vector strength (v) is then calculated with the equation below 31 : v = |(1/n) Σ_k exp(i·p_k)|, where n is the count of spikes, p is the vector of phase angles, and i is the imaginary unit. Since a higher number of spikes often leads to a smaller vector strength, we normalized the vector strength to account for this bias 32 . We first generated a distribution of random vector strengths for n number of spikes by calculating the vector strength with n random phases in 20,000 iterations. The mean and standard deviation of this distribution are then calculated, and the normalized vector strength is the original vector strength subtracting the mean and dividing by the standard deviation.
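The procedure can be sketched as follows (Python rather than the in-house MATLAB implementation); the spike train is synthetic, and the z-scoring against random phases follows the normalization described above.

```python
# Sketch of the vector strength and its shuffled-phase normalization at one
# test frequency; the spike train is synthetic and roughly 16-Hz locked.
import numpy as np

def vector_strength(spike_times_s, freq_hz):
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)   # wrapping is implicit in exp()
    return np.abs(np.mean(np.exp(1j * phases)))

def normalized_vector_strength(spike_times_s, freq_hz, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    v = vector_strength(spike_times_s, freq_hz)
    n = len(spike_times_s)
    null = np.abs(np.mean(np.exp(1j * rng.uniform(0, 2 * np.pi, (n_iter, n))), axis=1))
    return (v - null.mean()) / null.std()          # z-score against random-phase spiking

rng = np.random.default_rng(2)
spikes = np.arange(160) / 16 + 0.002 * rng.standard_normal(160)   # ~16-Hz periodic firing
print(f"normalized vector strength at 16 Hz: {normalized_vector_strength(spikes, 16.0):.1f}")
```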
The above steps only result in the vector strength at a certain frequency. In order to obtain a vector strength spectrum illustrating the frequencies at which the spike train achieves phase-locking, the aforementioned steps were repeated for each frequency ranging from 1 Hz to 50 Hz in 0.01-Hz increments. The resulting spectrum was then subjected to the removal of exponential decay and smoothed using a Gaussian-weighted moving average. Prominent peaks with a prominence larger than 1% of the mean intensity in the smoothed spectrum were subsequently identified. The prominence criterion prevents the reporting of random fluctuations; prominence is defined in the MathWorks documentation. By iterating the steps above 10 times with shuffled spike times and averaging them, we acquired the shuffled vector strength that served as a control.
To assess the contribution of population coding, we summed the normalized vector strength spectra from individual units in a random sequence, one by one. This resulted in a cumulative spectrum, and we examined the signal-to-noise ratio (SNR) after the inclusion of 10%, 20%, 40%, and 80% of the total units. The SNR was defined relative to a "noise" range, a 5-Hz bandwidth characterized by the least intensity in the spectrum. To mitigate potential bias, this iterative procedure was replicated 100 times. All these procedures were executed using an in-house MATLAB script.
Correlation spectrum (autocorrelogram)
We conducted an analysis of the firing modulation of single units to assess their periodic activity. The single-unit data were down-sampled from 30,000 Hz to 250 Hz and subsequently binarized into an array containing either 0 (indicating time without spike firing) or 1 (indicating time with spike firing). This binary array underwent autocorrelation using a maximum lag of 1 second, resulting in an autocorrelation function (ACF). To determine the firing modulation of the single unit, we applied the fast Fourier transform (FFT) to the ACF with a frequency resolution of 0.1 Hz. We extracted significant frequency components by identifying peaks in the frequency spectrum of the firing modulation, with a prominence exceeding 1% of the mean intensity. As with our vector strength analysis, our examination focused on the frequency range of 0-30 Hz, which corresponds to linear motor kinematic coding. We replicated the random-recruiting approach used for the vector strength spectrum. The definition of the signal-to-noise ratio remained consistent. All the procedures detailed above were implemented using an in-house MATLAB script.
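A compact sketch of this autocorrelation-spectrum procedure is shown below (Python, with a synthetic 16-Hz-modulated spike train); the zero-padding length is an assumption chosen to reach the stated 0.1-Hz resolution.

```python
# Sketch of the autocorrelation spectrum: spikes binarized at 250 Hz,
# autocorrelation up to a 1-s lag, FFT zero-padded to ~0.1-Hz bins.
import numpy as np

fs = 250
rng = np.random.default_rng(3)
spike_times = np.arange(0, 20, 1 / 16) + 0.004 * rng.standard_normal(320)   # jittered 16-Hz unit

binary = np.zeros(20 * fs)
binary[np.clip((spike_times * fs).astype(int), 0, binary.size - 1)] = 1
binary -= binary.mean()

max_lag = fs                                               # 1-s maximum lag
full_acf = np.correlate(binary, binary, mode="full")
acf = full_acf[binary.size - 1 - max_lag: binary.size + max_lag]

nfft = int(fs / 0.1)                                       # zero-pad for 0.1-Hz bins
spec = np.abs(np.fft.rfft(acf, n=nfft))
freqs = np.fft.rfftfreq(nfft, d=1 / fs)
print(f"firing-modulation peak near {freqs[1:][np.argmax(spec[1:])]:.1f} Hz")   # ~16 Hz
```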
Spike-phase analysis
To examine the phasic tuning relationship between the single-unit firing probability and the continuous data (cerebellar LFP and the motor kinematics), we coupled the single-unit spike times with the instantaneous phase of the continuous data. First, both the single-unit spike times and the LFP were down-sampled from 30,000 Hz to 1,000 Hz to facilitate effective filtering. Next, we applied a band-pass filter to the continuous data with a range of ±3 Hz around the frequency of interest (e.g., 4, 8, 12, 16, 20, 15 and 10 Hz). Utilizing the Hilbert transform, we calculated the instantaneous phase of the filtered data and corrected it by π/2. Extracting the phase corresponding to each single-unit spike time, we visualized these extracted phases as polar histograms. Furthermore, we introduced a control by shuffling the instantaneous phases and pairing these randomized phases with each spike time, resulting in shuffled polar histograms. To quantify the phasic bias, we computed the polarity index 13 . This index involves summing each phase as a unit vector and then dividing by the total number of vectors. The polarity index ranges between 0 (indicating a purely random distribution across phases) and 1 (indicating a completely biased distribution towards a specific phase).
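The sketch below illustrates the core of this computation in Python on synthetic data; the filter design (4th-order Butterworth) is an assumed choice, and the π/2 phase-convention correction is omitted for brevity.

```python
# Sketch of spike-phase coupling and the polarity index on synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def polarity_index(spike_times_s, continuous, fs, f0, bw=3.0):
    b, a = butter(4, [f0 - bw, f0 + bw], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, continuous)))      # instantaneous phase
    spike_phases = phase[(np.asarray(spike_times_s) * fs).astype(int)]
    return np.abs(np.mean(np.exp(1j * spike_phases)))          # 0 = random, 1 = fully locked

fs = 1000
t = np.arange(0, 20, 1 / fs)
lfp = np.sin(2 * np.pi * 16 * t)               # stand-in for the 16-Hz continuous signal
spikes = np.arange(0.25, 19.75, 1 / 16)        # unit firing at a fixed 16-Hz phase
print(f"polarity index: {polarity_index(spikes, lfp, fs, 16.0):.2f}")   # close to 1
```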
Correlation analysis of cerebellar LFP data
To examine the relationship between the cerebellar LFP and various signals (vibration signals, acceleration signals, motion signals, and chirp stimulation signals of the laser), we calculated their cross-correlation using an in-house MATLAB script based on the "xcorr()" function. The cross-correlation was computed with a 1-second window that shifted along the data. We extracted the maximal values from each calculation, resulting in a time series of the maximal cross-correlation between the cerebellar LFP and the other signals.
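A simplified Python analogue of this windowed analysis is sketched below; the window and step sizes follow the description, while the test signals are synthetic and the normalization to [-1, 1] is an assumption for readability.

```python
# Sketch of the sliding 1-s maximal cross-correlation on synthetic signals.
import numpy as np

def sliding_max_xcorr(x, y, fs, win_s=1.0, step_s=1.0):
    win, step = int(win_s * fs), int(step_s * fs)
    values = []
    for start in range(0, min(len(x), len(y)) - win + 1, step):
        a = x[start:start + win] - x[start:start + win].mean()
        b = y[start:start + win] - y[start:start + win].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        values.append(np.correlate(a, b, mode="full").max() / denom if denom else 0.0)
    return np.array(values)

fs = 1000
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 16 * t)
motion = np.sin(2 * np.pi * 16 * t + 0.8)           # same rhythm, phase-shifted
print(sliding_max_xcorr(lfp, motion, fs)[:3])        # values near 1: shared 16-Hz rhythm
```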
2D correlation analysis of chirp stimulation pattern
An ideal stimulation pattern generated from the chirp wave mentioned previously was obtained by aligning each stimulation point at 0 and plotting all stimulation points from -50 ms to 200 ms. The evoked potentials of DCN single units were aligned and plotted in the same way, resulting in 2D binary matrices of the same dimension. The 2D correlation coefficient between the ideal stimulation pattern and the experimental results was calculated, producing a single value indicating the similarity between the patterns. Shuffled patterns were generated by permuting the timepoints of DCN single-unit spikes. All the steps mentioned above were achieved by an in-house MATLAB script.
Computational simulation
The neuron model. We used the leaky integrate-and-fire model as described previously [33][34][35] . In the model, the membrane potential V of a neuron is given by C dV/dt = −g_L (V − E_L) − g_syn s (V − E_syn) + I, where C is the membrane capacitance, g_L is the membrane leak conductance, E_L is the membrane resting potential, g_syn is the synaptic conductance, s is the synaptic gating variable, E_syn is the synaptic reverse potential and I is other input currents. We further simplified the model into the following equivalent form by dividing both sides by C, which leads to dV/dt = −(V − E_L)/τ_m − g′_syn s (V − E_syn) + I′. The conductance on the right-hand side of the equation is absorbed into 1/τ_m. As a result, g′_syn is a unitless variable, and the input I′ has the unit of voltage. In the model, we also added Gaussian noise as the membrane current, given by σ ξ(t), where ξ(t) is Gaussian-distributed noise with zero mean and unit standard deviation, and σ describes the magnitude of the noise. Adding the noise term into the equation above leads to dV/dt = −(V − E_L)/τ_m − g′_syn s (V − E_syn) + I′ + σ ξ(t), with τ_syn ds/dt = −s + Σ_k δ(t − t_k), where τ_syn is the synaptic time constant and t_k is the time of the k-th input spike. The delta function δ(t) is ∞ at t = 0 and 0 elsewhere. We modeled the excitatory and inhibitory (GABAergic) synapses. The time constant (τ_syn) equals 2 ms for both types of synapses, and the reverse potential (E_syn) is 0 mV for the excitatory and -70 mV for the inhibitory synapses.
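To make the model concrete, the sketch below integrates a single noisy leaky integrate-and-fire neuron of this form with Euler steps in Python; the spike threshold, reset value, resting potential, and noise scaling are illustrative assumptions not specified in the text, and the 16-Hz inhibitory train stands in for PC input.

```python
# Minimal sketch of a noisy LIF neuron with an exponential inhibitory synapse,
# integrated with Euler steps; parameter values are illustrative assumptions.
import numpy as np

dt, T = 1e-4, 2.0                              # 0.1-ms step, 2-s simulation
tau_m, E_L = 20e-3, -65e-3                     # membrane time constant (s), resting potential (V)
tau_syn, E_inh = 2e-3, -70e-3                  # synaptic time constant (s), GABAergic reversal (V)
g_inh, I_drive, sigma = 0.7, 20e-3, 5e-3       # unitless conductance, drive (V), noise magnitude (V)
V_th, V_reset = -50e-3, -65e-3                 # assumed threshold and reset (V)

n_steps = int(T / dt)
pc_spikes = np.zeros(n_steps)
pc_spikes[::int(round(1 / (16 * dt)))] = 1.0   # one inhibitory input spike every 62.5 ms

rng = np.random.default_rng(4)
V, s, spike_times = E_L, 0.0, []
for i in range(n_steps):
    s += (-s / tau_syn) * dt + pc_spikes[i]                    # synaptic gating variable
    dV = (-(V - E_L) - g_inh * s * (V - E_inh) + I_drive) / tau_m * dt
    V += dV + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    if V >= V_th:
        spike_times.append(i * dt)
        V = V_reset
print(f"mean firing rate: {len(spike_times) / T:.1f} Hz")
```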
The network model. The network contains two neural populations, PC (Purkinje cells) and DCN (deep cerebellar nucleus), and each population contains 100 neurons. Each PC neuron receives a noise input (σ = 10) and a sinusoidal input I(t) = A sin(2πft) with amplitude A = 80 mV and modulatory frequency f = 16 Hz. The PC neurons project to the DCN neurons with one-to-one connections via GABAergic synapses (g′_syn = 0.7). The DCN neurons are known to exhibit spontaneous activity, which is modeled by applying a constant membrane current (I′ = 20) and a Poisson spike train (100 Hz) through the excitatory synapse (g′_syn = 0.3) to each DCN neuron. These inputs elicit a spontaneous firing rate of about 20-22 Hz in each DCN neuron.
LFPs.The LFPs of DCN are derived by calculating the mean EPSC (excitatory postsynaptic current) and mean IPSC (inhibitory postsynaptic current) across all DCN neurons and then taking the average of the two mean currents.The EPSC contributes to the negative component of the LFP, while the IPSC contributes to the positive component of the LFPs 36 .We did not consider the distance factor of the neuron in relation to its contribution to the LFPs because we only modeled 100 DCN neurons, and no topographical correlation between these neurons was assumed.
The simulation protocol. We performed a 20,000 ms simulation in each trial. The first 5,000 ms was the resting period in which no sinusoidal input to the PC neurons was provided. PC neurons generally did not fire without the sinusoidal input. Therefore, the DCN neurons were not driven by the PC neurons and only exhibited spontaneous firing activity. After resting, the trial entered a 10,000 ms stimulation period in which the sinusoidal input to the PC neurons was turned on. After the stimulation period, the sinusoidal input was removed, and the trial entered a 5,000 ms post-stimulation resting period. The spike times, EPSCs, and IPSCs of all DCN neurons were recorded during the trial.
The data analysis.We calculated the power spectrum density of LFP and analyzed the vector strength of the spike trains of the DCN neurons using methods similar to those described in the Methods section of the main text.The LFP spectrum was calculated using Welch's method with a Hanning window of 1s.The vector strength was calculated for different numbers of recruited units (neurons) to reveal the effect of population coding.The vector strength was normalized by subtracting the mean and then divided by the standard deviation of the random baseline data, which was calculated based on the vector strengths of 1,000 randomized spike trains.
Tissue clearing and histological validation
After completing the behavioural experiments, mice were perfused transcardially with 4% paraformaldehyde. Their brains were retrieved for further examination of electrode placement and fluorescent expression. Coronal or sagittal sections were cut at a thickness of 500 μm using a vibratome and underwent tissue clearing with RapidClear (Bio-East technology) for 1 week. The histology images were acquired with a fluorescent confocal microscope (SP8, Leica). We assessed both the electrode placement and the fluorescent expression pattern of Thy1-ChR2-EYFP and calbindin x Ai32. In cases where improper electrode placement or insufficient fluorescent expression was observed, the corresponding electrophysiology data from those mice were excluded from further analysis.
Human subjects
Ten healthy subjects received cerebellar EEG recordings during volitional tapping, and six healthy subjects participated in the tACS study. We recruited these subjects from two institutions: the Neurological Institute at Columbia University Irving Medical Center, New York, USA, and the Cerebellar Research Center at National Taiwan University Hospital, Yun-Lin Branch, Yun-Lin, Taiwan. Before participating in the study, all subjects provided written consent. The research protocols were approved by the Institutional Review Boards at both Columbia University and National Taiwan University Hospital. Detailed demographic information of the subjects is provided in Table S1.
Cerebellar EEG recordings and analysis for healthy subjects performing volitional tapping
The cerebellar EEG recordings were performed with the same lead settings as our previous works 13,26,37 . In healthy subjects performing volitional tapping, the EEG signals were sampled at 512 Hz with a 64-channel EEG machine (Quantum, Natus Medical Inc.) and band-pass filtered at 0.3-128 Hz. Muscle activities were recorded by surface EMG, also sampled at 512 Hz by the same EEG machine and band-pass filtered between 20-128 Hz. Surface EMG data were then enveloped based on the 20-millisecond root-mean-square value using an in-house MATLAB function. The pre-processed EEG and enveloped EMG data then underwent the same spectrum analysis described in the previous section.
Accelerometer measurements and transcranial alternating current stimulation (tACS)
In tACS experiments, the acceleration of finger tapping and EEG were recorded using the Brain Vision acceleration sensor MR (3D) and the actiCHamp system (Brain Vision LLC, Morrisville, NC, USA). To perform tACS, we utilized a Soterix Medical 1x1 tES mini-CT device to generate the stimulation waveform, which was then delivered using two 5 x 5 cm SNAPpad sponges (Soterix Medical Inc., Woodbridge, NJ, USA) containing pre-inserted carbon-rubber electrodes, at an intensity of 2.5 mA. These sponge electrodes were firmly secured in place using a head and arm SNAPstrap. The stimulation electrode was targeted 2 cm lateral to the inion, covering the right cerebellar hemisphere, while the reference electrode was positioned on the deltoid muscles of the right arm.
The experiment involved a sound-guided, rhythmic tapping task using the index finger.Baseline recording involved 2 minutes of tapping, including 1 minute of tapping with 4 Hz guided audio sound, and 1 minute of tapping without any audio.After a short rest interval, tACS was delivered for 2 minutes during the tapping task at 4 Hz.The audio cue was applied for 1 minute in every tapping period and then turned off.After stimulation and a rest interval, the tapping task was repeated and recorded again for 2 minutes, including 1 minute of tapping with guided audio sound and 1 minute without any audio.
To assess the phase stability between the accelerometer-recorded motion and tACS, we applied a phase-sensitive cross-correlation. We transformed the motion to preserve its frequency dynamics while eliminating amplitude fluctuations, by extracting its Hilbert-based instantaneous phase and replacing the signal with a unit vector carrying the time-dependent phase. The transformed motion was then cross-correlated with 4-Hz sine waves to evaluate the rhythmicity between the 4-Hz tapping and a perfect 4-Hz signal. The maximal cross-correlation values were calculated. To ensure fair comparisons among subjects, we normalized the mean cross-correlation values (averages of cross-correlation values across the entire experiment) to 1.
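The transformation can be sketched as follows (Python; the tapping traces are synthetic and the implementation details are assumptions rather than the authors' MATLAB code): a steady 4-Hz rhythm yields a higher phase-only correlation to the ideal 4-Hz reference than a frequency-drifting one.

```python
# Sketch of the amplitude-flattened ("phase-only") comparison to a 4-Hz reference.
import numpy as np
from scipy.signal import hilbert

def phase_only_corr_to_4hz(motion, fs):
    phase = np.angle(hilbert(motion - motion.mean()))
    unit = np.cos(phase)                               # amplitude-independent kinematics
    t = np.arange(unit.size) / fs
    ref = np.sin(2 * np.pi * 4 * t)                    # perfect 4-Hz signal
    c = np.correlate(unit - unit.mean(), ref, mode="full")
    return c.max() / (np.linalg.norm(unit - unit.mean()) * np.linalg.norm(ref))

fs = 500
t = np.arange(0, 10, 1 / fs)
steady = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 4 * t)   # stable rhythm
drifting = np.sin(2 * np.pi * (4 + 0.4 * np.sin(2 * np.pi * 0.1 * t)) * t)      # unstable rhythm
print(f"steady: {phase_only_corr_to_4hz(steady, fs):.2f}, "
      f"drifting: {phase_only_corr_to_4hz(drifting, fs):.2f}")                   # steady > drifting
```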
Statistics
Non-parametric analyses were conducted for datasets with sample sizes below 35 or those not following a normal distribution. We applied the Mann-Whitney U test, Wilcoxon signed-rank test, and Kruskal-Wallis test for independent samples, paired groups, and multiple groups, respectively. For datasets with sample sizes exceeding 35 and meeting the homogeneity test for normal distribution, Student's t-test, paired t-test, and one-way ANOVA were employed for independent samples, paired samples, and multiple groups, respectively. Raw data points were illustrated in the figures.

Figure legend (continued): The vector strength spectrum peaks converged to the motion frequency throughout the random recruitment of units. Intensity is in arbitrary units of vector strength (no unit), LFPs, or motions (mV). The blue spectrum represents the mean vector strength of recruited units, the black spectrum represents the DCN LFP, and the purple spectrum represents the motion. (g) Frequency convergence of motions, LFPs, and vector strengths in all trials. The top two subplots show the frequency spectrum of motion (top) and cerebellar LFP (middle). Light lines represent single trials, and heavy lines represent the averages of all trials. The bottom panel shows all peaks with sufficient prominence (see methods) detected in the vector strength spectrums throughout the random recruitment of units. The color gradient from green to blue reflects the increasing number of units recruited to calculate the vector strength spectrum. The color depth indicates the level of prominence (n = 138 units from 8 mice; units with minimum spike number < 10 were excluded to avoid unreliable computation of vector strength). (h-i) Quantitative analysis of vector strength spectrums. Peak frequency differences to motions (h) from vector strength spectrums (left four, green to blue) or from DCN LFPs (rightmost, gray), and the signal-to-noise ratio (SNR, Fig. 3i), indicating peak significance of the corresponding vector strength spectrums. (j-n) The tuning frequencies of neuronal firing probabilities via the autocorrelation spectrum (j) with a representative trial (k), group analysis (l), and quantification (m-n). (o) Scheme of the phasic tuning of SU firing probabilities to the instantaneous phases of motion. (p-q) Representative polar plots. DCN neurons had a greater phasic bias to the phase of motion, quantified by the polarity index. (r-s) Group analysis of cumulative probabilities (r) and values (s) of polarity indexes. DCN neurons revealed stronger phasic tuning to 16-Hz compensatory motion at the populational level (n = 138 units from 8 mice). See methods for detailed definitions of burst detection, vector strength, and peak prominence. Error bars denote S.D. ***p < 0.001, one-way ANOVA (i, n), Wilcoxon matched-pairs signed rank test (s).
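The test-selection rule stated at the start of this Statistics section can be sketched as follows; the use of the Shapiro-Wilk test as the normality check and the recent SciPy API are assumptions for illustration only.

```python
from scipy import stats

def compare_two_samples(x, y, paired=False, alpha=0.05):
    """Pick a test per the rule above: non-parametric for n < 35 or
    non-normal data, parametric otherwise (assumes a recent SciPy)."""
    small = min(len(x), len(y)) < 35
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if small or not normal:
        return stats.wilcoxon(x, y) if paired else stats.mannwhitneyu(x, y)
    return stats.ttest_rel(x, y) if paired else stats.ttest_ind(x, y)

# For more than two groups the analogous pair is
# stats.kruskal(*groups) versus stats.f_oneway(*groups).
```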
used Open Ephys acquisition board to record neural electrophysiology signals, mouse accelerating signals, and vibrated signals.Mouse accelerating and vibrated signals were captured through a headstage containing an accelerometer and an accelerometer attached to the vibration platform, respectively.The signals were recorded and digitized at the sampling rate of 30,000 Hz.To obtain the compensated motion signals, we applied a band-pass filter within the frequency range of 3-30 Hz to the vibrated signals and the mouse accelerating signals.We subtracted the mouse accelerating signals from the vibrated signals, resulting in the compensated motion signals. | 2024-08-17T05:09:25.664Z | 2024-07-30T00:00:00.000 | {
"year": 2024,
"sha1": "f1077b8080fb2a8cfb679788cb28eead23772db5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1077b8080fb2a8cfb679788cb28eead23772db5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202018812 | pes2o/s2orc | v3-fos-license | A comprehensive study of eriocitrin metabolism in vivo and in vitro based on an efficient UHPLC-Q-TOF-MS/MS strategy
Eriocitrin, a main flavonoid in lemons, possesses strong antioxidant, lipid-lowering and anticancer activities and has long been used in food, beverages and wine. However, its metabolism in vivo and in vitro is still unclear. In this study, an efficient strategy was developed to detect and identify metabolites of eriocitrin by using ultra-high-performance liquid chromatography coupled with hybrid triple quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS) based on online data acquisition and multiple data processing techniques. A total of 32 metabolites in vivo and 27 metabolites in vitro were obtained based on the above method. Furthermore, the main metabolic pathways of eriocitrin included reduction, hydrogenation, N-acetylation, ketone formation, oxidation, methylation, sulfate conjugation, glutamine conjugation, glycine conjugation, desaturation and demethylation to carboxylic acid. This study will lay a foundation for further studies on the metabolic mechanisms of eriocitrin.
Introduction
Eriocitrin (eriodictyol 7-O-beta-rutinoside), belonging to the dihydroflavonoid compound class, is widely found in citrus fruits (lemon, citrus, grapefruit), vegetables, processed products (drinks, wine) and so on. [1][2][3] Modern pharmacological studies show that eriocitrin has strong antioxidant, lipid-lowering and anticancer activities. 4 It plays an important role in effectively preventing and improving oxidative stress, hyperlipidaemia, cardiovascular and cerebrovascular diseases as well as cancer. [5][6][7][8][9] As found in the literature, three metabolites of eriocitrin were mentioned in plasma and renal-excreted urine, detected through HPLC and LC-MS analyses. 10 However, until now, no structural information about the metabolites in bile, faeces, intestinal flora and liver microsomes of rats has been reported.
It is commonly known that drugs can have four pharmacological effects through biotransformation: (1) conversion into inactive substances; (2) transformation of the previously inactive drugs into active metabolites; (3) an alteration of the types of drug pharmacological action; (4) the production of toxic substances. [11][12][13] Thus, it is crucial to study the metabolism of drugs in vivo to ensure safety of use. In addition, as the main metabolic organ of the human body, the liver is rich in enzymes, especially cytochrome P450 enzymes. 14 In addition, the gastrointestinal tract is also an important place for drug metabolism, and its intestinal flora has a significant impact on drug absorption, metabolism and toxicology. 15,16 Therefore, in this paper, mass spectrometry was used to investigate the metabolism of eriocitrin in rats, liver microsomes and intestinal flora in order to identify the metabolites and structural information of the products, which will lay a foundation for further studies on the toxicity and activity of metabolites and will provide greater possibilities for the development of new drugs.
With the development of technology, quadrupole time-of-flight mass spectrometry has been widely used as a reliable analytical technique to detect metabolites due to its advantages of high resolution, high sensitivity, high-efficiency separation and accurate mass measurement. 17,18 In this study, ultra-high-performance liquid chromatography coupled with hybrid triple quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS/MS) technology was used. The electrospray ionization (ESI) source was operated in negative ion mode, and full scan combined with multiple mass defect filtering (MMDF) and dynamic background subtraction (DBS) was used to collect data online. MetabolitePilot 2.0.4 and PeakView 1.2 data loading software were adopted to obtain the precise mass numbers of metabolites, secondary mass spectrometry data and decomposition rules of eriocitrin to infer the possible metabolites. Based on the above methods, 32 metabolites in vivo and 27 metabolites in vitro were finally observed. In addition, the metabolic pathways of eriocitrin were explored and summarized for the first time, which is an important part of drug discovery and development and can also provide a basis for further pharmacological research.
Animal Research Center of Hebei Medical University (SCXK 2018-004). All the protocols and procedures for animal handling were carried out following the guidelines of the Hebei committee for care and use of laboratory animals, and were approved by the Animal Experimentation Ethics Committee of the Hebei Medical University (Hebei, China). The conditions of temperature (22-25 °C), humidity (55-60%) and light (12 h light/dark cycle) were standard for the 8 days prior to use. All rats were fasted but allowed water for 12 h before the experiments. These rats were randomly divided into six groups with three rats per group. Groups 1, 3 and 5 were the control groups for blank blood, bile, and urine and faeces, respectively. Groups 2, 4 and 6 were the drug groups for blood, bile, and urine and faeces, respectively. Rats in groups 2, 4, and 6 were given eriocitrin by gavage, which was dissolved in a 0.5% CMC-Na solution at a dose of 50 mg kg−1. However, the rats in groups 1, 3, and 5 were given the same dose of 0.5% CMC-Na solution with no eriocitrin. All rat experiments were conducted in accordance with the committee's guidelines on the Care and Use of Laboratory Animals.
2.3.2. Bio-sample collection. The plasma sample collection was completed as follows: approximately 300-500 μL for each blood sample was collected from the eye canthus of rats into 1.5 mL heparinized tubes at 0.083, 0.167, 0.25, 0.5, 1, 2, 3, 6, 9, 12 and 24 h after gavage. Every blood sample was centrifuged immediately at 1920 × g for 5 min to obtain the plasma. After that, all collected plasma samples were consolidated and stored at −80 °C.
Urine and faeces collection. The rats were placed in separate metabolic cages with free access to purified water, and urine and faeces samples were collected over a 0-72 h period after gavage. 22,23 Finally, all urine and faecal samples were separately combined and stored at −80 °C before pretreatment was conducted.
2.3.3. Bio-sample pretreatment. All biological samples were treated with two methods: protein precipitation with methanol and liquid-liquid extraction with ethyl acetate. An aliquot of 2 mL of mixed plasma, bile or urine was taken, and three-fold methanol or ethyl acetate was added to precipitate proteins or extract, respectively. Then, the mixture was vortexed for 5 min and centrifuged at 21 380 × g for 10 min at 4 °C to obtain the supernatant, which was collected and dried under nitrogen flow at room temperature.
Dried and powdered faecal samples (2.0 g) were added to 3-fold methanol or ethyl acetate and were ultrasonically extracted for 45 min. Next, samples were centrifuged at 21 380 × g for 10 min, and the supernatant was dried under nitrogen gas.
150 μL of acetonitrile was added to the residue above and subjected to ultrasonic treatment for 10 min and centrifugation at 21 380 × g for 10 min to obtain the supernatant, which was passed through a 0.22 μm millipore filter before injection into the UHPLC-Q-TOF-MS/MS system for analysis. Samples from the control and drug groups were treated the same.
2.4. Metabolism in vitro by rat liver microsomes 2.4.1. Phase I metabolism. The representative incubation mixture was assembled in PBS buffer (pH 7.4) with a final volume of 200 μL and contained liver microsomal protein (1.0 mg mL−1), eriocitrin (100 mmol L−1), MgCl2 (3.3 mmol L−1), and β-NADPH (1.3 mmol L−1). 24 Preincubation was conducted at 37 °C for 5 min, and NADPH was subsequently added to start the reaction. After incubation at 37 °C for an additional 90 min, the reaction was stopped by adding 1 mL of ethyl acetate. Next, samples were vortexed and centrifuged for 5 and 10 min, respectively, and the organic phase was collected and evaporated under nitrogen gas. After reconstitution in 100 μL of acetonitrile, samples were passed through 0.22 μm millipore filters and stored at −20 °C until analysis. Blank groups underwent incubation without the addition of eriocitrin, the control groups were incubated without the addition of NADPH, and the sample groups, which were carried out in triplicate, underwent the full treatment described above. 25 2.4.2. Phase II metabolism. The representative incubation mixture was implemented in PBS buffer (pH 7.4) with a final volume of 200 μL and contained liver microsomal protein (1.0 mg mL−1), eriocitrin (100 mmol L−1), MgCl2 (3.3 mmol L−1), and UDPGA (2 mmol L−1). Preincubation was performed at 37 °C for 20 min; subsequently, UDPGA was added to start the reaction. After incubation at 37 °C for an additional 1 h, the reaction was stopped by adding 200 μL of acetonitrile. Next, samples were vortexed and centrifuged for 5 and 10 min, respectively. Finally, the supernatant was passed through a 0.22 μm millipore filter before injection into the UHPLC-Q-TOF-MS/MS system for analysis. Blank groups were incubated without the addition of eriocitrin, the control groups were incubated without the addition of UDPGA, and the sample groups, which were carried out in triplicate, underwent the treatment described above.
2.5.2. Preparation of intestinal flora culture solution. Fresh intestinal contents (3 g) taken from SD rats were mixed with anaerobic culture medium (30 mL) immediately. After stirring with a glass rod, the samples were filtered with gauze to obtain the intestinal bacterial liquid.
2.5.3. Sample preparation. Eriocitrin (1 mg mL−1, 100 μL) was added to 1 mL of intestinal flora culture medium, which was saturated with nitrogen to remove oxygen. After incubation for 6 h, the reactions were terminated by adding 3 volumes of methanol. Then, the mixtures were vortexed for 5 min and centrifuged at 21 380 × g for 10 min. The organic phase was collected and evaporated under nitrogen gas, reconstituted in 100 μL of acetonitrile, and vortexed and centrifuged again for 5 and 10 min, respectively. The supernatant was passed through a 0.22 μm millipore filter before analysis. Blank groups were incubated without eriocitrin, while the control groups were incubated not in intestinal flora culture solution but in anaerobic culture medium, and sample groups were treated as described above.
Analytical strategy
In this study, an efficient UHPLC-Q-TOF-MS/MS strategy was adopted to systematically identify the metabolites of eriocitrin in vivo and in vitro. The strategy was segmented into three steps: first, online full scan data acquisition was performed utilizing MMDF and DBS settings and the MS/MS spectrum of eriocitrin metabolites to collect data. Next, multiple data processing techniques were adopted by using PeakView 1.2 and MetabolitePilot 2.0.4 software, which contained many data-processing tools, such as XIC, MDF, PIF and NIF, and provided accurate MS/MS information to infer and identify the metabolites of eriocitrin. Finally, numerous metabolites were identified on the basis of accurate mass datasets, specific secondary mass spectrometry information and so on. With regard to the isomers of metabolites, clog P values calculated by ChemDraw 14.0 were used to further distinguish them. In general, the larger the clog P value, the longer the retention time will be in the reversed-phase chromatography system. [26][27][28][29][30] According to the above method, a total of 32 metabolites in vivo and 27 metabolites in vitro (12 metabolites in rat liver microsomes and 20 metabolites in rat intestinal flora) were identified and are shown in Tables 1, 2
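As a simplified illustration of the accurate-mass screening step, measured [M − H]− values can be matched against masses predicted from the parent plus common biotransformation shifts; the shift list, tolerance and function below are illustrative assumptions, not the MetabolitePilot settings used in this work.

```python
PROTON = 1.007276
PARENT_M = 596.1741          # eriocitrin, C27H32O15, monoisotopic mass (Da)

SHIFTS = {                   # illustrative subset of biotransformation shifts, Da
    "loss of C6H10O5 (glucosyl)":   -162.0528,
    "loss of C12H20O9 (rutinosyl)": -308.1107,
    "methylation (+CH2)":            +14.0157,
    "oxidation (+O)":                +15.9949,
    "sulfate conjugation (+SO3)":    +79.9568,
}

def screen(measured_mz, tol_ppm=10.0):
    """Return candidate assignments for a list of measured [M - H]- values."""
    hits = []
    for name, delta in SHIFTS.items():
        theo = PARENT_M + delta - PROTON     # predicted [M - H]- of the product
        for mz in measured_mz:
            if abs(mz - theo) / theo * 1e6 <= tol_ppm:
                hits.append((mz, name, round(theo, 4)))
    return hits

# screen([433.1140])  # e.g., a glucoside-containing product of the parent
```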
Mass fragmentation behaviour of eriocitrin
To identify the metabolites of eriocitrin, it is important to understand the fragmentation of the parent drug (M0). Eriocitrin (C 27 12 Metabolite M7 (C 15 Metabolite M10 (C 15 and m/z 149.0006 (RDA reaction) were detected according to its secondary mass spectrum. After RDA cleavage of the metabolite M20, a methoxy group was lost to obtain a secondary fragment ion at m/z 135.0779. Hence, its possible metabolite structures were inferred.
Metabolites M21 and M22 (C16H14O6) eluted at 16.81 min and 16.99 min, respectively. They had deprotonated molecular ions [M − H]− at m/z 301.0718 and 301.0722, which were 14 Da (CH2) higher than that of M1. Characteristic ions at m/z 287.0544 and 285.0411 were formed through the loss of CH2 and O, respectively. According to the fragment ions at m/z 165.0185 and 135.0448 produced by the RDA reaction, methylation occurred in the A ring. In addition, they were validated with the clog P values of M21 and M22, which were 1.91062 and 2.37062, respectively.
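As a quick arithmetic check (a worked calculation for illustration, not taken from the paper), the monoisotopic mass of C16H14O6 is consistent with the measured values:

M(C16H14O6) = 16(12.0000) + 14(1.00783) + 6(15.99491) = 192.0000 + 14.1096 + 95.9695 = 302.0790 Da,
[M − H]− = 302.0790 − 1.0073 = 301.0718,

which matches the observed m/z 301.0718 and 301.0722 within a few ppm, and is 14.0157 Da (CH2) above the eriodictyol-type ion, in line with the methylation assignment.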
Metabolite M23 (C16H14O5) eluted at 14.06 min and displayed a deprotonated molecular ion [M − H]− at m/z 285.0763, 14 Da (CH2) higher than that of M2. The noteworthy fragment ions at m/z 149.0046 and 135.0018 were yielded by RDA cleavage, indicating that methylation occurred at position 5 in ring A, so the structure was determined. Metabolite M29 (C 15 Metabolite Q20 (C 16 H 14 In this study, a total of 41 metabolites were identified: 32 metabolites were detected in vivo, including 6 metabolites in plasma, 14 metabolites in bile, 19 metabolites in urine and 13 metabolites in faeces. Meanwhile, 27 metabolites were observed in vitro, including 12 metabolites in liver microsomes and 20 metabolites in intestinal flora. Representative MS/MS spectra are shown in Fig. 4, and the proposed metabolic pathways of eriocitrin in vivo, in rat liver microsomes and in rat intestinal flora are shown in Fig. 5. It is worth mentioning that the loss of C6H10O5, C12H20O9, and C12H20O10 was the primary metabolic step that produced further reactions such as the loss of CO, the loss of water, hydrogenation, N-acetylation, ketone formation, oxidation and methylation. Nevertheless, no sulfate conjugation, glutamine conjugation and glycine conjugation occurred in vitro, and desaturation and demethylation to carboxylic acid metabolites were not discovered in vivo but were found in rat liver microsomes.
Although glutamine conjugation and glycine conjugation did not seem to be very common in metabolism, they can occur. [31][32][33] In this study, due to the loss of a powerful sugar group at C-7 in M31 and the 3′-hydroxyl in M32, the steric hindrance would decrease, and glutamine conjugation and glycine conjugation could occur at the 5 and 4′ sites, respectively.
Protein precipitation with methanol and liquid-liquid extraction with ethyl acetate were used in this research to acquire more types and quantities of metabolites. The results showed that 13 metabolites were extracted by methanol and ethyl acetate simultaneously; in addition, 14 metabolites could only be extracted with methanol and 5 only with ethyl acetate, which increased the types and number of eriocitrin metabolites. Meanwhile, more metabolites were extracted with methanol than with ethyl acetate, which may be related to their polarity and ability to form hydrogen bonds. 34 It has been reported in the literature that eriocitrin has strong antioxidant activity. 4 In this paper, oxidation occurred both in vivo and in vitro and was found to be a vital metabolic reaction of eriocitrin, which may be related to its strong antioxidant activity. In addition, many of the metabolites of eriocitrin have been studied. For example, M1 (N2, Q2), namely, eriodictyol, a natural flavonoid compound present in citrus fruits, has been reported to have broad bioactivities such as antioxidant, anti-inflammatory, immunomodulatory and antidiabetic activities. [35][36][37] It is worth mentioning that eriodictyol was found to be one of the most potent insulin secretagogues among hundreds of compounds tested. 38 M19b, namely, hesperidin, a type of citrus bioflavonoid distributed in foods including grapefruits, oranges and lemons, has many pharmacological activities, such as antioxidant, anti-depression and antitumour activities. [39][40][41] Overall, the identification of metabolites of eriocitrin provides a basis for new pharmacological studies, and these metabolites will be further explored in the future.
Conclusions
In conclusion, an efficient strategy for screening and identifying the metabolites of eriocitrin in vivo and in vitro was first established by UHPLC-Q-TOF-MS/MS using online data acquisition and multiple data processing techniques. The results showed that a total of 41 metabolites were identified: 32 metabolites were detected in vivo (6 metabolites in the plasma, 14 metabolites in the bile, 19 metabolites in the urine and 13 metabolites in the faeces), and 27 metabolites were detected in vitro (12 metabolites in the rat liver microsomes and 20 metabolites in rat intestinal flora) under the experimental conditions. In addition to identifying the above metabolites, we also elucidated the metabolic pathways of eriocitrin. Moreover, the incubation of liver microsomes and intestinal flora was applied to eriocitrin for the first time. It was also the first study to investigate the metabolic mechanisms of eriocitrin in vivo and in vitro, all of which provided reference and valuable evidence for further development of new pharmaceuticals and pharmacological mechanisms, laying a foundation for clinical examination and application.
Conflicts of interest
All the authors have declared no conict of interest. | 2019-09-09T21:21:44.491Z | 2019-08-08T00:00:00.000 | {
"year": 2019,
"sha1": "65cb2984fecfc397620317179ce5aabcb2902677",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/ra/c9ra03037a",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0515e456e1e13f7205a0d6bb63bdadb24d7254ed",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
237940155 | pes2o/s2orc | v3-fos-license | Social Shaping for Transactive Energy Systems
This paper considers the problem of shaping agent utility functions in a transactive energy system to ensure the optimal energy price at a competitive equilibrium is always socially acceptable, that is, below a prescribed threshold. Agents in a distributed energy system aim to maximize their individual payoffs, as a combination of the utility of energy consumption and the income/expenditure from energy exchange. The utility function of each agent is parameterized by individual preference vectors, with the overall system operating at competitive equilibriums. We show the social shaping problem of the proposed transactive energy system is conceptually captured by a set decision problem. The set of agent preferences that guarantees a socially acceptable price is characterized by an implicit algebraic equation for strictly concave and continuously differentiable utility functions. We also present two analytical solutions where tight ranges for the coefficients of linear-quadratic utilities and piece-wise linear utilities are established under which optimal pricing is proven to be always socially acceptable.
Social Shaping for Transactive Energy Systems. Zeinab Salehi, Yijun Chen, Ian R. Petersen, Elizabeth L. Ratnam, Guodong Shi.
I. INTRODUCTION
Recent dramatic increases in rooftop solar, backed by energy storage technologies including electrical vehicles and home batteries, are obfuscating the traditional boundary between energy producer and consumer [3]. At the same time, transactive energy systems are being designed to coordinate supply and demand for various distributed energy sources in an electrical power network [5], where power lines (or cables) enable electricity exchanges, and communication channels enable cyber information exchanges. With transactive energy systems supporting the rise of the prosumer (i.e., an electricity producer and consumer), new ways are emerging to enable a fair [4] and sustainable renewable energy transition [6].
In the recent literature, transactive energy systems have been designed for a wide range of power system applications. For example, transactive energy systems are supporting microgrid operations [7], virtual power plant operations [8], and the operation of bulk power systems with a proliferation of renewable and distributed energy resources [9]. Such transactive energy systems are primarily focused on market-based approaches for balancing electricity supply and demand, supporting robust frequency regulation throughout the bulk grid [10]. Typically, agents operating in a transactive energy subsystem (e.g., a virtual power plant) are located across a power network, and market mechanisms are in place to support individual preferences, enabling agents to compete and collaborate with each other [11]. The aim is a dramatic improvement in power systems operational efficiency, scalability, and resilience. The market mechanisms used in transactive energy networks to enable efficient and valuable energy transactions are typically drawn from classical theories in economics and game theory [12].
In standard welfare economics theory [13] it is suggested that resources pricing can be designed to balance supply and demand in a market. In a multi-agent system with distributed resource allocations, agents have the autonomy to decide their local resource consumption and exchange to optimize their individual payoffs as a combination of local utility and income or expenditure. In a competitive equilibrium, resource pricing is achieved when all agents maximize their individual payoffs subject to a network-level supply-demand balance constraint, which in turn maximizes the overall system-level payoff [14]. More specifically, to balance network supply and demand, resource pricing corresponds to the optimal dual variables associated with supply-demand balance constraint, where the price also maximizes the system-level payoff. The prospect of operating a transactive energy system as a market via optimal pricing under a competitive equilibrium has been widely studied in [15]- [21].
Early and recent studies in both the economics and engineering literature, are primarily focused on controlling resources price volatility [23]- [26], rather than the resources price itself. Consequently, in practice, the optimally computed price to balance supply and demand in a market is potentially not socially acceptable. For example, in February 2021, the wholesale electricity price in Texas was considered by prosumers to be unacceptably high after widespread power outages. Prosumers that subscribed to the wholesale price reported receiving sky-high electricity bills, which greatly exceeded societal expectations for payment [22].
Several authors have referred to price volatility as a rapid change in the pricing process. The Black Scholes formula [23] and Heston's extension [24] are the most representative models of stochastic price volatility, while others include SABR [25] and GARCH [26]. The authors in [27] argued that previous models did not penalize price volatility in the system-level objective, so they proposed to modify the system-level objective to account for price volatility, whereby constructing a dynamic game-theoretic framework for power markets. The authors in [28] proposed a different dynamic game-theoretic model for electricity markets, by incorporating a pricing mechanism with the potential to reduce peak load events and the cost of providing ancillary services. In [29], the dual version of the system-level welfare optimization problem was considered, where an explicit penalty term on the L 2 norm of price volatility was introduced in the system-level objective, allowing for trade-offs between price volatility and social welfare considerations.
In this paper, we focus on resource pricing rather than price volatility, to support the design of socially acceptable electricity markets. We propose a social shaping problem for a competitive equilibrium in a transactive energy system, aiming to bound the energy price below a socially acceptable threshold. We focus on parameterized utility functions, where the parameters are abstracted from the preferences of agents. We prescribe a range for the parameters in the utility functions to ensure resource pricing under a competitive equilibrium is socially acceptable for all agents without creating a mismatch in supply and demand. The idea of introducing parameterized utility functions is informed by the concept of smart thermostat agents in the AEP Ohio gridSMART demonstration project [30], [31]. For transactive energy systems organized as multiagent networks operating at competitive equilibriums, we establish the following results.
• We show the essence of the social shaping problem is a set decision problem, where for strictly concave and continuously differentiable utility functions, the set decision is characterized by an implicit representation of the optimal price as a function of agent preferences.
• For two representative classes of utility functions, namely linear-quadratic functions and piece-wise linear functions, the exact set of parameters that guarantees socially acceptable pricing is established, respectively.
In our prior work [1], we introduced the concept of social shaping of agent utility functions, which was followed by an investigation into social shaping with linear-quadratic utility functions [2]. In the current manuscript, the results on conceptual solvability and solutions for piece-wise linear functions are new, and they are supported by the presentation of a series of new and large-scale numerical examples. This paper is organized as follows. In Section II, we introduce two multi-agent transactive energy systems, which are implemented in either a centralized or distributed transactive energy network. In Section III, we motivate the problem of social shaping, to enable the design of socially acceptable electricity pricing. In Section IV and Section V, we present conceptual and analytical solutions to the social shaping problem, respectively. Section VI presents concluding remarks.
II. MULTIAGENT TRANSACTIVE ENERGY SYSTEMS
In this section, we introduce two multi-agent transactive energy systems models, and we recall some fundamental definitions and results related to such systems.
A. Transactive Energy Systems as Multi-agent Networks
We present two simple yet representative settings in which n agents, indexed in the set V = {1, ..., n} and each with local energy supply and demand, form a transactive energy system. The multi-agent network is designed to support microgrid operations, or to support distribution grid operators in managing ubiquitous behind-the-meter renewable energy resources. For simplicity, we assume a lossless electrical network, with extensions to include both real and reactive network and load losses possible. When there is no external energy resource (e.g., a microgrid operating in isolation from the wider power grid), the n agents seek to form an energy market where the overall energy supply and demand are balanced. Such a market must incorporate the diverse interests of individual agents, while ensuring market efficiency.
Multiagent Transactive Energy Systems (MTES). Each agent i produces electricity with a local energy resource, with a i ∈ R ≥0 representing local power production (in kW).
The overall network generation production is represented by C := Σ_{i=1}^n a_i, where C > 0 (in kW). Each agent i makes a decision on its energy consumption load, denoted by x_i ∈ R_{≥0} (in kW). Upon consuming the load x_i, agent i receives a utility h(x_i; θ_i) attributed to its demand preference, where θ_i is the personalized parameter for the load preferences of agent i. Importantly, any shortfall or surplus of energy for each agent i, represented by a_i − x_i, must be accommodated by the transactive market. We denote the price per unit energy by λ ∈ R (in $/kWh), and the income (or cost) of a transaction for agent i is thereby λ(a_i − x_i) when a_i > x_i (or a_i < x_i).
Multiagent Transactive Energy Systems with Strategic Trading (MTES-ST).
We extend the previously defined MTES, by way of supporting strategic trading decisions for each agent i denoted by e i ∈ R. That is, the income (or cost) of the transactions for agent i is λe i , where e i > 0 (or e i < 0). Importantly, there is an inherent constraint on strategic trading decisions for each agent, represented by e i ≤ a i − x i . Specifically, when a i > x i , there is a physical constraint on e i as the amount of energy sold by agent i cannot exceed a i −x i . Furthermore, when a i < x i , there is a physical network constraint on e i for agent i seeking to purchase x i −a i amount of energy from the market.
B. System-level Equilibriums
We draw on welfare theory from economics, considering an effective market price in the context of rational agent decisions. Specifically, for both MTES and MTES-ST, we consider the concepts of competitive equilibriums and social welfare equilibriums to precisely quantify price effectiveness and agent rationality.
Let a = (a 1 , ..., a n ) ∈ R n be a vector representing the local production profile, or otherwise the electricity supply available from n agents. Let x = (x 1 , ..., x n ) ∈ R n be a vector representing the local consumption profile, or otherwise the electricity demand for n agents. Definition 1. For the MTES, a competitive equilibrium is the pair of (1) price denoted by λ * ∈ R, and (2) local consumption profile denoted by x * ∈ R n , under which the following two conditions are satisfied.
(i) The local consumption profile x* maximizes each individual agent payoff; i.e., each x_i* is a solution to the following optimization problem
max_{x_i ≥ 0} h(x_i; θ_i) + λ*(a_i − x_i).   (1)
(ii) The local consumption profile x* balances the total energy consumption and supply across the network; that is,
Σ_{i=1}^n x_i* = Σ_{i=1}^n a_i = C.   (2)
Definition 2. Let e = (e_1, ..., e_n) ∈ R^n denote a strategic decision profile. A competitive equilibrium for MTES-ST is a triplet (λ*, x*, e*), under which the following conditions are satisfied.
(i) The pair (x*, e*) maximizes the individual payoff of each agent, i.e., (x_i*, e_i*) is a solution to
max_{x_i ≥ 0, e_i} h(x_i; θ_i) + λ* e_i, subject to x_i + e_i ≤ a_i.   (3)
(ii) The strategic decision profile e* balances the total resource supply and demand across the network, i.e.,
Σ_{i=1}^n e_i* = 0.   (4)
The aforementioned competitive equilibriums support the establishment of an effective market with supply and demand balanced. Rational agent decisions are supported by way of maximizing their individual payoffs. From classical welfare economic theory, competitive equilibriums guarantee Pareto optimality in the sense that no agent can change a decision without reducing the payoff of other agents [13], [14], [32]. Next, we consider optimality at a system level, in the context of a social welfare problem (i.e., in the absence of a market).
Definition 3. (i) For the MTES, a social welfare equilibrium
x* is achieved by way of solving the following maximization problem
max_{x_i ≥ 0, i ∈ V} Σ_{i=1}^n h(x_i; θ_i), subject to Σ_{i=1}^n x_i = Σ_{i=1}^n a_i = C.   (5)
(ii) For the MTES-ST, a social welfare equilibrium (x*, e*) is achieved by solving the following maximization problem
max_{x_i ≥ 0, e_i} Σ_{i=1}^n h(x_i; θ_i), subject to x_i + e_i ≤ a_i for all i ∈ V and Σ_{i=1}^n e_i = 0.   (6)
The social welfare equilibrium is consistent with the rationale of a utility-based system planner that designs and enforces energy consumption decisions across all agents. Such a system planner is not concerned by individual payoffs of each specific agent, but rather, the planner aims to maximize energy allocations across the network, even at the expense of some agents receiving suboptimal allocations. In what follows, we investigate conditions where both the competitive and social welfare equilibriums are equivalent.
Proposition 1 (as in [1]). Suppose each h(·; θ i ) is a concave function over the domain R ≥0 . Then for both MTES and MTES-ST, the competitive equilibriums and the social welfare equilibriums are equivalent.
Equivalence for the MTES refers to a direct mapping of the competitive equilibrium pair (λ*, x*) to the social welfare equilibrium x*. Or otherwise, if x′ is a social welfare equilibrium for the MTES, then there exists λ* ∈ R such that (λ*, x′) is a competitive equilibrium. Equivalence for MTES-ST is similarly defined, providing a guarantee that market-based agent decisions coincide with utility-based planner decisions. More specifically, by solving the optimization problem for the social welfare equilibrium for either MTES or MTES-ST, the optimal dual variable is a Lagrangian multiplier associated with the supply-demand balance constraint, which is the optimal price for the competitive equilibrium [1].
Importantly, there is a critical difference between MTES and MTES-ST in terms of optimal pricing. For MTES, the prices under a competitive equilibrium can be either positive or negative. In contrast, for MTES-ST the price under a competitive equilibrium is always non-negative. Next, we explore connections between the two MTES and MTES-ST models.
Proposition 2. Suppose each h(·; θ i ) is a concave function over the domain R ≥0 . Assume λ * 1 > 0 and λ * 2 > 0 correspond to optimal pricing signals for the competitive equilibrium for MTES and MTES-ST, respectively. Then there holds λ * 1 = λ * 2 , and the agent decisions for MTES and MTES-ST are the same for the respective competitive equilibriums.
Proof. See Appendix VII-A.
C. Practical Implementations of Competitive Equilibriums
In Fig. 1 we illustrate two approaches to implementing MTES or MTES-ST, each approach including three sequential phases (i.e., P1, P2, P3).
Approach 1: Centralized computation via an aggregator.
P1 The aggregator directly communicates with each agent i ∈ V to collect their individual load preferences θ_i and local power production a_i.
P2 The aggregator computes the overall network generation production C, then the social welfare equilibrium x*, and identifies the corresponding Lagrangian multiplier λ*, which is the optimal price.
P3 The aggregator directly sends (λ*, x*) to each agent i ∈ V.
Approach 2: Distributed computation without an aggregator.
P1 Each agent i ∈ V selects their individual preference vector θ_i, and communicates (θ_i, a_i) to other agents.
P2 Each agent i ∈ V runs, for example, a distributed processor agreement protocol [33] to receive the vectors θ = (θ_1, . . . , θ_n) and a = (a_1, . . . , a_n), and each agent computes the overall network generation production C.
P3 Each agent i ∈ V independently solves for a social welfare equilibrium x* and identifies the corresponding Lagrangian multiplier, which is the optimal price λ*.
Each of the two implementations is underpinned by Proposition 1, which states the competitive equilibriums are equivalent to the social welfare equilibriums. That is, the resulting x* is guaranteed to be a competitive equilibrium. Since the social welfare equilibriums depend only on the network generation production C, the processor agreement protocol for the vector a can be replaced by a distributed average consensus algorithm [34], where each agent obtains the average of all a_i, or equivalently, the network capacity C.
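A minimal sketch of the consensus alternative mentioned above is given below, assuming an undirected ring topology and uniform neighbor weights; both choices are illustrative and any connected graph with a doubly stochastic weight matrix would serve the same purpose.

```python
import numpy as np

def average_consensus(a, iters=200):
    """Each agent repeatedly averages with its two ring neighbors.
    All values converge to mean(a); C is then n * mean(a)."""
    n = len(a)
    x = np.array(a, dtype=float)
    W = np.zeros((n, n))
    for i in range(n):               # doubly stochastic weights on a ring
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    for _ in range(iters):
        x = W @ x
    return x                          # each entry approaches C / n

# a = [5, 8, 7, 0]
# C_est = len(a) * average_consensus(a)[0]   # approaches 20
```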
III. THE PROBLEM OF SOCIAL SHAPING
In this section, we motivate and define the problem of social shaping in a multi-agent transactive energy network. We focus on social shaping agent preferences to support optimizationbased design of electricity prices.
A. Motivating Example
Here, we provide a motivating example to highlight conditions where agent preferences significantly influence optimal pricing under competitive equilibriums. Example 1. Consider a MTES with four agents, where the electricity supply available is a = (a_1, a_2, a_3, a_4) = (5, 8, 7, 0). Each agent i has a linear-quadratic utility function of the form h(x_i; θ_i) = −(b_i/2) x_i^2 + b_i m_i x_i, with θ_i = (b_i, m_i), (b_1, b_2, b_3, b_4) = (2, 5, 3, 10) and (m_1, m_2, m_3, m_4) = (6, 5, 6, 5). The social welfare equilibrium is computed by solving the optimization problem in (5), which yields x* = (5.12, 4.65, 5.41, 4.82). The Lagrangian multiplier corresponding to the supply-demand balance constraint is λ* ≈ 1.76. Next, let m_4 take values in the interval [5, 30]. We sample the interval uniformly with a step-size of 1 to obtain 26 different values for m_4. For each m_4, the social welfare equilibrium and optimal price are computed. In Fig. 2, we present the optimal price λ* and the optimal resource allocation x_4* of agent 4 as functions of m_4. From Fig. 2, we observe that the optimal price λ* at m_4 = 30 exceeds the optimal price at m_4 = 5 by a factor of 57. Furthermore, we observe that the preferences of a single agent can significantly alter the electricity price and influence the distribution of resource allocations.
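As a consistency check (a worked computation for illustration, under the assumption that all four allocations are interior), the price follows in closed form from stationarity, x_i* = m_i − λ*/b_i, and the balance Σ_{i=1}^4 x_i* = C = 20:

Σ_{i=1}^4 m_i − λ* Σ_{i=1}^4 (1/b_i) = 20
⇒ 22 − λ* (1/2 + 1/5 + 1/3 + 1/10) = 20
⇒ λ* = 2 / (17/15) = 30/17 ≈ 1.76,

and x* = (6 − λ*/2, 5 − λ*/5, 6 − λ*/3, 5 − λ*/10) ≈ (5.12, 4.65, 5.41, 4.82), matching the reported allocation.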
B. Social Shaping Problem
From Example 1, we observed that the optimal price for the MTES is in need of social shaping at a system level. That is, we observed that one agent was able to consume all available resources at a price that potentially prohibited others from accessing the energy. In what follows, we consider optimal pricing below an upper limit, where the upper limit is deemed socially acceptable by all agents. In this way, utility functions will be restricted within a prescribed range within our MTES framework. More importantly, socially admissible utility functions must correspond to socially acceptable optimal pricing. To this end, we define a social shaping problem for MTES and MTES-TS. where the preference θ i ∈ Θ is selected by agent i in the context of their utility function h(·; θ i ) for i ∈ V. Let λ † > 0 denote an upper energy pricing limit, which represents the threshold of socially acceptable prices for agents i ∈ V. Find the set Θ, such that agent preferences (θ 1 , . . . , θ n ) ∈ Θ n can be selected in a way where the optimal price corresponding to the competitive equilibriums satisfies λ * ≤ λ † .
IV. CONCEPTUAL SOCIAL SHAPING SOLVABILITY
In this section, we study the solvability of the social shaping problem at conceptual levels.
A. MTES
Here, we introduce the social shaping problem for strictly concave and differentiable functions. Let each h(·; θ i ) be continuously differentiable and strictly concave. It follows from Proposition 1 that the social welfare equilibrium and the competitive equilibrium exist and are equivalent. Consequently, we can consider either the social welfare problem or the competitive problem.
Let x_i* be a solution to (1). Since h(·; θ_i) is continuously differentiable and strictly concave, h′(·; θ_i) is strictly monotone. We denote by l(·; θ_i) the inverse function of h′(·; θ_i). Specifically, when h(·; θ_i) is strictly concave, h′(·; θ_i) is strictly monotone, and as such the inverse must exist. As a result, we can derive
x_i* = max{ l(λ*; θ_i), 0 }.   (7)
Next, substitute (7) into the balancing equality in (2), which yields
Σ_{i=1}^n max{ l(λ*; θ_i), 0 } = C.   (8)
Since C > 0, there exists at least one agent with x_i* ≠ 0, corresponding to x_i* = l(λ*; θ_i). Also, as h′(·; θ_i) is strictly monotone, its inverse l(·; θ_i) is also strictly monotone. Consequently, the left-hand side of (8) is the sum of at least one strictly monotone function, implying that the summation result is also a strictly monotone function whose inverse exists. Fixing θ = (θ_1, . . . , θ_n), there then holds
λ* = l̃(C; θ),   (9)
where l̃(·; θ) is the inverse of the left-hand side of (8). Next, we denote by χ_Θ the maximal value of the set of optimal prices that support all competitive equilibriums when agent preferences are drawn from θ ∈ Θ^n. We define the maximal value of the set of optimal prices by
χ_Θ := max_{θ ∈ Θ^n} l̃(C; θ).   (10)
From the definition of χ_Θ, the following result is immediately clear, indicating the social shaping problem is conceptually captured by a set decision problem with respect to Θ.
Theorem 1. Consider a MTES. Suppose each h(·; θ i ) is strictly concave and differentiable over R ≥0 . Let λ † > 0 denote the threshold for socially acceptable energy pricing, such that λ * ≤ λ † . Then any set Θ satisfying χ Θ ≤ λ † solves the problem for social shaping of agent preferences.
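Evaluating l̃(C; θ), and hence checking χ_Θ ≤ λ†, amounts to solving the monotone equation (8) for λ*. A minimal numerical sketch using bisection is given below; the function names, bracketing interval and tolerance are illustrative assumptions.

```python
def optimal_price(l_funcs, C, lam_lo=-1e3, lam_hi=1e3, tol=1e-9):
    """Bisection on the non-increasing map lam -> sum_i max(l_i(lam), 0)
    to solve sum_i max(l_i(lam), 0) = C, as in (8)."""
    def total(lam):
        return sum(max(l(lam), 0.0) for l in l_funcs)
    lo, hi = lam_lo, lam_hi           # assumes total(lo) > C > total(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > C:            # demand too high: raise the price
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Quadratic utilities from Example 1: l_i(lam) = m_i - lam / b_i
# b, m = [2, 5, 3, 10], [6, 5, 6, 5]
# l_funcs = [(lambda lam, m=mi, b=bi: m - lam / b) for mi, bi in zip(m, b)]
# optimal_price(l_funcs, 20)   # approximately 1.76
```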
B. Homogenous MTES
Consider the situation where agents maintain a common preference, i.e., θ_i = θ̄. For example, each agent i maintains a preference θ̄ which is the average of the true preferences across all agents, that is, θ̄ = (1/n) Σ_{i=1}^n θ_i.
Theorem 2. Consider a MTES with homogenous preferences, i.e., θ_i = θ for all i ∈ V. Suppose each h(·; θ_i) is concave and differentiable over R_{≥0}. Let λ† > 0 denote the threshold for socially acceptable energy pricing, such that λ* ≤ λ†. Then a solution for Θ for the social shaping problem is given by
Θ = { θ : h′(C/n; θ) ≤ λ† }.   (11)
Proof. See Appendix VII-B.
According to Proposition 2, when the price is positive, MTES and MTES-ST yield the same optimal pricing λ * > 0, and agent decisions. Consequently, we define χ Θ as (10), and introduce the following theorem.
Theorem 3. Consider a MTES-ST. Suppose each h(·; θ i ) is strictly concave and differentiable over R ≥0 . Let λ † > 0 denote the threshold for socially acceptable energy pricing, such that λ * ≤ λ † . Then any set Θ satisfying χ Θ ≤ λ † solves the problem for social shaping of agent preferences.
V. ANALYTICAL SOCIAL SHAPING SOLUTIONS
In this section, we focus on two fundamental classes of utility functions, i.e., linear-quadratic functions and piece-wise linear functions, and we show the social shaping problems can be explicitly solved.
A. Linear Quadratic Utility Functions
We impose the following assumption.
Assumption 1. Each agent i ∈ V has a linear-quadratic utility function over R_{≥0},
h(x_i; θ_i) = −(b_i/2) x_i^2 + b_i m_i x_i,   (12)
parameterized by θ_i = (b_i, m_i) with b_i > 0 and m_i > 0.
Theorem 4. Consider a MTES under Assumption 1, and let λ† > 0 denote the threshold for socially acceptable energy pricing. Suppose (b_max, m_max) is selected from the following set
S* = { (b_max, m_max) : m_max ≤ C/n } ∪ { (b_max, m_max) : m_max > C/n, b_max ≤ nλ†/(n m_max − C) }.   (13)
Then λ* is always socially acceptable since λ* ≤ λ†, for all utility functions satisfying b_i ≤ b_max and m_i ≤ m_max.
Proof. See Appendix VII-C. m 1 ), and in what follows we verify that the personalized parameter θ i for each agent is within the range prescribed in (13).
Consider n = 10000 agents, and let the socially acceptable threshold λ † take value from {20, 22, 24, 26, 28, 30}. For each possible λ † , we conduct K = 1000 numerical experiments, for each of which we generate a parameter set (b (k) , m (k) ), where k = 1, . . . , K, by following the aforementioned parameter selection process. In each experiment k, we input parameters (b (k) , m (k) ) to compute the optimal price, that is, the Lagrangian multiplier associated with the equity constraint n i=1 x i = i=1 a i , as in (5). In Fig. 4, we present optimal pricing for quadratic utility functions within the range (13) under different upper limits λ † = 20, 22, 24, 26, 28, 30, each corresponding to the respective 1000 numerical experiments.
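One way such parameter sets can be drawn inside the set (13) is sketched below; the truncated-normal draw for m_max and uniform draws for the remaining agents mirror the sampling described for Example 3 and are assumptions rather than the exact procedure used for these experiments.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_preferences(n, C, lam_dag, rng):
    """Draw (b, m) so that (b_max, m_max) satisfies the bound in (13):
    m_max > C/n and b_max <= n * lam_dag / (n * m_max - C)."""
    lo, hi = C / n, 10 * C / n
    mu, sd = (lo + hi) / 2, (hi - lo) / 4
    m_max = truncnorm.rvs((lo - mu) / sd, (hi - mu) / sd,
                          loc=mu, scale=sd, random_state=rng)
    b_max = n * lam_dag / (n * m_max - C)
    m = rng.uniform(0, m_max, n); m[0] = m_max
    b = rng.uniform(0, b_max, n); b[0] = b_max
    return b, m

# rng = np.random.default_rng(0)
# b, m = sample_preferences(10000, C=some_capacity, lam_dag=20, rng=rng)
# The resulting lambda* (e.g., via the bisection sketch above) should not exceed 20.
```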
Next, let λ † = 20 and let n ∈ {100, 1000, 10000, 100000}. For each possible value of n, we conduct K = 1000 experiments with the corresponding personalized parameter sets, θ i , obtained by applying the aforementioned process. In Fig. 5, we present optimal pricing for quadratic utility functions within the range (13), where 1000 experiments are conducted for each system scale n = 100, 1000, 10000, 100000.
In Fig. 4 and Fig. 5, the middle line, and the lower and upper boundaries of each box (interquartile range or IQR) correspond to the median, and 25/75 percentile of the 1000 optimal prices, respectively. The lower and upper whiskers extend maximally 1.5× of IQR from 25 percentile downwards and 75 percentile upwards, respectively. The points that are located outside the whiskers are considered data outliers.
In Fig. 4, we observe that the optimal prices in the numerical experiments are below the corresponding upper limit, i.e., below the line λ* = λ†, indicating that the optimal pricing is socially acceptable. In Fig. 5, we observe that the optimal prices in the numerical experiments, considering various multi-agent system scales, are lower than the upper limit λ† = 20. Our observations in both Fig. 4 and Fig. 5 correspond to, and validate, Theorem 4.
B. Piece-wise Linear Utility Functions
We next consider the following assumption.
Assumption 2. Each agent i ∈ V has a piece-wise linear utility function over R_{≥0},
h(x_i; θ_i) = min{ β_i x_i, φ_i β_i },   (14)
parameterized by θ_i = (β_i, φ_i) with β_i > 0 and φ_i > 0.
Theorem 5. Consider a MTES under Assumption 2, and let λ† > 0 denote the threshold for socially acceptable energy pricing. Suppose (β_max, φ_max) is selected from the following set
S* = { (β_max, φ_max) : φ_max < C/n } ∪ { (β_max, φ_max) : φ_max ≥ C/n, β_max ≤ λ† }.   (15)
Then the resulting λ* is always socially acceptable since λ* ≤ λ†, for all utility functions satisfying β_i ≤ β_max and φ_i ≤ φ_max.
Proof. See Appendix VII-D.
Example 3. Consider a MTES where each agent i is associated with a piece-wise linear utility function as defined in Assumption 2.
The local resources a_i of each agent are obtained following the same process described in Example 2. The network generation production is then C = Σ_{i=1}^n a_i. Let the preferences of agent 1, (β_1, φ_1), be given by β_1 = λ†, and let φ_1 be a random number sampled from a normal distribution truncated to the interval [C/n, 10C/n]. Let the personalized parameter pairs for the remaining agents be sampled from two uniform distributions, truncated to the interval (0, β_1] and the interval (0, φ_1], respectively. In this way, we seek to validate the design of (β_max, φ_max) = (β_1, φ_1) in the context of the range in (15).
Let n = 10000 denote the number of agents, and let the upper limit λ† take values from {20, 22, 24, 26, 28, 30}. For each upper limit λ†, we carry out K = 1000 experiments. In each experiment k = 1, . . . , K, a different parameter set (β^(k), φ^(k)) is obtained from the aforementioned parameter generation process. For each (β^(k), φ^(k)), we solve the social welfare optimization problem (5), and the optimal dual variable corresponding to the equality constraint Σ_{i=1}^n x_i = Σ_{i=1}^n a_i is obtained as λ*. In Fig. 7, we present the optimal pricing for piece-wise linear utility functions within the range (15), considering 1000 numerical experiments for each upper limit λ† = 20, 22, 24, 26, 28, 30. From Fig. 7, we observe that the dotted line λ* = λ† sits above all optimal prices identified from the numerical experiments, corresponding to the optimal price being socially acceptable. Next, let λ† = 20 and let n ∈ {100, 1000, 10000, 100000}. For each possible value of n, we conduct K = 1000 experiments with the corresponding personalized parameter sets, θ_i, obtained by applying the aforementioned process. We then solve the optimization problem (5) and obtain the optimal price λ*. In Fig. 8, we present the optimal prices for piece-wise linear functions within the range (15), considering the respective 1000 experiments for each system scale n = 100, 1000, 10000, 100000. In Fig. 8, we observe that all optimal prices in the numerical experiments, considering various system scales, are also below the upper limit λ† = 20 required for social acceptance by the agents. From Fig. 7 and Fig. 8, we observe that piece-wise linear utility functions within the range (15) correspond to, and validate, Theorem 5.
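For the piece-wise linear utilities of Assumption 2, the welfare problem (5) is a linear program whose optimal price can also be read off a simple greedy allocation; the sketch below is an illustration of that structure, not the solver used for the experiments, and it returns one valid dual value when the allocation lands exactly on a breakpoint.

```python
def piecewise_linear_price(beta, phi, C):
    """Greedy welfare-maximizing allocation for h_i(x) = min(beta_i*x, beta_i*phi_i):
    fill agents in decreasing order of beta_i up to phi_i; the optimal price is the
    marginal utility of the last unit allocated (0 if total capacity exceeds C)."""
    order = sorted(range(len(beta)), key=lambda i: -beta[i])
    remaining = C
    for i in order:
        if remaining <= phi[i]:
            return beta[i]        # this agent is only partially (or exactly) filled
        remaining -= phi[i]
    return 0.0                     # sum(phi) < C: surplus energy, zero price

# With beta_1 = lam_dag and all beta_i <= lam_dag (as in Example 3),
# the returned price can never exceed lam_dag.
```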
C. Further Discussions: MTES-ST
It is worth emphasizing the results in Theorem 4 and Theorem 5 for MTES continue to be valid for MTES-ST. That is, the same set of preference parameters will enable socially acceptable optimal pricing under competitive equilibriums for MTES-ST. Specifically, from Proposition 2, when the price is positive, MTES and MTES-ST are exactly the same in terms of pricing and agent decisions. This equivalence is related to the second part of the sets proposed in (13) and (15), i.e., m_max > C/n and φ_max ≥ C/n, respectively. Regarding the first part of the proposed sets, i.e., m_max ≤ C/n or φ_max < C/n, Lemmas 8 and 10 in Appendix VII-E show that under these conditions the optimal price for MTES-ST is equal to zero, and thus socially acceptable.
VI. CONCLUSIONS
This paper has considered the problem of shaping agent utility functions in a transactive energy system. In the overall system design, energy supply and demand is balanced while incorporating individual utility preferences for agents, in a way that accommodates socially acceptable pricing. In defining the social shaping problem, we identified a set of preference parameters to provide guarantees that the optimal energy price is below a prescribed threshold. Our established conceptual results indicated that the social shaping problem is characterized by a set decision problem. We also presented two analytical solutions in tight ranges for coefficients of linearquadratic utilities and piece-wise linear utilities, to further demonstrate the application of the social shaping problem to socially acceptable electricity pricing. Future work might include extensions of the social shaping ideas to transactive energy systems operating dynamically, or systems that operate under game-theoretic frameworks.
VII. APPENDIX A. Proof of Proposition 2
It is assumed h(·; θ_i) is concave. Therefore, Proposition 1 states that the social welfare equilibrium and the competitive equilibrium coincide. Consequently, either the social welfare problem or the competitive problem can be solved. Here, we consider the competitive optimization problem of MTES-ST in (3). The inequality constraint x_i + e_i ≤ a_i can be written as
x_i + e_i + s_i = a_i,   (16)
where s_i ≥ 0 is the slack variable. Additionally, substituting (16) into (3) yields an equivalent form of the optimization problem as
max_{x_i ≥ 0, s_i ≥ 0} h(x_i; θ_i) + λ*(a_i − x_i − s_i).   (17)
Let λ*_2 > 0. Then, the objective function in (17) is strictly decreasing with respect to s_i. Consequently, in order for (17) to be maximized, s_i must be minimized, i.e., s_i* = 0. This implies that the inequality constraint x_i + e_i ≤ a_i is active and
e_i* = a_i − x_i*.   (18)
Taking the summation in (18) implies
Σ_{i=1}^n e_i* = C − Σ_{i=1}^n x_i*,   (19)
where C = Σ_{i=1}^n a_i. Then, substituting the balancing equality Σ_{i=1}^n e_i* = 0 in (4) yields
Σ_{i=1}^n x_i* = C.   (20)
Furthermore, substituting s_i = 0 into (17) yields
max_{x_i ≥ 0} h(x_i; θ_i) + λ*(a_i − x_i).   (21)
Comparing (21) and (20) with (1) and (2) implies that the problem of MTES-ST is equivalent to the problem of MTES. Therefore, the agent decisions for MTES and MTES-ST are the same for the respective competitive equilibriums. A similar analysis can be done to show λ*_1 = λ*_2.
B. Proof of Theorem 2
Since h(·; θ_i) is concave, Proposition 1 holds. Therefore, either the social welfare problem or the competitive problem can be examined. Here, we consider the competitive optimization problem of MTES in (1). The optimal solution x_i* solves the following optimization problem
max_{x_i ≥ 0} h(x_i; θ) + λ*(a_i − x_i).   (22)
Let x̂_i be the value of x_i which maximizes the objective function in the absence of any constraints. Then x̂_i is obtained when the derivative of the objective function equals zero, i.e.,
h′(x̂_i; θ) = λ*.   (23)
This implies that all agents have the same x̂_i = x̂. Considering the inequality constraint x_i ∈ R_{≥0} in (22), if x̂ ≤ 0 then x_i* = 0 for all agents, which contradicts the balancing equality Σ_{i=1}^n x_i* = C in (2) (note that C > 0). Consequently, it follows that x̂ > 0 and x_i* = x̂, which is positive and satisfies the inequality constraint. Additionally, from the balancing equality Σ_{i=1}^n x_i* = C, we yield x_i* = x̂ = C/n. Then, substituting x̂ = C/n into (23) obtains h′(C/n; θ) = λ*.
If θ is selected in such a way that h′(C/n; θ) ≤ λ†, then we yield λ* ≤ λ†. This leads to the set Θ in (11).
C. Proof of Theorem 4
1) Preliminary Lemmas: We first introduce some preliminary lemmas which are essential for the proof of Theorem 4.
Lemma 1. Consider the MTES with the quadratic utility function defined in Assumption 1. The optimal load allocation x_i*, which is an optimal solution of the optimization problem (1), is such that
x_i* = m_i − λ*/b_i if λ* ≤ m_i b_i, and x_i* = 0 if λ* > m_i b_i.   (25)
Proof. Rearranging the optimization problem (1), x_i* is the solution to the following maximization problem:
max_{x_i ≥ 0} h(x_i; θ_i) + λ*(a_i − x_i).   (26)
Let x̂_i be the value of x_i which maximizes the objective function in the absence of any constraints. Then x̂_i is obtained when the derivative of the objective function equals zero. That is,
h′(x̂_i; θ_i) − λ* = 0,   (27)
which implies x̂_i = m_i − λ*/b_i. Considering the inequality constraint x_i ≥ 0 in the maximization problem (26), when λ* ≤ m_i b_i, then x_i* = x̂_i = m_i − λ*/b_i, which is non-negative and satisfies the inequality constraint. Conversely, when λ* > m_i b_i, then x̂_i is negative, which does not satisfy the inequality constraint, so x_i* ≠ x̂_i. In this case, the objective function is strictly decreasing with respect to x_i. Consequently, in order for the objective function to be maximized, x_i must be minimized, i.e., x_i* = 0. Therefore, (25) holds.
Lemma 2. Consider the MTES with the quadratic utility function defined in Assumption 1. If Σ_{i=1}^n m_i ≤ C, then λ* ≤ 0.
Proof. (i) Consider the case Σ_{i=1}^n m_i ≤ C. By contradiction, suppose λ* > 0. From equation (25) we yield x_i* < m_i, and therefore, Σ_{i=1}^n x_i* < Σ_{i=1}^n m_i ≤ C, which contradicts the balancing equality (2). Therefore, it follows that λ* ≤ 0.
Lemma 3. Consider two vectors k = (k_1, ..., k_n) and k′ = (k′_1, ..., k′_n), and let k ⪯ k′ denote k_i ≤ k′_i for all i ∈ V. Let m = (m_1, ..., m_n) and b = (b_1, ..., b_n). Suppose λ* is the optimal price associated with the pair of vectors (m, b), and let λ*′ be the optimal price associated with (m′, b′). If m ⪯ m′, b ⪯ b′ and λ* > 0, then λ* ≤ λ*′.
Proof. Suppose λ* > 0. Substituting (25) into the balancing equality (2) yields
Σ_{i=1}^n max{ m_i − λ*/b_i, 0 } = C.   (30)
As m_i and b_i increase, λ* must also increase so as to compensate for the change, ensuring the balancing equality (30) holds. Otherwise, the left-hand side of equality (30) would increase, while the right-hand side remains constant, and so the equality would not hold.
2) Proof of the Theorem: Now, we present the proof of Theorem 4. The quadratic utility function in Assumption 1 is concave, so Proposition 1 holds. We investigate two cases.
Case (ii): m_max > C/n and b_max ≤ nλ†/(n m_max − C). If λ* ≤ 0, then it is socially resilient. Conversely, if λ* > 0, Lemma 3 yields that λ* is monotonically increasing with respect to m_i and b_i, so the highest possible price λ*_max is achieved when m_i = m_max and b_i = b_max for all agents i ∈ V. Consequently, when all agents select m_i = m_max and b_i = b_max, the balancing equality (30) results in n(m_max − λ*_max/b_max) = C, (31) and therefore, λ*_max = b_max(n m_max − C)/n. (32) Equation (32), along with the assumption b_max ≤ nλ†/(n m_max − C) in (13), yields λ*_max ≤ λ†. Since λ* ≤ λ*_max, one obtains λ* ≤ λ†.
Considering (i) and (ii), it follows that as long as (b_max, m_max) is constrained to the set S* in (13), λ* will be socially resilient.

D. Proof of Theorem 5

1) Preliminary Lemmas: Lemma 4. Consider the MTES with the piece-wise linear utility function defined in Assumption 2. Then λ* ≥ 0.

Proof. Note that h(x_i; θ_i) is a non-decreasing concave function. Therefore, according to Proposition 1 in [1], we obtain λ* ≥ 0.
Lemma 5. Consider the MTES with the piece-wise linear utility function defined in Assumption 2. The optimal load allocation x*_i, which is an optimal solution of the optimization problem (1), satisfies: x*_i ≥ φ_i if λ* = 0; x*_i = φ_i if 0 < λ* < β_i; 0 ≤ x*_i ≤ φ_i if λ* = β_i; and x*_i = 0 if λ* > β_i. (33)

Proof. We investigate four cases. Case (i) λ* = 0. In this case, the objective function in (1) equals the utility function h(x_i; θ_i) = min{β_i x_i, φ_i β_i}, shown in Fig. 6. As can be seen, the objective function is strictly increasing in the interval x_i ∈ [0, φ_i], while constant in the interval x_i ∈ [φ_i, ∞). Therefore, the optimal solution is achieved at x*_i ≥ φ_i. Case (ii) 0 < λ* < β_i. In this case, the objective function is strictly increasing in the interval x_i ∈ [0, φ_i], while strictly decreasing in the interval x_i ∈ [φ_i, ∞). Therefore, the optimal solution is achieved at x*_i = φ_i. Case (iii) λ* = β_i. In this case, the objective function is constant in the interval x_i ∈ [0, φ_i], while strictly decreasing in the interval x_i ∈ [φ_i, ∞). Therefore, the optimal solution is achieved at 0 ≤ x*_i ≤ φ_i. Case (iv) λ* > β_i. In this case, the objective function is strictly decreasing in the whole interval x_i ∈ [0, ∞). Therefore, the optimal solution is achieved at x*_i = 0.

Lemma 6. Consider the MTES with the piece-wise linear utility function defined in Assumption 2. If Σ_{i=1}^n φ_i < C, then λ* = 0.

Proof. By contradiction, suppose λ* > 0. According to (33), λ* > 0 yields x*_i ≤ φ_i for i ∈ V, and therefore, Σ_{i=1}^n x*_i ≤ Σ_{i=1}^n φ_i < C, which contradicts the balancing equality Σ_{i=1}^n x*_i = C in (2). Therefore, it follows that λ* = 0.
2) Proof of the Theorem: Now, we present the proof of Theorem 5. From Fig. 6, it is obvious that h(·; θ_i) is concave. Consequently, Proposition 1 holds. Now, we investigate two cases.
Considering (i) and (ii), it follows that as long as (β_max, φ_max) is constrained to the set S* in (15), λ* will be socially resilient.
E. Lemmas for MTES-ST
Lemma 7. Consider the MTES-ST with the quadratic utility function defined in Assumption 1. The optimal load allocation x*_i, which is an optimal solution of the optimization problem (3), is obtained as in (25).
(ii) Consider Σ_{i=1}^n m_i > C. The inequality constraint x_i + e_i ≤ a_i in (3), together with Σ_{i=1}^n e*_i = 0, yields Σ_{i=1}^n x*_i ≤ Σ_{i=1}^n a_i = C. (37) By contradiction, suppose λ* = 0. According to (25), we obtain x*_i = m_i, and therefore, Σ_{i=1}^n x*_i = Σ_{i=1}^n m_i > C, which contradicts the inequality in (37). Therefore, it follows that λ* > 0.

Lemma 9. Consider the MTES-ST with the piece-wise linear utility function defined in Assumption 2. The optimal load allocation x*_i, which is an optimal solution of the optimization problem (3), satisfies (33).
Proof. The inequality constraint x_i + e_i ≤ a_i in the optimization problem (3) can be written as x_i + e_i + s_i = a_i, (38) where s_i ≥ 0 is the slack variable. Substituting e_i = a_i − x_i − s_i into (3) yields an equivalent form of the optimization problem, denoted (39), with decision variables x_i and s_i. Recall that for MTES-ST, we have λ* ≥ 0 [1]. We investigate two cases. Case (i) λ* = 0. Similar to the proof of Lemma 5, case (i), the objective function in (39) is strictly increasing in the interval x_i ∈ [0, φ_i], while constant in the interval x_i ∈ [φ_i, ∞). Therefore, the optimal solution is achieved at x*_i ≥ φ_i. Case (ii) λ* > 0. It is proved in Proposition 2 that for λ* > 0, both MTES-ST and MTES yield the same results. Therefore, this part of the proof is the same as cases (ii), (iii), and (iv) in the proof of Lemma 5.

Proof. It is known that λ* ≥ 0 [1]. We investigate two cases.
(i) Consider Σ_{i=1}^n φ_i < C. By contradiction, if λ* > 0, then the objective function in (39) is strictly decreasing with respect to s_i. Consequently, in order for the objective function to be maximized, s_i must be minimized, i.e., s*_i = 0. Following from (38), we obtain x*_i + e*_i = a_i. Then, considering Σ_{i=1}^n e*_i = 0 in (4), we obtain Σ_{i=1}^n x*_i = C, (40) where C = Σ_{i=1}^n a_i. According to (33), when λ* > 0 then Σ_{i=1}^n x*_i ≤ Σ_{i=1}^n φ_i < C, which contradicts the equality in (40). Therefore, it follows that λ* = 0. | 2021-09-28T01:16:24.258Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "782c302a063b818e699fcb0b79aa12c2e4565573",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "782c302a063b818e699fcb0b79aa12c2e4565573",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
234169641 | pes2o/s2orc | v3-fos-license | Determination of Hiesho among Young Japanese Females using Thermographic Technique
Hiesho is the condition of having a cold sensation in one’s hands or feet. This is a well-known health problem for young Asian females. However, the definition of Hiesho is still controversial. In this study, we aimed to develop a quantitative and non-invasive approach to determine Hiesho. Sixty-three young females participated in this research. Temperature difference (ΔT) between the forehead and foot sole was utilized to define the Hiesho or non-Hiesho condition, and the result was cross-checked with that of a self-reported questionnaire. Central systolic blood pressure and augmentation index were measured to evaluate subjects’ physiological indicators. The results of the questionnaire showed that 49% of young females (31 of 63 people) reported Hiesho. There was a significant difference in ΔT between non-Hiesho and Hiesho (1.85°C and 5.55°C, respectively, p < 0.01). After cross-checking with the self-reported questionnaire, a ΔT of 3.64°C demonstrated acceptable reliability and accuracy for defining Hiesho. Central systolic blood pressure and augmentation index were not different between Hiesho and non-Hiesho. In conclusion, young females with Hiesho had drastically different temperatures at the forehead and foot sole. The temperature difference between the forehead and foot could be used as a quantitative and objective parameter for defining Hiesho.
Introduction
Hiesho, also known as a cold sensation, is frequently observed in Asian females [1][2][3]. The characteristic symptom of Hiesho is a cold feeling, particularly in the hands and feet, at an environmental temperature at which a healthy person does not feel cold [4]. Hiesho is not only related to several health problems in daily life but is also associated with a higher frequency of chronic disease [4,5]. In Western medicine, Hiesho is not perceived as a remarkable symptom [6]. However, in Japan, approximately 30% of the patients used Kampo medicines because of Hiesho [7].
Currently, the diagnosis of Hiesho is still controversial. The most popular method for defining Hiesho is through a questionnaire-based survey called the Terasawa Hiesho Questionnaire [8][9][10]. As this questionnaire is in Japanese, applying this method in other countries is difficult. Nakamura [8] reported that there was a significantly higher skin temperature difference in pregnant women with Hiesho. However, whether a similar skin temperature difference exists in young Hiesho patients is still unknown.
Cutaneous blood flow is controlled by two types of sympathetic nerve systems: cutaneous vasoconstrictor (CVC) and cutaneous vasodilator (CVD). Increased CVC activity results in constriction of cutaneous blood vessels, while increased activity of CVD results in dilation of skin vessels. CVC does not affect cutaneous blood flow at the forehead, whereas peripheral blood flow is affected by CVC activity [11].
We hypothesized that a difference between forehead and foot plantar temperatures exists in young females with Hiesho. To test this hypothesis, we subjectively defined Hiesho using a questionnaire, performed a thermal check using a thermographic technique, and investigated the usefulness of the proposed method in determining Hiesho. The result of this study is expected to provide a quantitative and non-invasive approach to determine Hiesho.
Human Subjects
Sixty-three female students (age: 21.5 ± 1.7 years, height: 157.92 ± 6.07 cm, weight: 51.45 ± 6.16 kg, body mass index: 20.60 ± 2.03 kg/m²) with regular menstrual cycles participated in our experiment. No subject reported a history of health problems, including endocrinopathy, cardiovascular disease, gynecological conditions, and connective tissue disease. Subjects were required to abstain from alcohol and caffeine for at least one day and from any food for at least 2 h before the experiment.
Before the experiment, all subjects were informed of the purpose and methods of this study and provided written informed consent. Subjects were free to withdraw from the study at any time. This study was approved by the Ethics Committee of Osaka University Hospital (No. 19162, August 2019).
Experimental Environment and Methods
The experiment was conducted from 31st July to 21st October 2019, from 10 a.m. to 4 p.m. Considering that body temperature fluctuates depending on the phase of the menstrual cycle, the experiment was conducted only during the follicular phase. The local temperature during our experiment was: maximum 30.1 ± 12.3 °C, minimum 22.8 ± 4.7 °C, and average 26.9 ± 4.4 °C [12]. The indoor temperature was 24.6 ± 0.6 °C, and humidity was 54.5 ± 12.3%.
Before the experiment, we asked the subject to rest for 20 min to adapt to the experimental environment. Then, each subject responded to a Hiesho questionnaire. A thermal check that captures temperatures from the forehead and the foot (at the dominant foot) was utilized to define Hiesho. Finally, we measured blood pressure, pulse wave and augmentation index to investigate the subject's physiological indicators.
Defining Hiesho
In this research, the Terasawa Hiesho Questionnaire was used to define Hiesho. This questionnaire contains twenty questions, comprising three main questions, five related questions, and twelve minor questions [8,10]. A subject who answers positively to (1) two or more main questions, or (2) one main and two or more related questions, or (3) four or more related questions, is defined to be Hiesho; otherwise, a subject is considered to be non-Hiesho. Figure 1 shows the proposed method for defining Hiesho using a quantitative approach. In this study, an infrared thermometer (Fluke Ti450) was utilized to measure skin temperature. This thermal equipment is loaded with MultiSharp Focus™, which provides sharp focusing of one image so that we can acquire high-quality temperature data from the thermal graph. The resolution of this thermometer is 320 × 240 pixels, and the noise equivalent temperature difference is 0.03 °C [13]. The thermometer used in this study has been proved to be highly reliable equipment for collecting temperature data [14].
We asked the subject to sit still on a chair, with their legs stretched on the chair, and ensured that their feet were perpendicular to the ground. Then, we captured a thermal graph image of their entire body. The forehead was defined as the area above the subject's eyebrows and below the hairline. The temperature difference (ΔT) was calculated as follows: ΔT = T_forehead − T_plantar (1) where T_forehead represents the average forehead temperature and T_plantar represents the average plantar temperature at the dominant foot.
To define the optimal ΔT, the thermal check result was verified and evaluated in terms of the sensitivity and specificity against the questionnaire results, as well as by analysis of the area under the receiver operating characteristic (ROC) curve (AUC) for the validation data.
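The study performed this step in MedCalc; purely as an illustration, the sketch below reproduces the same kind of ROC analysis in Python on synthetic ΔT values (loosely matching the group means reported later), using scikit-learn and Youden's J statistic as one common way to pick an optimal cut-off. The data, seed, and the Youden criterion are assumptions, not the authors' actual procedure or data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic Delta-T values: 32 non-Hiesho and 31 Hiesho subjects (labels from the questionnaire)
dT = np.concatenate([rng.normal(1.85, 1.5, 32), rng.normal(5.55, 2.0, 31)])
y = np.concatenate([np.zeros(32), np.ones(31)])            # 1 = Hiesho by questionnaire

fpr, tpr, thresholds = roc_curve(y, dT)                     # Delta-T itself is the classifier score
auc = roc_auc_score(y, dT)
best = np.argmax(tpr - fpr)                                 # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}")
print(f"cut-off = {thresholds[best]:.2f} degC, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```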
Physiological Indicators
A non-invasive automated blood pressure monitor with augmentation index function (Omron HEM-9000AI; Omron Healthcare, Kyoto, Japan) was used to measure central systolic blood pressure (cSBP), pulse wave and augmentation index (AI) [15]. The measurement setup is shown in Fig. 2. Systolic blood pressure, diastolic blood pressure, cSBP, pulse pressure, pulse rate, and AI were measured in each subject.
Data Analysis
Average forehead and plantar temperatures were utilized for calculating the temperature difference. The results of the Hiesho questionnaire and ΔT for each subject were used to calculate the optimal cut-off value. ROC analyses were conducted using the statistical software MedCalc® (version 19.2.5). Disease prevalence was calculated as the percentage of Hiesho based on the questionnaire results. Student's t-test was used to test the significance of differences between two groups, with the significance level set at 5%. The t-test was performed using JASP (version 0.13.1.0, The Netherlands).
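The group comparisons themselves were run in JASP; the snippet below is only a hedged Python equivalent showing how a two-sample Student's t-test and a Cohen's d effect size (as reported in Tables 1 and 2) can be computed, again on synthetic numbers rather than the study data.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(1)
non_hiesho = rng.normal(1.85, 1.5, 32)                      # synthetic Delta-T values
hiesho = rng.normal(5.55, 2.0, 31)

t, p = stats.ttest_ind(hiesho, non_hiesho)                  # Student's t-test (equal variances)
print(f"t = {t:.2f}, p = {p:.4g}, Cohen's d = {cohens_d(hiesho, non_hiesho):.2f}")
```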
Results
According to the Terasawa Hiesho Questionnaire, 49.2% of the subjects (31 of 63 people) were defined to have Hiesho. Figure 3(a) shows examples of thermal images of a non-Hiesho and a Hiesho subject. Dark blue indicates low temperature, while dark red represents high temperature. Compared with healthy subjects (non-Hiesho group at the left), the colors of the feet and the forehead in the Hiesho group (right image) were totally different, indicating that the temperature at the foot sole was much lower than that at the forehead. Figure 3(b) shows the results of the ROC curve used to define ΔT and Hiesho. The optimal ΔT was 3.64 °C, with sensitivity 77.4% and specificity 71.9%. The ΔT of 3.64 °C showed acceptable accuracy for defining Hiesho (AUC = 0.753, standard error: 0.061, 95% confidence interval: 0.634 to 0.857, p < 0.001).
The subjects' basic information and their thermal check results are summarized in Table 1. There were no significant differences in height, weight, and BMI between the two groups (p > 0.05). There was a difference in age between the Hiesho and non-Hiesho groups (22.0 ± 1.5 versus 21.1 ± 1.8 years, p < 0.05); Cohen's d suggested that this was a medium effect size. Table 2 shows the results of the thermal check. For the Hiesho group, the average forehead temperature was 35.08 °C, which was higher than that in the non-Hiesho group (p < 0.01). On the contrary, non-Hiesho young females had a higher plantar temperature than those with Hiesho (32.60 versus 29.51 °C). The ΔT between the forehead and plantar surface showed a significant difference between the two groups (1.85 °C in the non-Hiesho group versus 5.55 °C in the Hiesho group, p < 0.01). Cohen's d indicated that these were large effects. Table 3 shows the results of the physiological indicators. There were no significant differences between the non-Hiesho and Hiesho groups in these indicators.
Discussion
In this study, we defined Hiesho among young females based on a questionnaire. As a result, 31 of 63 people (49%) were defined to have Hiesho. Previous studies also reported that approximately half of young women in Japan suffered from Hiesho [16][17][18], similar to our finding. However, it has to be emphasized that body temperature in Hiesho patients varies in different seasons [19], and performing experiments in the same season is recommended. The age of the Hiesho group was higher than that of the non-Hiesho group (22 versus 21.1 years), even though the difference was less than one year. Most of the subjects aged 22 years were in their fourth year of university, and were facing job hunting and undergoing hospital training. Compared to more junior students, they may have had more mental stress, irregular diet, and lack of sleep, which were related to Hiesho [20]. The average skin temperature at the forehead in the Hiesho group (35.08 °C) was significantly higher than that in the non-Hiesho group (34.43 °C). A similar result was also found in a previous study [8], in which Nakamura observed that Hiesho patients had a higher core temperature at the forehead, resulting in higher skin temperature. During our experiment, the indoor temperature was set at approximately 25 °C. Under similar conditions, the forehead skin temperature of females ranged from 34 to 35 °C [21]. However, owing to lower metabolism, heat production in the Hiesho group was lower than that in the non-Hiesho group [22]. Heat loss must be decreased to maintain a balance between heat production and heat loss to keep a constant core temperature. Accordingly, cutaneous vasoconstriction reduced blood flow at the extremities, resulting in a low skin temperature but a higher core temperature [23]. We considered that this was the reason why Hiesho patients felt cold in an environment where healthy subjects did not.
Nakamura [20] summarized two main factors that result in Hiesho. The first is internal factors such as sympathetic nerve activity, blood flow, female hormones, vasomotor nerves, and yin-yang balance. The second is external factors including mental stress, smoking, overworking, and sleep deprivation. The exact mechanism of Hiesho is still unclear, but some studies have indicated that Hiesho might be a heritable phenotype [1] and associated with hypersensitivity to the surrounding environment [24,25]. Although Hiesho is not perceived as a remarkable symptom, a new concept that deals with a cold feeling at the extremities, called Flammer syndrome (FS), has been observed recently in Western medicine [6]. According to Flammer et al. [26], FS describes the phenotype of people with a predisposition for an altered reaction of the blood vessels to stimuli such as coldness, emotional stress, and hypoxia. Symptoms of patients with FS seem to be consistent with those of females with Hiesho in Japan. Similar to a statistical report [27], FS occurs more often in females than males in Europe [28,29]. We acknowledged the significance of investigating the reasons for Hiesho from the viewpoints of both Western and Eastern medicine. In this research, we proposed a quantitative method for defining Hiesho by the skin temperature difference between the forehead and the foot sole. After cross-checking with the result of a subjective questionnaire, our proposed method had a sensitivity of 77.4% and a specificity of 71.9%. Meanwhile, measuring temperature differences using an infrared thermometer provided a non-invasive method to discriminate Hiesho rapidly and with acceptable accuracy. Recently, several researchers have sought to solve this health problem. A previous study proposed to measure the blood flow from the radial artery at the wrist of Hiesho patients by studying Doppler ultrasound peripheral vascular flow [30]. In this six-month follow-up study, the authors observed that blood flow could be an effective approach for monitoring the personal health of Hiesho patients. However, the difference in blood flow between Hiesho patients and healthy subjects was not known because they did not perform a control experiment and there were only four subjects in their study.
In this study, we measured blood pressure, pulse wave, and augmentation index to evaluate physiological indicators. Although there were no differences in systolic and diastolic blood pressures between the non-Hiesho and Hiesho groups, central blood pressure showed a higher trend in subjects who had Hiesho. Previous studies also reported no difference in blood pressure between subjects with Hiesho and healthy subjects [10,31]. It was interesting to notice that pulse rates in both the non-Hiesho and Hiesho groups in this study (72.3 BPM in Hiesho and 70.4 BPM in non-Hiesho) were higher than those in a previous report (60 BPM in Hiesho and approximately 65 BPM in the non-Hiesho group) [10]. Meanwhile, it seemed that young females with Hiesho had higher heart rates. Ogata et al. [10] suggested that the higher heart rate in the Hiesho group was due to higher sympathetic nerve activity, which can be analyzed through heart rate variability (HRV). However, we did not perform HRV analysis in this study.
It has been shown that the augmentation index is closely related to arterial stiffness, and a positive relationship between age and AI has also been reported [32]. We also observed a trend that AI was higher in the Hiesho than in the non-Hiesho group, and age was different between the two groups. Kohara et al. [33] reported that AI was approximately 65% in women in their 20s, which was higher than that of both the Hiesho and healthy groups in our study (52.5% and 56.1%, respectively). We speculated that because all subjects recruited in our study were in their early 20s, their peripheral blood vessels were softer than the average. There were several limitations in this study. According to an epidemiological survey [34], the percentage of women who self-reported Hiesho increased from 30% (under 40 years) to 50% (50 years or older). Hiesho seems to be an age-related health problem. However, only young females in their early twenties were recruited for this research. As physical and psychological conditions change continuously with age in females, examining more subjects across different age groups would be necessary in a future study. Meanwhile, as it has been reported that 67% of pregnant women have Hiesho [8] and that Hiesho might be a heritable phenotype [1], a follow-up study on this topic could provide more information on Hiesho and its pathogeny.
In this study, all subjects were Japanese. Hiesho is also known in other countries [1][2][3]. Due to differences in habits and living conditions, the causes of Hiesho may differ, and thus it is difficult to generalize the findings of this research. Meanwhile, 153 men out of 10,000 from the general population in Japan were reported to have Hiesho [27]. As such, Hiesho affects both genders. We did not recruit male subjects in this study.
Conclusion
In this paper, we proposed an objective and quantitative approach to define Hiesho. Thermal check results indicate that young females with Hiesho have a considerable temperature difference between the forehead and the foot sole. A difference in temperature of 3.64 °C between the forehead and the foot sole can be utilized as the threshold for distinguishing Hiesho. | 2021-05-11T00:07:31.448Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "627c3b86ff021010a0e2ffff67329a2753f4c9d3",
"oa_license": "CCBYNC",
"oa_url": "https://www.jstage.jst.go.jp/article/abe/10/0/10_10_11/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a7a5d40dd3368caa0b8441c296a38713f3b58176",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16591982 | pes2o/s2orc | v3-fos-license | Birds and Land Classes in Young Forested Landscapes
In the Mississippi Coastal Plain of the southeastern United States, we explored relationships among bird species and vegetation types and landscape characteristics at four different scales. We modeled abundance of priority avian species from Breeding Bird Surveys using land class metrics at 0.24, 1, 3, and 5-km extents. Our modeling method was logistic regression and model selection was based on Akaike's Information Criteria and validation with reserved data. Northern bobwhite (Colinus virginianus), red-headed woodpecker (Melanerpes erythrocephalus), northern parula (Parula americana), Swainson's warbler (Limnothlypis swainsonii), prairie warbler (Dendroica discolor), hooded warbler (Wilsonia citrina), and brown-headed cowbird (Molothrus ater) had models containing positive area or core area variables. White-eyed vireo (Vireo griseus) and gray catbird (Dumetella carolinensis) had models with a combination of area and edge associations at different scales. Acadian flycatcher (Empidonax virescens), red-bellied woodpecker (Melanerpes carolinus), wood thrush (Hylocichla mustelina), and yellow-breasted chat (Icteria virens) had positive edge density models. Modeling at different scales produced more complete habitat associations for most species, and landscape variables were more influential at larger extents than the smallest extent. Although Mississippi is heavily forested, the landscape is unexpectedly fragmented, with small areal extents of vegetation types. Managers should seek to provide large extents of a variety of habitats, including historically representative vegetation types such as low density pine, to support persistence of a complete suite of avian species.
INTRODUCTION
When land use transforms a landscape, smaller patches of original vegetation types generally lose characteristic species. Studies have shown that landscape characteristics such as area and isolation affect interior bird presence, as well as pairing and reproductive success, in eastern and mid-western forests fragmented by agriculture and urbanization [1,2]. Some area-sensitive species will occupy smaller patch sizes, but may incur high demographic penalties through brood predation and parasitism [3] or reduced food abundance [4]. Landscape changes that elevate nest predator abundance probably contribute to the processes that produce avian declines [5] because 1) predation is the primary source of nest mortality for most songbirds [6] and 2) previous reproductive success ultimately influences habitat selection at local [7] and landscape scales.
Any type or state of forest, when not containing agricultural and developed lands that support generalist predators and brood parasites (e.g., brown-headed cowbirds, Molothrus ater), may alleviate some fragmentation issues, including nest predation and parasitism [5,8], as opposed to other land uses. Some studies in forests of the southeastern United States have shown mitigating effects of managed forests on nest predation and parasitism. Sargent et al. [9] found that nest predation in South Carolina was greater in hardwood stands enclosed by agricultural fields than in hardwood stands enclosed by mature pine forests, where predation rates did not differ between edge and interior nests between or within stand types. They observed that the low edge contrast in pine-enclosed stands appeared to attract fewer nest predators. In managed pine forests of South Carolina, Wigley et al. [10] reported little brood parasitism for two species and Hazler et al. [11] found that edge did not depress Acadian flycatcher (Empidonax virescens) nest survival. In regenerating site-prepared pine plantations (2 to 6 years old; 2 to 57 ha) of South Carolina, Krementz and Christie [12] discerned no area effect on species richness or juvenile to adult ratios in birds captured in mist nets. Also in South Carolina, in hardwoods of differing areal extents enclosed within a managed pine matrix, Turner et al. [13] found most avian species were present in all hardwood areal classes, regardless of size. Turner et al. [13] concluded that pine plantations with some hardwood midstory and interspersed hardwood patches sustained bird species normally associated with hardwood forests.
In contrast, other studies have demonstrated relationships between characteristics of forest landscapes (e.g., stand size, edge) and measures related to bird reproductive success (e.g., nest predation and parasitism). In northern Mississippi of the southeastern United States, Aquilani and Brewer [14] found that wood thrush (Hylocichla mustelina) nest success was greatest in large, mature oak-pine mixed patches enclosed by pine plantations and other forest types and lowest near clearcut edges. Predation appeared to cause all but one nest failure, which was due to parasitism. In northeastern Alabama hardwood and mixed forest fragments, Keyser et al. [15] also reported that predation rates decreased with increasing stand size. In South Carolina pines, edge increased nest predation and negatively affected indigo bunting (Passerina cyanea) nesting success [16].
Landscape composition and configuration potentially affect specialists, including species of early- and late-successional vegetation and open (i.e., fire-dependent) ecosystems, more than generalists, or edge and adaptable species [17,18]. Also, land use that results in loss of type or structure may affect avian presence. For example, in the southeastern United States, loss of mature pine forests that generally contain specific stand elements (e.g., cavity trees, open midstory) can negatively impact foraging opportunities and dispersal of federally endangered Red-cockaded Woodpeckers (Picoides borealis), and ultimately group size and reproduction [19,20]. Woodpecker habitat isolation, combined with population isolation, can make it difficult for woodpeckers to locate breeding clusters. Likewise, patch isolation may prevent declining Bachman's sparrows (Aimophila aestivalis) from detecting suitable habitat, resulting in consequent physiological or reproductive costs, in South Carolina's managed pine woodlands [21].
Avian abundance may be connected to areal extent, abundance, or age of vegetation types. In addition, habitat selection involves hierarchical choices at different scales [22,23], and the selection process may vary with geographic variation [24]. Identification of avian habitat relationships at different scales and regions is needed, both for management guidelines and for research, particularly in areas such as Mississippi, a young forested landscape in the southeastern United States where there has been little research at larger extents.
Our aim was to develop models at multiple scales to predict abundance of avian species of concern in Coastal Plain Mississippi. We explored 1) relationships between bird abundance and landscape (i.e., land type and edge and area associations) variables and 2) how scale affects these relationships. We extracted landscape information at 0.24, 1, 3, and 5-km extents from a land cover layer developed by MS-GAP and then used FRAGSTATS to calculate landscape metrics. We then developed logistic regression models for bird abundance from Breeding Bird Surveys using landscape variables at different scales.
Study Area
Most of Mississippi is part of the Coastal Plain, the largest physiographic region in the Southeastern United States. The Coastal Plain is characterized as humid subtropical, with mild winters, hot summers, long growing seasons of 180 to 320 days [25], and high annual precipitation of 114 to 162 cm [26]. Natural disturbances include fire, wind, hurricanes and tornados, flooding, and ice storms. Forestlands cover 7.5 million ha, or 62% of Mississippi [27], and the state's forest base continues to grow [28]. About 2.5 million ha are softwoods, half established in pine plantations, and most planted with loblolly pine (Pinus taeda). Approximately 2.8 million ha, or 37% of Mississippi's forested area, is in the seedling-sapling stage (less than 12.7 cm diameter; [28]). Pine tracts in later seral stages are limited, and land use conversion and pine plantation management likely will continue this situation; 94% of southern planted pine is less than 33 years old, whereas 53% of natural pine is less than 33 years old [28].
Overview of Analysis Steps
For our analysis, we correlated bird species abundance from Breeding Bird Surveys (BBS; [29]) with landscape metrics, such as amount of edge and area, for six different vegetation classes at 0.24, 1, 3, and 5-km extents. We extracted vegetation classes at four scales from a land cover layer produced by the Mississippi Gap Analysis Project (MS-GAP; [30]). We used a common modeling approach, logistic regression, to model low and high abundance for each of 22 bird species of concern using the land class metrics as predictor variables. We selected final models based on 1) Akaike's Information Criteria, a measure of goodness of fit, and 2) predictive performance by models of bird abundance for reserved data that was not used in development of models.
Data Sets
The BBS routes are approximately 40-km long and consist of 50 points that are 0.8 km apart. Volunteers record birds within a 400-m radius of points during three minutes. From the BBS database, we selected all routes, totaling 17, in or bordering the Mississippi Coastal Plain with 3-5 survey years from 1989 to 1995 [31]. Most routes had 5 years of data from 1991-1995. Only surveys conducted during acceptable conditions were included [29]. We divided each of the 17 routes into 5, 10-stop partial route segments about 8 km in length. From each route, we selected the straightest (i.e., non-overlapping) 2 partial routes that were separated by at least 2 segments, approximately 16 km apart. We did this to increase our sample sizes and provide distance in time and space among samples.
The MS-GAP classified Landsat Thematic Mapper satellite imagery from 1991-1993 using 30 m resolution. There are 38 thematic classes with 76.6% classification accuracy for vegetation classes, based on a 1996 accuracy assessment. We reclassified the MS-GAP into 6 vegetation classes for use in analysis: 1) hardwoods (high density, medium density, and bottomland hardwoods), 2) mixed forest, 3) low density pine (low density pine with open canopy and pine savannah), 4) medium density pine (12 to 20 year old pole-sized pine, typically thinned with increasingly more understory vegetation), 5) high density pine (5 to 12 year old pine, little ground layer vegetation), and 6) herbaceous (low herbaceous vegetation, grassy/pasture/range, and recently clearcut forests). We clipped the reclassified grid with buffered extents of 0.24, 1, 3, and 5-km around each partial BBS route, creating grids of each partial route at 4 buffer distances. At the 3 and 5-km extents, we excluded one partial route that extended beyond Mississippi's borders. We chose these buffer distances because a buffered extent of 0.24 km is similar in area to stand-scale studies, whereas a 5 km buffer was the maximum extent before losing information off the Mississippi border.
We used FRAGSTATS [32] to calculate 10 spatial metrics for each class type (Appendix A), producing means for each buffered partial route extent. Primary metrics for each class included mean patch area (AREA; depends on patch size and number), mean core area (CORE; mean core area of patch, excluding a 90 m buffer from edge), core percentage of landscape (CPLAND; proportional abundance of class type core area, excluding a 90 m buffer from edge), and mean area edge density (ED; edge length of patch standardized by area). Supporting factors consisted of patch density (PD; patch number of class type, standardized by area), percentage of landscape (PLAND; proportional abundance of class type, standardized by area), cohesion (COHESION; connectivity of class type), interspersion and juxtaposition index (IJI; class type intermixing), shape index (SHAPE; average patch shape, compared to a maximally compact square standard equal to one), and contiguity (CONTIG; patch boundary configuration).
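The metrics themselves were computed in FRAGSTATS. Purely to illustrate what two of the simpler class metrics measure, the sketch below computes a percentage-of-landscape (PLAND) and an edge density (ED) figure for one class on a 30 m categorical raster. It counts only shared cell boundaries between the class and other classes (one of several FRAGSTATS edge conventions) and ignores core-area buffering, so it is a simplified stand-in, not a re-implementation of FRAGSTATS.

```python
import numpy as np

def pland_and_ed(grid, cls, cell_size=30.0):
    """PLAND (% of landscape) and ED (m of class edge per ha of landscape) for one class."""
    mask = grid == cls
    pland = 100.0 * mask.sum() / grid.size

    # Edge length: cell sides where a class cell touches a cell of any other class.
    horiz = np.logical_xor(mask[:, :-1], mask[:, 1:]).sum()
    vert = np.logical_xor(mask[:-1, :], mask[1:, :]).sum()
    edge_m = (horiz + vert) * cell_size

    area_ha = grid.size * cell_size ** 2 / 10_000.0
    return pland, edge_m / area_ha

grid = np.random.default_rng(2).integers(1, 7, size=(200, 200))   # six hypothetical classes
print(pland_and_ed(grid, cls=1))
```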
We chose 22 bird species to model, all scoring 19 or greater for the East Gulf Coastal Plain by Partners in Flight ([33]; Appendix B). Partners in Flight formulated a system to assess conservation status of North American bird species, which allows for identification of priority species for conservation [34]. Species were eliminated if they were poorly sampled by point count methods (e.g., waterfowl, seabirds, shorebirds, raptors, nocturnal birds), or if they were extremely rare. We also included brown-headed cowbird, a nest parasite, and blue jay (Cyanocitta cristata), a nest predator, because of their possible impact on declining species, for a total of 24 species.
We averaged BBS counts for each species by partial routes and years to calculate a species mean (Appendix B). We then categorized routes as either low abundance, for counts less than the mean, or high abundance, for counts greater than or equal to the mean, for each species. However, 0.5 was the minimum value that we allowed for the high abundance category, in cases where the mean was below 0.5.
Statistical Analyses
Our predictor variables were land cover class variables at each extent and our response variable was lesser or greater bird abundance of every species. We used 24 partial route extents for modeling, while holding 10 routes in reserve for validation. First, we ran t-tests (PROC TTEST; SAS software, v. 9.1, Cary, NC, USA) between land cover class variables of lesser and greater bird abundance to reduce the number of spatial metrics. We retained variables with P-values up to 0.1. We removed variables with ≥ 70% correlation, keeping them in the following order: CORE, CPLAND, AREA, ED, COHESION, PLAND, IJI, PD, CONTIG, and SHAPE. Then we identified the 5 best one- to five-variable models for each species based on logistic regression with score selection (PROC LOGISTIC). We assessed these candidate models with Akaike's Information Criteria corrected for small sample size (AICc). We ranked the models from least to greatest AICc values, and kept as competing models all models that had an AICc value within 2.0 of the least value model. We also removed models if the standard error for a parameter estimate was greater than or equal to the value of the parameter estimate.
To evaluate model accuracy and select the most accurate of the competing models, we used the competing models to predict lesser or greater abundance in 10 model validation routes not used in model formulation (PROC LOGISTIC). We classified model fit as correct for a route if the predicted probability was greater than or equal to 50% and the bird abundance mean fell within the higher abundance category, or alternatively if the probability was less than 50% and the bird abundance mean was within the lesser abundance category. Final best model selection included models with the best prediction rate, with at least 7 out of 10 routes correct. For each extent, we removed larger models if a nested smaller model predicted equally well, and removed smaller model subsets of larger models that had better prediction rates.
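The original modeling was done in SAS (PROC LOGISTIC with score selection). The sketch below shows, in Python with statsmodels, the two selection ingredients described above: ranking small logistic models by AICc and scoring competing models by their classification rate on reserved routes. The variable names, synthetic data, and brute-force subset search stand in for SAS's score selection and are assumptions rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from itertools import combinations

def aicc(fit, n):
    k = fit.df_model + 1                                     # parameters including the intercept
    return fit.aic + 2 * k * (k + 1) / (n - k - 1)

def rank_models(X, y, max_vars=2):
    """Fit logistic models for every small subset of predictors and rank them by AICc."""
    ranked = []
    for size in range(1, max_vars + 1):
        for subset in combinations(X.columns, size):
            fit = sm.Logit(y, sm.add_constant(X[list(subset)])).fit(disp=0)
            ranked.append((aicc(fit, len(y)), subset, fit))
    return sorted(ranked, key=lambda r: r[0])

def validation_rate(fit, subset, X_hold, y_hold):
    """Share of reserved routes whose lesser/greater abundance class is predicted correctly."""
    p = fit.predict(sm.add_constant(X_hold[list(subset)], has_constant="add"))
    return ((p >= 0.5).astype(int) == y_hold).mean()

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(24, 4)), columns=["CORE", "ED", "PLAND", "PD"])
y = pd.Series((X["CORE"] + rng.normal(size=24) > 0).astype(int))
best_aicc, best_vars, best_fit = rank_models(X, y)[0]
print(best_vars, round(best_aicc, 2))
```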
RESULTS
Seven species had models that primarily indicated habitat area or core associations (Table 1). Greater abundance of northern bobwhite was associated with herbaceous vegetation core areas at 3 and 5 km extents. Red-headed woodpecker greater abundance was related to medium density pine area at the 0.24 km extent. Models for northern parula incorporated hardwood core areas at 1 and 3 km. Swainson's warbler was associated with hardwood core metrics at 3 km. Prairie warbler greater abundance involved medium density pine core area. The model for hooded warbler encompassed medium density pine core area. Brown-headed cowbirds were tied to herbaceous vegetation area.
A mixture of area and edge variables developed for two species. White-eyed vireo abundance was related to high density pine edge density at 1 and 3 km, as well as medium density pine area and hardwood core area. The best models for gray catbird included low density pine and medium density pine for all scales and variables of core area, percent of landscape, and edge density.
The four species that had positive edge density models, unmixed with positive area or core variables, were associated with hardwood or medium density pine. Models for red-bellied woodpecker incorporated hardwood edge density. For Acadian flycatcher, hardwood edge density was a positive model variable. Wood thrush greater abundance was linked to hardwood edge density. Medium density pine edge density and hardwood patch density were model variables for yellow-breasted chats.
Two species did not have models with area or edge metrics directly. The model for Carolina chickadee (Poecile carolinensis) was hardwood patch density. Kentucky warbler (Oporornis formosus) greater abundance was tied to medium density pine patch density. Models for yellow-throated vireo (Vireo flavifrons), prothonotary warbler (Protonotaria citrea), and summer tanager (Piranga rubra) only contained negative variables. Great crested flycatcher (Myiarchus crinitus), eastern wood-pewee (Contopus virens), and blue jay did not have models that met the minimum 70% validation rate, whereas models for brown-headed nuthatch (Sitta pusilla), orchard oriole (Icterus spurius), and field sparrow (Spizella pusilla) did not have reasonable parameter estimates.
DISCUSSION
The most common land classes, which were herbaceous, hardwood, and medium density pine at 65% of the landscape, also had high edge density (Appendix A). Only the hardwood and herbaceous vegetation types contained unfragmented areas, whereas the hardwoods alone retained at least some minor core area. Therefore, these spatial metrics were either common (i.e., edge) or rare (i.e., core), and associations could have reflected relative availability. Despite this, area was important for northern bobwhite, red-headed woodpecker, northern parula, Swainson's warbler, prairie warbler, and hooded warbler, corroborating previous research [35][36][37]. There is evidence that white-eyed vireo is area-sensitive [36], although our models also indicated that scale and vegetation type influenced spatial metric relationships, and that edge may play a role in habitat for these species that often use shrub borders. Likewise, red-bellied woodpecker, Acadian flycatcher, and wood thrush may be area-sensitive [36,38]; nonetheless, our study suggested that edge also may be part of their habitat at some scales. Conversely, brown-headed cowbirds were linked with herbaceous vegetation area, which is their feeding zone, although edges and core areas may provide breeding opportunities [39].
Model variables incorporated both vegetation type and spatial metrics, which prevents direct interpretation of either variable. However, the relative weight of vegetation type may increase when area and edge density of the same vegetation type are present, particularly in the same model or at least at the same scale. For example, gray catbird models contained low density and medium density pine variables at different scales, strengthening the overall importance of the pine vegetation type.
The models correctly identified most bird-vegetation type associations; nevertheless, there were some vegetation types missing from models. For instance, rather than the herbaceous vegetation type for species that use shrub, such as prairie warbler, white-eyed vireo, yellow-breasted chat, hooded warbler, or gray catbird, or open lightly-treed areas for red-headed woodpecker, high and medium pine densities were model vegetation types. If these types represent regenerating stands before canopy closure or non-treed borders, then they are an appropriate match, but it is not clear that either high or medium pine density represents that structure, according to Gap documentation. In any event, although the models predicted well without the more likely vegetation type, the absence of the characteristic vegetation type may indicate errors in classification, spurious results from modeling, or bias in the BBS.
The models do have limitations, beginning with the data sets on which they are based. The primary drawback with BBS is that they occur alongside roads, which may limit inferences. However, there are roads throughout Mississippi, where there may be only 1200 ha of roadless areas [40]. Road effects also may be contained within 50-100 m [41], whereas the point count stations extend to 450 m. In addition, even though each variable went through a 4-step process to stay in the model, remaining variables may match bird abundance without actually influencing bird abundance. We did not examine reproductive success, and thus density may be uncoupled from habitat quality in some cases. Nevertheless, density can be correlated with reproductive success [42].
Scale selection influenced the importance of variables. In this study, all extents were well-represented, except the more local 0.24 km extent associated with woodpeckers and a flycatcher, perhaps reinforcing that stand elements and microhabitat gain importance at the smaller site scale [43]. In addition, the nature of modeling is that variables that best match the scale of the study extent will become more influential. In contrast to our models, using buffers of 50, 100, 500, 1,000, 2,500, and 5,000 m, Mayer and Cameron [44] found that the narrowest and broadest scales explain greater variance than intermediate scales.
Model variables for some species persisted throughout the extents, supporting their importance, but in most cases the inclusion of all four extents contributed to a more complete habitat picture than would one extent alone. Desrochers et al. [45] also found varying area sensitivity by scale, with increasing area-sensitivity at regional scales of 12-24 km, which was beyond the extents of this study. Area sensitivity is emergent at the landscape scale and, therefore, should become more apparent at larger scales.
The landscape metric variables used in this study may explain avian habitat selection equally as well as stand elements, given a large enough study extent. Howell et al. [46] detected strong landscape variable associations with bird species in both fragmented (340-880 ha) and continuous Missouri forests. Landscape metrics best predicted abundance of 70% of bird species compared to 30% for local variables. Mitchell et al. [47,48] determined that landscape models generally are as effective as stand-scale models, especially for migrants and specialists, in southeastern managed forests. Late-successional and area-sensitive species as well as class type specialists potentially are more affected by landscape than other avian species, for which stand-scale variables may determine species composition [49,50]. Indeed, the species without models, great crested flycatcher, eastern wood-pewee, and blue jay, are all generalists [51][52][53].
CONCLUSION
In the Coastal Plain of the southeastern United States, there are declining bird species as well as habitat conversion due to land use [54]. Areal extent of land classes was an influential variable in bird abundance models, but core and areal extents for all vegetation types were low or nonexistent, indicating that heavily-forested Mississippi may be surprisingly fragmented. Fragmentation can increase interspecific interactions, including competition with edge and generalist species, predation of adults and young, and avian nest parasitism by brown-headed cowbirds, although surrounding lands may buffer these effects. Land planners should focus on increasing the patch size of vegetation types while minimizing high contrast borders.
Vegetation types now common, such as medium density pine, were positively associated with numerous species of conservation concern. However, low density pine savannas, which historically covered the Coastal Plain, provide primary habitat for vulnerable species, including some of the modeled species and species too rare to model in this study but of extreme management concern, such as red-cockaded woodpecker (Picoides borealis) and Bachman's sparrow (Aimophila aestivalis). Regional land management goals should include increasing abundance of rare vegetation types, such as low density pine, to ensure long-term stability of plant and animal communities.
Sauer JR, Hines JE, Fallon J. The North American Breeding Bird Survey, results and analysis 1966-2005. Version 6.2.2006. Laurel, Maryland: USGS Patuxent Wildlife Research Center; 2006. | 2018-05-08T18:07:34.117Z | 0001-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "48f3981fbe945ea74b08f3e2c3eb63027288b19d",
"oa_license": "CCBY",
"oa_url": "https://openornithologyjournal.com/VOLUME/6/PAGE/1/PDF/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "48f3981fbe945ea74b08f3e2c3eb63027288b19d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
267234842 | pes2o/s2orc | v3-fos-license | Microscopic extrathyroidal extension does not affect the prognosis of patients with papillary thyroid carcinoma: A propensity score matching analysis
Background Extrathyroidal extension (ETE) in papillary thyroid carcinoma (PTC) can be divided into two categories based on different degrees of invasion: microscopic ETE (micro-ETE) and macroscopic ETE (macro-ETE). At present, there is a consensus that macro-ETE significantly affects PTC prognosis, while the prognostic significance of micro-ETE remains controversial. Methods The clinicopathological and follow-up data for PTC patients who underwent surgical treatment at the Hangzhou First People's Hospital between 2015 and 2018 were retrospectively analyzed. According to the degree of ETE, patients were divided into three groups: non-ETE, micro-ETE and macro-ETE. Cox regression analysis was performed to evaluate the effect of ETE on recurrence-free survival (RFS). The propensity score matching (PSM) method was used to reduce the interference of confounding factors, and Kaplan-Meier curves were utilized to compare the RFS. Results Both micro- and macro-ETE were associated with some aggressive tumor features, including tumor size, multifocality, and lymph node metastasis. Univariate and multivariate Cox regression analyses showed that macro-ETE was an independent risk factor for recurrence, while micro-ETE was not associated with recurrence. The K-M curves showed that RFS for micro-ETE and non-ETE were not statistically different before and after PSM, while RFS for macro-ETE was significantly shorter than that for non-ETE. Conclusion The presence of micro-ETE in PTC did not affect prognosis of patients, suggesting that its treatment should be consistent with the treatment for intrathyroidal tumors. The surgical method and the necessity for radioiodine therapy should be carefully evaluated to reduce overtreatment.
Introduction
Extrathyroidal extension (ETE) is an important factor affecting tumor invasiveness, and is directly related to the choice of surgical methods and postoperative treatment [1]. ETE can be divided into two categories: microscopic ETE (micro-ETE) and macroscopic ETE (macro-ETE). At present, there is no dispute that macro-ETE affects the prognosis and requires total thyroidectomy and radioiodine (RAI) therapy [2]. Micro-ETE is generally considered to be related to some tumor characteristics, such as size, multifocality, and lymph node metastasis, but its prognostic significance remains controversial.
In the eighth edition of the American Joint Commission on Cancer (AJCC) staging system, micro-ETE was removed from T3 staging, suggesting that it does not have an impact on prognosis [3]. In contrast, the American Thyroid Association (ATA) recurrence risk stratification classifies patients with micro-ETE as being at intermediate risk, recommending that they receive more aggressive treatment and that RAI therapy be considered [4]. Similarly, some previous retrospective studies have also demonstrated contradictory results, and the prognostic significance of micro-ETE is still uncertain [5][6][7].
However, prior studies were all carried out under natural conditions and were affected by different degrees of confounding factors, such as age, sex, and lymph node metastasis. The present study utilized the propensity score matching (PSM) method to eliminate the influence of confounding factors and further evaluate the prognostic significance of micro-ETE in PTC. This may provide more updated and comprehensive evidence for the understanding and clinical treatment of micro-ETE.
Data collection
The electronic medical records of all pathologically confirmed PTC patients treated between January 2015 and December 2018 in the Department of Surgical Oncology of Hangzhou First People's Hospital were reviewed. The exclusion criteria were as follows: 1. lack of complete pathological data; 2. combination with other malignant tumors; 3. second operation; 4. mixed PTC (one or more different types of thyroid carcinoma in the gland in addition to PTC [8]); and 5. prior non-curative surgery. A total of 2138 patients were analyzed in the study. The collected data included ETE status, age, sex, tumor size, multifocality characteristics, central lymph node metastasis (CLNM), and lateral lymph node metastasis (LLNM). All of the above pathological data were independently reviewed by two experienced pathologists. The study flowchart is shown in Fig. 1.
Definition
Micro-ETE is defined as tumor that breaks through the capsule and extends into the perithyroid soft tissue or sternothyroid muscle, and is evident only under the pathological microscope. Macro-ETE is defined as tumor invasion of the subcutaneous soft tissue, larynx, trachea, esophagus, recurrent laryngeal nerve, or prevertebral fascia that is visible to the naked eye during the operation.
Surgical strategy
All patients enrolled in the study underwent radical surgery. Thyroidectomy and cervical lymph node dissection were performed simultaneously. For unilateral lesions, unilateral lobectomy and isthmus resection as well as ipsilateral central lymph node dissection were performed. For bilateral lesions and lesions with macro-ETE, total thyroidectomy and bilateral central lymph node dissection were carried out. Lateral lymph node dissection was carried out in patients with cervical lymph node metastasis diagnosed via fine-needle aspiration biopsy or preoperative imaging and confirmed by intraoperative frozen sections.
Follow-up strategy
All patients were managed postoperatively according to the ATA guidelines. Patients underwent clinical evaluation, thyroid function tests, serum thyroglobulin level measurement, and ultrasound examinations every 3 months during the first year after surgery and every 6 months to 1 year thereafter. In the present study, the end point was recurrence-free survival (RFS), which was the time from the first surgery to the latest follow-up or the first recurrence. If a patient was lost to follow-up, the follow-up time was censored. Recurrence was defined as structural recurrence confirmed by pathology, including local recurrence and distant metastases.
Statistical analysis
R software (R Core Team, Version 4.1.2, Vienna, Austria) was used for all data analysis in the present study. PSM analysis was performed using the "MatchIt" package. The nearest neighbor algorithm was used as the matching method, with the ratio set to 1:1 and the caliper value set to 0.02. Categorical variables were described by frequencies and percentages and compared by the chi-square test or Fisher's exact test. Continuous variables were expressed as mean ± standard deviation (mean ± SD) and compared using the t-test. Univariate and multivariate Cox regression analyses were utilized to evaluate the relationship between clinicopathological features and RFS. The cumulative recurrence curves were generated by the Kaplan-Meier (K-M) method and analyzed using the log-rank test. Bilateral P < 0.05 served as the significance threshold.
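The matching here was done with the R MatchIt package. For readers who want to see the mechanics, the sketch below is a simplified Python analogue of greedy 1:1 nearest-neighbour propensity-score matching without replacement; note that the caliper is applied directly on the propensity-score scale, whereas MatchIt's caliper is by default expressed in standard deviations of the distance measure. The column names and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def psm_match(df, treat_col, covariates, caliper=0.02, seed=0):
    """Greedy 1:1 nearest-neighbour matching on a logistic propensity score."""
    X = pd.get_dummies(df[covariates], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df[treat_col]).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)

    treated = df[df[treat_col] == 1].sample(frac=1, random_state=seed)   # random processing order
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        dist = (controls["ps"] - row["ps"]).abs()
        if dist.min() <= caliper:
            j = dist.idxmin()
            pairs.append((idx, j))
            controls = controls.drop(j)                                  # matching without replacement
    return pairs

rng = np.random.default_rng(4)
df = pd.DataFrame({"micro_ete": rng.integers(0, 2, 300),
                   "age": rng.normal(45, 12, 300),
                   "size_mm": rng.normal(10, 5, 300),
                   "clnm": rng.integers(0, 2, 300)})
print(len(psm_match(df, "micro_ete", ["age", "size_mm", "clnm"])), "matched pairs")
```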
Comparison of baseline characteristics and RFS before PSM
A total of 176 patients were lost to follow-up during the study, resulting in 1962 patients included in the final evaluation. This cohort included 1665 patients without ETE, 157 patients with micro-ETE, and 140 patients with macro-ETE. The median follow-up time was 63.42 ± 14.80 months. Overall, 32 patients experienced recurrence, including 31 regional recurrence cases and one distant metastasis case. The baseline patient characteristics are shown in Table 1. There were differences in clinicopathological features among the three groups, except for sex. In general, patients with ETE demonstrated higher age, larger tumor size, and higher proportions of multifocality and lymph node metastasis than those without ETE. In addition, compared to patients without ETE, the proportion of patients with ETE receiving RAI treatment was higher. The three groups' recurrence rates were 1.14 %, 2.55 %, and 6.43 %, respectively. There was no significant difference in recurrence rate between the micro-ETE and non-ETE groups, while the recurrence rate of the macro-ETE group increased significantly. The RFS of the three patient groups was also compared (Fig. 2). The RFS of patients with micro-ETE was not statistically different compared to patients without ETE (P = 0.008), while the RFS of patients with macro-ETE was significantly lower (P < 0.001).
Univariate and multivariate Cox regression analyses of recurrence risk factors
Univariate Cox regression analysis showed that tumor size, multifocality, CLNM, LLNM, and macro-ETE were identified as significant risk factors for recurrence, while age, sex, and micro-ETE were not associated with RFS (Table 2). Factors with univariate p < 0.05 were included in the multivariate Cox regression analysis. The results showed that multifocality, CLNM, and macro-ETE were significantly correlated with RFS, but size and LLNM were not risk factors for recurrence.
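The Cox models were fitted in R; as a hedged illustration of the same kind of analysis, the snippet below fits a multivariable Cox proportional hazards model for RFS with the Python lifelines package on synthetic data. Column names, event rates, and follow-up times are invented for the example, and the covariates carry no real effect.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "months": rng.exponential(60, n).clip(1, 84),            # follow-up time in months
    "recurred": (rng.random(n) < 0.15).astype(int),          # structural recurrence indicator
    "multifocality": rng.integers(0, 2, n),
    "clnm": rng.integers(0, 2, n),
    "macro_ete": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
print(cph.summary[["coef", "exp(coef)", "p"]])               # log-hazards, hazard ratios, p-values
```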
Comparison of baseline characteristics and RFS after PSM
To reduce the influence of confounding factors on the results, PSM analysis was performed on the patients based on the following factors: age, sex, tumor size, multifocality, CLNM, and LLNM. For the non-ETE and micro-ETE groups, a total of 312 patients were included after 1:1 PSM. The baseline data showed that all clinicopathological features were balanced (Table 3). The cumulative hazard curves demonstrated that there was no significant difference in prognosis between the two groups, consistent with the result before PSM (Fig. 3). The macro-ETE and non-ETE groups were also compared. After PSM, the baseline characteristics of the 258 patients were completely balanced (Table 4). According to the K-M curve analysis, the difference between the two groups was statistically significant (P = 0.016), and the RFS for macro-ETE was significantly lower than that for non-ETE (Fig. 4). The results of the univariate and multivariate Cox regression analyses after PSM are shown in Table 5.
Discussion
It is necessary to improve the stratified diagnosis and treatment of ETE and to distinguish between its degrees in clinical practice to achieve accurate treatment [9]. The present results demonstrated that micro-ETE of PTC may not be a prognostic factor for thyroid cancer patients at our center. The PSM method was used to evaluate the difference between micro-ETE and macro-ETE in PTC, with the purpose of reducing selection bias and eliminating outliers [10]. The results demonstrated that there is no significant difference between the prognosis for micro-ETE and non-ETE. This suggests that reducing overdiagnosis and overtreatment of micro-ETE might be warranted, in order to appropriately limit the extent of lymph node dissection and avoid unnecessary RAI therapy. Furthermore, the present study also showed that the RFS for macro-ETE was significantly lower than that for non-ETE. This indicates that macro-ETE is more aggressive, suggesting that a comprehensive evaluation of macro-ETE should be performed before surgery and that radical surgery and RAI therapy should be carried out. The significance of micro-ETE, also known as minimal ETE in some literature, in PTC remains controversial. In the eighth edition of the AJCC staging system, only grossly evident (macroscopic) ETE involving strap muscles (not microscopic ETE involving perithyroidal soft tissue) affects tumor staging [11]. This system considers that micro-ETE detected only on histological examination has no effect on mortality and proposes that only macro-ETE is clinically relevant and affects the tumor stage. In the 2015 ATA initial risk stratification, the presence of micro-ETE advanced low-risk patients to moderate risk, and the recurrence risk associated with micro-ETE ranged from 3% to 9% [4]. Therefore, a more aggressive initial treatment was strongly recommended, even if micro-ETE had no other adverse features. However, the stage and risk, as well as other factors such as recurrence and complications, should be taken into account when choosing the surgical procedure in clinical practice. Therefore, understanding the influence of different degrees of ETE on survival and prognosis can help clinicians select an individualized operative approach and avoid overtreatment.
Some previous retrospective studies have suggested that micro-ETE may lead to poor cancer-specific and overall survival outcomes. A retrospective study of 77 patients with micro-ETE by Seifert et al. [12] found that micro-ETE is a statistically significant and independent risk factor for relapse through LNMs and distant metastases. Some reports [13] suggest that all levels of extrathyroidal extension, including microscopic, are associated with an increased risk of lymph node and distant metastases, as well as decreased overall survival. Other scholars hold the opposite view, stating that not all levels of ETE have a poor prognosis. Marques et al. [14] found there was no significant association between micro-ETE and recurrence rate, persistence of disease, or disease-specific mortality. The results of Li et al. [11] also showed that there was no difference in tumor size, multifocality, lymph node metastasis, and recurrence between micro-ETE and non-ETE patients. However, once the tumor invaded beyond the strap muscles, patients' overall survival decreased. In addition, Patti et al. [15] also found that distinguishing micro-ETE and macro-ETE provides a better predictive probability of recurrence. The present study findings are similar, indicating that micro-ETE is not an appropriate indication for aggressive surgery or RAI treatment for PTC.
The use of radioactive iodine therapy after total thyroidectomy has been practiced for a long time. In the 2009 ATA guidelines, minimal ETE or vascular invasion was considered an 'intermediate' risk feature with a recommendation for RAI therapy. In contrast, the 2015 ATA guidelines altered the RAI recommendations [16,17]. Damage to the salivary glands, impaired gonadal function, and secondary neoplasm are common adverse effects in patients undergoing high-dose RAI. In the present study, patients with microscopic ETE were not candidates for high-dose RAI and had good prognoses. Because micro-ETE is not a risk factor for PTC recurrence, tumors with micro-ETE are biologically less aggressive and there is no need to intensify the treatment. In particular, patients with macro-ETE in thyroidectomy samples may benefit from initial RAI. Until further studies clarify the benefits of RAI in patients with micro-ETE, clinicians must carefully review the pathological reports after thyroidectomy and consider the choice of adjuvant RAI in the presence of micro-ETE.
There were some limitations in the present study. First, this was a retrospective study. Even though PSM was performed to reduce selection bias, it did not completely eliminate its impact on the results. Second, given the low incidence of micro-ETE and the small study sample size, a multicenter analysis should be performed to confirm the present preliminary findings. Another limitation came from the difference in the definition of micro-ETE at different centers, which may lead to the lack of universality and objectivity in the histological evaluation of micro-ETE.
Fig. 2. Comparison of recurrence-free survival in PTC patients with different ETE classifications before propensity score matching. ETE, extrathyroidal extension.
Fig. 4. Comparison of recurrence-free survival in patients with macro-ETE and no ETE after propensity score matching. ETE, extrathyroidal extension.
Table 1 | Comparison of baseline patient characteristics before propensity score matching.
Table 2 | Univariate and multivariate Cox analyses of risk factors for recurrence before propensity score matching.
Table 3 | Comparison of baseline patient characteristics between non-ETE and micro-ETE groups after propensity score matching.
Fig. 3. Comparison of recurrence-free survival in patients with micro-ETE and no ETE after propensity score matching. ETE, extrathyroidal extension.
Table 4 | Comparison of baseline patient characteristics between non-ETE and macro-ETE groups after propensity score matching.
Table 5 | Univariate and multivariate Cox analyses of risk factors for recurrence after propensity score matching. | 2024-01-26T16:25:47.114Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "32556bd8548fd313097a06f61d552c6013052a7c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2024.e25280",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b20e0127f017ec99a9e0533fe2af785f209bfe2d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
239520341 | pes2o/s2orc | v3-fos-license | Spatial Rigid-Flexible-Liquid Coupling Dynamics of Towed System Analyzed by a Hamiltonian Finite Element Method
An effective Hamiltonian finite element method is presented in this paper to investigate the three-dimensional dynamic responses of a towed cable-payload system with large deformation. The dynamics of a flexible towed system moving in a medium is a classical and complex rigid-flexible-liquid coupling problem. The dynamic governing equation is derived from the Hamiltonian system and built in canonical form. A Symplectic algorithm is built to analyze the canonical equations numerically. Logarithmic strain is applied to estimate the large deformation effect, and the system stiffness matrix is updated at each calculation time step. A direct integral solution of the medium drag effect is derived in which the traditional coordinate transformation is avoided. Simulations of a conical pendulum system and a 180◦ U-turn towed cable system are conducted, and the results are compared with those retraced from the existing Hamiltonian method based on small deformation theory and the dynamic software of Livermore software technology corp. (LS-DYNA). Furthermore, a circularly towed system is analyzed and compared with experimental data. The comparisons show that the presented method is more accurate than the existing Hamiltonian method when large deformation occurs in the towed cable, owing to the application of logarithmic strain. Furthermore, it is more efficient than LS-DYNA in treating rigid-flexible-liquid coupling problems in terms of CPU time.
Introduction
There are many applications in which payloads are towed by three-dimensional airborne platforms: towed remotely operated vehicles (ROVs) [1], flexible aerial refueling hose system [2], aircraft-towed cables [3], and tethered satellite systems [4]. The missions of three-dimensional towed systems have expanded from mapping the benthic and spatial morphology to various objectives, such as mine hunting, collecting biological samples, fuel make-up, and even exploring other celestial bodies [5]. The dynamics of cable-payload systems have been of major interest for several decades due to their nonlinear features caused by the flexible cable and airflow drag, the prediction of the nonlinear dynamic responses of cable-payload systems has become to be an active field of investigations. The towed systems moved in the fluid medium will be subject to fluid resistance such as airflow drag force and water resistance, coupling with the rigid payload and flexible cables, it will become to be classical and complex rigid-flexible-liquid coupling problems. An analytical solution is difficult to be acquired, and commercial software such as LS-DYNA will cost plenty of computation time to tackle the rigid-flexible-liquid coupling problems [6]. Researchers proposed many numerical methods to analyze the towing process dynamics, mainly including lumped mass method, finite difference method, and finite element method. Asuma and Masayoshi [7] introduced a lumped mass approximation for the towed underwater vehicles, but the environmental water currents are ignored throughout the paper for simplicity. Du et al. [8] discretized the cable into a series of massless springs and lumped-mass points for analyzing the influence of underwater vehicle flow field on the dynamic behavior of the towed sonar cable array, and the drag force on the towed sonar cable array is obtained by calculating the drag force on each lumped-mass point. The lumped mass method is simple and convenient to implement. but it is not effective in handling flexible cables with large deformation and is prone to diverge due to the simplification. Sun et al. [9] applied the finite difference method to study the static and dynamic tug-cable-barge coupling motion problems. Zhao et al. [5] presented a finite difference method taking the axial elasticity, bending stiffness, and torsional stiffness into consideration to study the transient dynamics of straight towing and the U-turn towing process, the numerical simulations are validated by sea trial experimental data. However, the finite difference method cannot be implemented for complex geometries or variable properties of the towed system. Luis et al. [10] proposed a finite element method based on catenary line theory to study the towing dynamics for floating and submerged bodies, and the influence of internal damping was considered. Abhinav et al. [11] modeled the tow cable as a geometrically exact beam and applied a nonlinear finite element method to analyze the three-dimensional transient dynamics of towed underslung systems. Htun et al. [12] proposed a new tether cable element based on the radially multilayered modeling approach and applied the absolute nodal coordinate formulation to study the dynamics of a remotely operated underwater vehicle. Zhu et al. [13,14] proposed a new nodal position finite element to model the towed system dynamics. 
By assembling elements together, the complex geometries and/or the different properties of cable will be easily modeled, therefore, the finite element method is probably the most appealing technique in existing numerical methods to analyze the dynamic problem of the towed cable-payload system. However, the moving process of the towed cable-payload system is prone to be a long-term work in practical engineering cases, and the numerical simulation is required to be stable and accurate for long-term calculation. Traditional time integration schemes such as Runge-Kutta, Newmark, and Generalized-α methods are effective, but their accuracies are sensitive to the size of time integration step and are prone to the accumulated error due to the co-existence of rigid-flexible-liquid coupling motion [15]. The conservation of system energy cannot be guaranteed, which may even lead to the divergence of the numerical solution over a long simulation period. To tackle this problem, Ding et al. [16,17] proposed a new nodal position method based on Hamiltonian theory to analyze the nonlinear dynamic responses of spatial flexible tether systems which had low numerical error accumulation. The Hamiltonian formalism solved by the Symplectic algorithm [18,19] could preserve the linear and the quadratic conservative quantities of the differential equations. It has been applied to analyze the vibration problems of beam and plate [20] and the tethered satellite system dynamics [21].
Furthermore, the towing cable is prone to nonlinear large strain due to its high flexibility, therefore, the large deformation influence [22] is a critical point to study the three-dimensional rigid-flexible liquid coupling dynamics of towed cable-payload system. Chucheepsakul et al. [23] developed finite element formulations for large strain analysis of extensible flexible marine pipes which transport fluid. Xu [24] proposed a flexible segment model to evaluate the dynamics of slender underwater cables undergoing high tension and large deformations. However, they are limited to the linear elastic cases and the large deformation is exactly related to the large-scale movement. Actually, the real strain to flexible cable is accumulated since the initial time. Assuming that the cable material is homogeneous and isotropic, the accumulated large strain can be described by the logarithmic strain. Logarithmic strain theory with Kirchhoff stress has been applied to the dynamic analysis of spatial truss systems with large deformation occurred [25].
In addition, the medium resistance is usually analyzed in local coordinate systems firstly and subsequently transformed to be the expression in a global coordinate system [26].
The complex discussions about whether the normal or tangential component of resistance is along with the global coordinate axis are required in the transformation. On the other hand, the medium resistance rigid-flexible liquid coupling problem is time-varying in the movement process [27], it is mainly due to that the medium resistance is related to the cable posture and the relative velocity between the moving structure and the medium. Plenty of medium elements and the enormous cost of computation will be required in existing commercial software. A more effective solution method is required to numerically analyze the rigid-flexible-liquid coupling dynamics of towed cable-payload systems.
This paper addresses the three-dimensional rigid-flexible-liquid coupling dynamic problems of towed cable-payload systems. A numerical model based on Hamiltonian theory is proposed and solved by the finite element method, in which the large strain influence, the damping effect, and the aerodynamic effect are taken into consideration. Following the introduction, the Hamiltonian nodal position finite element formalism is derived, a new solution of the aerodynamic resistance is derived and expressed in the form of the Morison equation, and the Symplectic solution scheme is formulated in Sections 2 and 3. Subsequently, the validation of the proposed model using a conical pendulum system, a 180◦ U-turn towed system, and an experimental work published in the literature is presented in Section 4. Finally, Section 5 discusses and concludes the present study.
Dynamic Modeling of Cable Element
Most of the existing finite element methods for analyzing the dynamic responses of towed cable-payload systems are derived based on small deformation theory. Besides, the solution of the medium drag force is overly complex because of the discussion required about the directions of the normal and tangential components. In this section, a discrete formalism is used for dynamic cable modeling. The differential dynamic governing equations are derived based on finite elasticity theory and Hamiltonian theory, in which the logarithmic strain is applied to describe the large strain of the towed cable. The application of logarithmic strain effectively depicts the actual accumulated strain of the flexible cable. Then, the Morison equation is used to represent the aerodynamic influence, and Rayleigh damping is applied to take the damping influence into consideration. Finally, the Symplectic solution scheme is derived to solve the obtained finite element equations.
Three-Dimensional Tait-Bryan Transformation and Shape Function
The flexible cable undergoes large-scale motion and large deformation in the motion process of the towed cable-payload system. The flexible cable is in a taut state when large deformation exists, and because of this flexibility the bending and torsion effects are not the major factors in the dynamic response of the flexible cable; furthermore, the bending effects can also be captured by selecting a sufficient number of suitable straight cable elements. Therefore, the bending and torsion effects are neglected in the existing nodal position finite element method. According to the nodal position finite element theory [6], we discretize the towed cable into n segments.
Consider a two-node straight cable element in a three-dimensional space, the elemental position is described by its nodal coordinates (X i, Y i, Z i, X j, Y j, Z j ) in a global Cartesian coordinate system O-XYZ with the unit base vectors (i, j, k). A local right-hand coordinate system o-xyz to each cable element is defined with ox-axis along with the cable, oy-axis and oz-axis perpendicular to the ox-axis, respectively, and the unit base vectors of o-xyz are (e x , e y , e z ). As shown in Figure 1, ON is set to be the intersection line of surfaces O-XY and o-yz, the Tait-Bryan transformations from the global coordinate system O-XYZ to the local coordinate o-xyz system are θ z -yaw, θ y -pitch, and θ x -roll, respectively.
(1) Rotating OZ counterclockwise by an angle θ z (0 ≤ θ z < π) to overlap OY with ON, the new OY is perpendicular to the new OX, OZ and the local ox. (2) Rotating the new OY clockwise by an angle θ y (0 ≤ θ y < π) to overlap OX with the local ox.
The Tait-Bryan transformation matrix T can be derived from the three sequential coordinate transformations.
The three rotational angles can be calculated at any instant in the simulation process, in which the endpoints of the straight cable element are frequently updated. Correspondingly, the nodal position vector r, velocity vector v, and acceleration vector a of an arbitrary point in a local coordinate system can be expressed in the global system, respectively: r = [r_x r_y r_z][e_x e_y e_z]^T = [r_x r_y r_z] T [i j k]^T. Assume that the local position vector r(x, t) of an arbitrary point x(x, y, z) along the straight cable element is expressed by the linear shape functions and nodal coordinates x_e(t) = [x_i y_i z_i x_j y_j z_j]^T, such that r(x, t) = N(s) x_e(t), where the linear shape function N(s) is defined as N(s) = [(1 − ξ)I(3×3), ξI(3×3)], where ξ = s/L_e (0 ≤ ξ ≤ 1) is the length ratio of the deformed cable element, s = [(x − x_i)^2 + (y − y_i)^2 + (z − z_i)^2]^1/2 is the length along the deformed cable element measured from node i, and L_e is the instantaneous length of the cable element.
According to the interpolation of the position vector, the given linear shape functions would be appropriate to the velocity vector v(x, t) = [v x v y v z ] T and acceleration vector a(x, t) = [a x a y a z ] T .
v(x, t) = dr(x, t)/dt = N(x)v_e(t) and a(x, t) = dv(x, t)/dt = N(x)a_e(t) (5), where v_e(t) = [v_xi v_yi v_zi v_xj v_yj v_zj]^T and a_e(t) = [a_xi a_yi a_zi a_xj a_yj a_zj]^T are the nodal velocity and acceleration components of the deformed cable element in a local coordinate system.
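As a concrete illustration of the element kinematics described above, the Python sketch below constructs a yaw-pitch-roll rotation matrix and uses the linear shape functions to interpolate position and velocity along a two-node element; the rotation sign conventions and the numerical values are assumptions and do not reproduce the paper's exact transformation matrix T.

```python
# Sketch of Tait-Bryan (Z-Y-X) rotation and linear interpolation along a
# two-node cable element. Conventions are assumed, not taken from the paper.
import numpy as np

def tait_bryan(theta_z, theta_y, theta_x):
    """Rotation matrix for a yaw (Z), pitch (Y), roll (X) sequence."""
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def shape_matrix(xi):
    """Linear shape functions N(xi) = [(1 - xi) I, xi I] for a two-node element."""
    I = np.eye(3)
    return np.hstack(((1.0 - xi) * I, xi * I))

# Nodal positions (node i, node j) and velocities stacked as 6-vectors.
x_e = np.array([0.0, 0.0, 0.0, 1.0, 0.5, -0.2])
v_e = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.05])

xi = 0.4                                 # dimensionless coordinate s / L_e
r = shape_matrix(xi) @ x_e               # interpolated position
v = shape_matrix(xi) @ v_e               # interpolated velocity
R = tait_bryan(0.3, 0.1, 0.0)            # example rotation, angles in radians
print(r, v)
print(R @ r)                             # the same point expressed after rotation
```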
Elemental Virtual Work and Elemental Matrix
In the movement process of a towed cable-payload system, the cable is in the tensional state with large deformation, especially for the flexible ones. The actual strain of cable element e is accumulated over the movement process, which can be expressed using the logarithmic strain if the cable material is homogeneous and isotropic [28]: where L 0 and L e are the initial length and current length at the moment t of the cable element e, respectively. Compared with the engineering strain, the logarithmic strain has a wider range of application which is suitable for both small and large deformation circumstances of flexible structures [28]. The axial Kirchhoff stress will be correspondingly calculated using the linear constitutive relationship, such that: and E is the Young's modulus of the cable material. Correspondingly, the virtual elastic potential energy δU e of the cable element e would be obtained by integrating the logarithm strain and the Kirchhoff stress over the element and expressed in a local coordinate system, such as: where A 0 is the initial elemental cross-section and 0 V e is the initial elemental volume, both A 0 and 0 V e keep constant in the analyzing process, 0 V denotes that the integral is built based on initial volume. Equation (8) is a definite integral to the elemental volume 0 V e , the upper left subscripts 0 in Equation (8) denote that the integral is calculated with respect to the initial state of the cable element, and the upper left subscripts t denotes the real-time value at time t. I (3 × 3) denotes the 3 × 3 unit matrix. k e is the elemental stiffness matrix and analyzed by δU e . L e is the function of the nodal position components in a local coordinate system and its value is variable in the movement process, especially for flexible cable elements. Therefore, the value of k e is a variable and nonlinear matrix, it should be updated with each simulation time step. Similarly, the elemental virtual kinetic energy δT e can be obtained and expressed in terms of nodal velocity vectors in a local coordinate system, such as: where ρ is the density of cable material, m e is the elemental mass matrix and its value keeps constant in the simulation process, the upper right subscript T denotes the transposed matrix form. The virtual elastic potential energy and virtual kinetic energy of cable element can also be expressed in a global coordinate system and their values remain unchanged. By the application of the elemental transformation T e , the elemental stiffness matrix and mass matrix can be expressed in a global coordinate system: where K e and M e are the elemental stiffness and mass matrix in a global system, the elemental transformation matrix from global system to local system T e depicts the transformation of the elemental position and acceleration components, which can be expressed as: The towed cable-payload system is subjected to gravity and medium resistance in the movement process. Due to the gravity vector remains to be constant, the elemental gravity matrix would be analyzed in a global coordinate system directly. A new solution scheme is presented in this paper to analyze the medium resistance, Morison's equation [29] is applied to modeling the medium resistance and analyzed in a global coordinate system directly. In the solution scheme of medium resistance, the complex transformation process from local coordinate system to global coordinate system is avoided.
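The per-element update described above (logarithmic strain, linear constitutive relation, stiffness that varies with the current length) can be sketched as follows in Python: the strain and the axial stress are recomputed from the current nodal positions at every step, and the resulting axial force is resolved along the current element axis. The force assembly shown here is a simplified assumption and not the paper's full elemental stiffness matrix.

```python
# Sketch: per-element logarithmic strain, axial stress from the linear constitutive
# relation, and the internal force along the current element axis. A simplified
# assumption for illustration, not the paper's exact elemental matrices.
import numpy as np

def element_internal_force(xi_node, xj_node, L0, E, A0):
    """Return the 6-component internal force vector [f_i, f_j] in global axes."""
    d = xj_node - xi_node
    Le = np.linalg.norm(d)                # current element length
    eps_log = np.log(Le / L0)             # logarithmic (true) strain
    sigma = E * eps_log                   # linear constitutive relation
    axial_force = sigma * A0              # tension positive
    t = d / Le                            # unit vector along the element
    f_j = -axial_force * t                # force pulling node j toward node i
    f_i = -f_j
    return np.concatenate((f_i, f_j)), eps_log

# Example (hypothetical values): a 1 m element stretched to 1.2 m.
E, A0, L0 = 1.0e7, 7.85e-5, 1.0
f, eps = element_internal_force(np.zeros(3), np.array([1.2, 0.0, 0.0]), L0, E, A0)
print(f"log strain = {eps:.4f}, nodal forces = {f}")
```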
The linear interpolation function N G to global OZ-axial direction would be applied to interpolate the elemental gravity components. The virtual potential energy δU Ge can be correspondingly expressed as: in which g = [0 0 -g] T is the gravitational acceleration vector and N G is assumed to be: and the elemental gravity matrix F Ge would be: To tackle the previously mentioned difficulty of complex transformation and enormous computational cost when the medium resistance is analyzed, a direct integral method is proposed in this paper. The medium resistance will be solved by Morison's equation [30,31] when the diameter size D is much smaller than the cable length, (17) in which f d is the medium resistance per unit cable length, f dt and f dn are the tangent and normal components, V r = V-V f is the relative velocity of cable to fluid, V f is the fluid velocity, V rt and V rn are the tangent and normal components. C d is the resistance coefficient which is related to the attack angle α, C dt (α) and C dn (α) are the tangent and normal components, ρ f is the fluid density.
Similar to the cable velocity V = NV e , the fluid velocity of arbitrary location V f is analyzed by applying the same linear shape function N to elemental fluid velocity vector V fe , such as V f = NV fe . Then, the virtual work conducted by drag force would be obtained without the discussion of tangent and normal components, such as: in which F de is the elemental drag force vector, Fourth-order closed Newton-Cotes formulate [32] is applied to analyze the integrant item in which V re is the elemental relative velocity vector, the expansion form of f (s) would be: where s is the real-time length of cable element. Correspondingly, the elemental drag force vector F de would be expressed as: (22) in which the elemental drag force vector F de is related to the real-time length of the cable element, the calculating time step should be small enough to ensure the tiny change of elemental length for each step. Generally, the structural damping of cable especially for large deformed ones would be considered in the spatial movement process. Rayleigh damping is applied to depict the structural damping effect which provides resistance to the structure with time-varying acceleration, such that: where C e is the elemental Raleigh damping matrix, α and β are the Rayleigh coefficients, ω i and ω j are the cut-off frequencies of lower and upper bounds in the frequency domain, and ξ i and ξ j are the corresponding damping ratios, respectively.
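To illustrate the direct integration of the Morison-type drag described above, the following Python sketch evaluates the elemental drag force with the fourth-order closed Newton-Cotes (Boole) rule on five points along the element and lumps it onto the two nodes through the linear shape functions; the tangential and normal drag coefficients and the velocity decomposition used here are simplified assumptions.

```python
# Sketch: elemental drag force from Morison-type quadratic drag, integrated along
# the element with the fourth-order closed Newton-Cotes (Boole) rule on 5 points.
# Coefficients and the tangential/normal split are illustrative assumptions.
import numpy as np

BOOLE_W = np.array([7.0, 32.0, 12.0, 32.0, 7.0]) / 90.0   # closed Newton-Cotes, n = 4

def drag_per_unit_length(v_rel, t_hat, rho_f, D, cdt, cdn):
    """Quadratic drag per unit length, split along/normal to the cable tangent."""
    v_t = np.dot(v_rel, t_hat) * t_hat
    v_n = v_rel - v_t
    f_t = -0.5 * rho_f * D * cdt * np.linalg.norm(v_t) * v_t
    f_n = -0.5 * rho_f * D * cdn * np.linalg.norm(v_n) * v_n
    return f_t + f_n

def element_drag_force(xi_node, xj_node, v_i, v_j, v_fluid, rho_f, D, cdt=0.02, cdn=1.2):
    """Integrate drag over the element and lump it onto the two nodes."""
    d = xj_node - xi_node
    Le = np.linalg.norm(d)
    t_hat = d / Le
    F_i = np.zeros(3)
    F_j = np.zeros(3)
    for k, w in enumerate(BOOLE_W):
        xi = k / 4.0                                   # 5 evenly spaced points
        v_cable = (1.0 - xi) * v_i + xi * v_j          # linear velocity interpolation
        f = drag_per_unit_length(v_cable - v_fluid, t_hat, rho_f, D, cdt, cdn)
        F_i += w * Le * (1.0 - xi) * f                 # shape-function weighting
        F_j += w * Le * xi * f
    return F_i, F_j

# Example: element moving at 1 m/s through still air (hypothetical values).
Fi, Fj = element_drag_force(np.zeros(3), np.array([1.0, 0, 0]),
                            np.array([0, 1.0, 0]), np.array([0, 1.0, 0]),
                            np.zeros(3), rho_f=1.225, D=0.0014)
print(Fi, Fj)
```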
Elemental Dynamic Equation in Canonical Form
Based on Lagrangian theory and the elemental mass matrix, stiffness matrix, Raleigh damping matrix, gravity vector and external force vector, the dynamic equation of cable element in the Lagrangian form will be derived as: Equation (24) is widely used in the traditional finite element method to model the cable element dynamics, however, the accuracy of the solution procedure would not be ensured in long-term dynamic simulation cases due to the structural discretization [33]. In this paper, the Hamiltonian equation which conserves the system energy and momentum exactly is used to eliminate the dispersion, Symplectic integration algorithm will be applied to analyze the Hamiltonian equation of the towed system. By the definition of generalized momentum P e = M e V e = [P Xi P Yi P Zi P Xj P Yj P Zj ] T , the elemental Hamiltonian function can be expressed as: in which the upper right subscript -1 denotes the inverse matrix form, P e and X e are the canonical variables and solved in canonical form, such as: .
Symplectic Integration Algorithm for Towed Cable-Payload System
Based on the second-order high accuracy method developed by Ding et al. [6], a new Symplectic integration algorithm is derived to solve the Hamiltonian canonical Equation (26), in which both the linear and the quadratic conservative quantities will be preserved. The effect of large deformation, medium resistance, and structural damping is considered in the movement procedure of a towed cable-payload system.
Assembly of System Dynamic Equations
Based on the continuity theory, the nodal position, velocity, and acceleration of the same node by two adjacent elements always keep consistent, the system dynamic equations will be assembled by all the elemental canonical equations. In this paper, the payload is treated as a lumped mass attached to the tail point of the towed cable, the movement of a towed point is previously identified. Finally, the system dynamic equations will be obtained as: where M is the global mass matrix in which the mass of towed payload is attached to the last elemental node, M −1 is the inverse form of M, F G is the global gravity vector which takes the gravity of towed payload into consideration, K, P, F d are the global stiffness matrix, momenta matrix, and medium resistance force vector, respectively, the dot above M and P denotes the derivative with respect to time.
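The assembly of the global matrices from the elemental ones, with the payload lumped onto the last node, can be sketched in Python as follows; the element data are placeholders and the routine is a generic finite element assembly, not the paper's code.

```python
# Sketch of assembling global matrices from two-node cable elements: each 6x6
# elemental matrix is added into the rows and columns of the element's two nodes,
# and the payload mass is lumped onto the last node. All values are placeholders.
import numpy as np

def assemble(n_nodes, elements, elem_matrices, payload_mass=None):
    """elements: list of (i, j) node pairs; elem_matrices: list of 6x6 arrays."""
    G = np.zeros((3 * n_nodes, 3 * n_nodes))
    for (i, j), ge in zip(elements, elem_matrices):
        dofs = np.r_[3 * i:3 * i + 3, 3 * j:3 * j + 3]   # 6 global DOF indices
        G[np.ix_(dofs, dofs)] += ge
    if payload_mass is not None:                          # lump payload on last node
        for d in range(3):
            G[-3 + d, -3 + d] += payload_mass
    return G

# Example: a 3-node (2-element) cable with identical elemental matrices.
n_nodes = 3
elements = [(0, 1), (1, 2)]
m_e = np.kron(np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0, np.eye(3))  # consistent-mass shape
M = assemble(n_nodes, elements, [m_e, m_e], payload_mass=0.4)
print(M.shape)
```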
Symplectic Difference Algorithms
According to Symplectic algorithm theory, the Hamiltonian canonical variables can be analyzed by a step-by-step iterative approach to preserve the linear and quadratic conservative quantities. To analyze differential Equation (27) precisely, the second-order Symplectic difference scheme for the k+1 step can be expressed as Equation (28), where h is the time step, H_X(P k+1, X k) and H_P(P k+1, X k) are the partial derivatives of the Hamiltonian with respect to the canonical variables X and P, and the superscripts k+1 and k denote the time step under calculation. In each time step, repeated iterations are required in the solution procedure to obtain a converged value of P k+1.
Based on the second-order Symplectic difference theory (28) and the system Hamiltonian function, the dynamic equations of the towed cable-payload system (27) are analyzed at each time step, such that: Omitting the high-order terms in Equation (29), the second-order analytical formulation of the towed system for the k+1 step would be: Because the system stiffness matrix is a time-varying function with respect to the nodal position, a derivative matrix K' will be derived as: in which I and 0 are the 3 × 3 unit matrix and null matrix, respectively.
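A minimal Python sketch of the update pattern in Equation (28), with a fixed-point iteration for the new momentum followed by an explicit position update, is given below; it is demonstrated on a single-degree-of-freedom mass-spring system as a stand-in for the assembled cable equations, and the tolerance and iteration limit are assumptions.

```python
# Sketch of the symplectic update of Eq. (28): the new momentum P_{k+1} is found
# by fixed-point iteration, then the position is advanced explicitly. Shown for a
# mass-spring system; tolerances and iteration limits are illustrative assumptions.
import numpy as np

def symplectic_step(P, X, h, dH_dX, dH_dP, tol=1e-12, max_iter=50):
    P_new = P.copy()
    for _ in range(max_iter):                       # fixed-point iteration on P_{k+1}
        P_next = P - h * dH_dX(P_new, X)
        if np.linalg.norm(P_next - P_new) < tol:
            P_new = P_next
            break
        P_new = P_next
    X_new = X + h * dH_dP(P_new, X)
    return P_new, X_new

# Mass-spring test: H = P^2 / (2 m) + k X^2 / 2.
m, k = 1.0, 4.0
dH_dX = lambda P, X: k * X
dH_dP = lambda P, X: P / m

h, n_steps = 1e-3, 20000
P, X = np.array([0.0]), np.array([1.0])
energy0 = 0.5 * P[0] ** 2 / m + 0.5 * k * X[0] ** 2
for _ in range(n_steps):
    P, X = symplectic_step(P, X, h, dH_dX, dH_dP)
energy = 0.5 * P[0] ** 2 / m + 0.5 * k * X[0] ** 2
print(f"relative energy drift = {abs(energy - energy0) / energy0:.2e}")
```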
Initialization and Solution Procedure
As previously mentioned, the system dynamic Equation (27) consists of first-order differential equations with n position variates and n momentum variates. Initialization of these 2n variates is required for the solution of the system dynamic equations, and their values are updated at each time step. In the solution procedure, the momentum matrix is constructed from the constant mass matrix and the velocity vector of the towed system. Figure 2 presents a flow chart of the major steps in the proposed numerical solving procedure. The Symplectic solution code of the towed cable-payload system is compiled in the VC++ 6.0 environment, a product of Microsoft Corporation, where t_end is the pre-set calculating termination time and ∆t = h is the time step in the calculation process, which is determined according to the calculation rule of the finite element method, such that: in which l is the characteristic length of the cable element and c is the stress wave speed, which can be calculated from the Young's modulus E and density ρ of the cable material.
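The time-step rule quoted above can be written as a one-line check, sketched below in Python; the safety factor and the material values in the example are assumptions, since the paper does not state them.

```python
# Sketch of the time-step rule above: the step must not exceed the time a
# longitudinal stress wave needs to cross one element, dt <= l / c, c = sqrt(E / rho).
# The safety factor and the example material values are assumptions.
import math

def stable_time_step(element_length, youngs_modulus, density, safety=0.9):
    c = math.sqrt(youngs_modulus / density)   # stress wave speed in the cable
    return safety * element_length / c

# Example: a 0.15 m rubber element (E = 8 MPa, rho = 1300 kg/m^3, both hypothetical).
dt = stable_time_step(0.15, 8.0e6, 1300.0)
print(f"wave speed = {math.sqrt(8.0e6 / 1300.0):.1f} m/s, dt <= {dt:.2e} s")
```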
Numerical and Experimental Validations
For nonlinear dynamic systems, many researchers [34][35][36][37] have presented comparative studies between the conventional non-Symplectic algorithms and the Symplectic algorithms. Symplectic algorithms have the conservation characteristic for the conservative Hamiltonian system and have higher calculation accuracy for either conservative or non-conservative Hamiltonian systems. Ding et al. [6,16] have done the detailed validation about the Hamiltonian nodal position finite element method from three aspects: Symplectic conservation features, the higher accuracy, and wider applicability than the conventional numerical methods. In this section, the presented Hamiltonian nodal position finite element method based on logarithmic strain solved by the second-order Symplectic difference algorithm is named HFML.
Flexible Conical Pendulum Modeling
As shown in Figure 3, the dynamic responses of a flexible polyethylene rubber conical pendulum without damping effect are modeled in this section. The pendulum rod rotates about the vertical OZ-axis with the intersection angle θ = π/3 rad, and the physical parameters of the polyethylene rubber pendulum rod are set as density ρ = 1300 kg/m3. Firstly, a quasi-static tensional process is modeled in which the static tensile strain of the polyethylene rubber pendulum rod will reach 0.2, which is equal to the theoretical value. This attached lumped weight is applied to excite large deformation rotation of the polyethylene rubber conical pendulum. Due to the inertia of the attached lumped weight, the maximum dynamic axial strain will be much larger than the static tensile strain.
Furthermore, the rotation process was modeled by the proposed HFML, the HMSS presented by Ding et al. [6], and the commercial software LS-DYNA SMP R11.0.0, a product of ANSYS Inc. LS-DYNA already has a cable element that can take large deformation into consideration; besides, a comparison of a two-dimensional pendulum system between the theoretical solution and the LS-DYNA result demonstrated that the LS-DYNA solution is accurate and credible. In the HFML and Ding's HMSS simulation cases, the calculating time step is ∆t = 2 × 10-4 s and the terminal calculating time is t_end = 6 s. In LS-DYNA, the cable is modeled by link 160 elements, the lumped mass is treated as an attached particle, and the time step is self-adapting. Figure 4a-c and d-f show the position and velocity responses of the lumped weight simulated by HFML, Ding's HMSS, and LS-DYNA. The position and velocity responses of HFML match the results of LS-DYNA almost exactly, even for the large dynamic strain (much greater than 20%) that occurred in the swing process. Meanwhile, a non-negligible error exists for Ding's HMSS, which is derived based on small deformation theory. In the comparison of HFML and LS-DYNA, the maximum error of the nodal position in the Z-axis is less than 0.1%, and the maximum error of the nodal velocity in the Z-axis is less than 0.5%. The maximum pendulum length extends to 3.4948 m and the dynamic logarithmic strain is 55.81%. The zero potential energy plane is set to Z = −L. As depicted in Figure 5, the system energy-time curve illustrates that the proposed HFML keeps the energy of the conical pendulum system almost exactly constant, as expected; a slight oscillation occurred due to the value discontinuity. However, an oscillation of the system energy exists in Ding's HMSS for the large deformation calculation, and the oscillation tends to grow with the calculation time.
Towed Cable-Payload System in 180° U-Turn Maneuver
In this section, a towed cable-payload system in a 180° U-turn maneuver (Figure 6) is simulated by HFML and LS-DYNA. The towed system consists of a rubber tow cable and a lumped mass. Yang et al. [38] investigated the dynamics of cable-towed systems during a ship 180° U-turn process, in which the towed body was treated as a lumped mass. In the simulations, large deformation and displacements might occur if the mass of the towed weight is large enough, and then the logarithmic strain will be applied. The material properties and sizes of the rubber cable are the same as those in Section 4.1. The towed lumped mass is m = 0.4 kg and a 4% static tensile strain will be generated.
Figure 6. Sketch of U-turn cable-towed system.
As shown in Figure 6, the 180 • U-turn maneuver is set in three phases. The first phase is linear towing. The towing starts from the origin point of the global coordinates, and the linear towing trajectory (dotted portion) lies in the Z = 0 plane and along the positive OY-axis, the phase experiences 4 s and the acceleration is constant (0.25 m/s 2 ). The second phase is circular towing in the Z = 0 plane. The tow node turns left after the first phase and moves constantly along the semicircular towing trajectory with the towing radius R = 2 m. The towing tangential velocity is 1 m/s. The last phase is linear towing again in the Z = 0 plane. The velocity vector is the same as the last one of the second phase. The towing velocity vector is invariable during the third phase. Hence, the three towing phases have the position boundary conditions: and the velocity boundary conditions: where a = 0.25 m/s 2 , t 0 = 4 s, t 1 = 4 + 2π s, and t 2 = 6 + 2π s. (X, Y, Z) and (V X , V Y , V Z ) are the position and velocity components of the tow point, respectively. The angular velocity of the circular towing is ω = 0.5 rad/s, respectively. Figure 7 illustrates the towing trajectory of the tow point in the Z = 0 plane.
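To make the piecewise boundary conditions above concrete, the following Python sketch evaluates the tow-point position and velocity over the three phases; the turning direction and the sign conventions are assumptions chosen only so that the phases join continuously.

```python
# Sketch of the three-phase tow-point boundary conditions described above
# (accelerated straight run, semicircular turn at constant speed, straight run).
# The turning direction and sign conventions are illustrative assumptions.
import numpy as np

A, R, OMEGA = 0.25, 2.0, 0.5          # m/s^2, m, rad/s
T0, T1 = 4.0, 4.0 + 2.0 * np.pi       # phase boundaries in seconds

def tow_point(t):
    """Return position (X, Y, Z) and velocity (VX, VY, VZ) of the tow point."""
    if t <= T0:                                        # phase 1: straight, accelerating
        pos = np.array([0.0, 0.5 * A * t ** 2, 0.0])
        vel = np.array([0.0, A * t, 0.0])
    elif t <= T1:                                      # phase 2: semicircular left turn
        phi = OMEGA * (t - T0)                         # swept angle
        center = np.array([-R, 0.5 * A * T0 ** 2, 0.0])
        pos = center + R * np.array([np.cos(phi), np.sin(phi), 0.0])
        vel = R * OMEGA * np.array([-np.sin(phi), np.cos(phi), 0.0])
    else:                                              # phase 3: straight, constant velocity
        center = np.array([-R, 0.5 * A * T0 ** 2, 0.0])
        end = center + R * np.array([np.cos(np.pi), np.sin(np.pi), 0.0])
        vel = np.array([0.0, -A * T0, 0.0])
        pos = end + vel * (t - T1)
    return pos, vel

for t in (0.0, 4.0, 4.0 + np.pi, 4.0 + 2 * np.pi, 6.0 + 2 * np.pi):
    p, v = tow_point(t)
    print(f"t = {t:6.3f} s, pos = {np.round(p, 3)}, vel = {np.round(v, 3)}")
```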
The high-frequency oscillation is excited in the Z-axis direction mainly due to the gravity of the lumped mass and the high towing speed; the dynamic response error in the Z-axis direction could be decreased by reducing the response frequency, for example by reducing the time step, adopting a longer cable, or towing at a lower speed. The vibrational periods of the responses by HFML and LS-DYNA are 0.618 s and 0.665 s, respectively. They are slightly different from the theoretical period of the stretching vibration of the cable-towed system, 0.636 s. The two simulation periods differ by about 7%. Figure 8d-f illustrates that the velocity responses in the X-axis and Y-axis directions coincide with each other well. A slight error also occurs in the velocity oscillation in the Z-axis direction. For the 180° U-turn towing process, Figure 9 illustrates the configurations of the cable calculated by HFML and LS-DYNA. The maximum dynamic strain is 9.75%. The simulated configurations by HFML and LS-DYNA fit each other perfectly. The comparisons show that HFML using the logarithmic strain representation can accurately predict the large-strain dynamic configurations of the flexible cable.
Experimental Validation of Circularly Towed System in Air
In this section, the proposed HFML will be validated by the towed cable-payload experiments of Williams et al. [39,40]. The aerodynamic resistance effect is taken into consideration due to the relative movement of towed system and air. In the experiments, the nylon cable with an attached mass is towed by a ceiling fan, in which the cable length l = 3.0 m, radius r = 7 × 10 −4 m, density ρ = 1195.3 kg/m 3 and Young's modulus E = 1.0 GPa. The blade length of the ceiling fan is l b = 0.645 m. The mass of the towed body is gradually increased from m = 0~10 g. The acceleration of gravity is set as g = 9.8 m/s 2 . For different towed bodies, Williams et al. [39,40] measured the orbital radius of the cable tail in a steady state with the towed angular velocity ω = 7.54 rad/s.
The equivalent description is shown in Figure 10. HFML is applied in this section to model the rotation processes of the towed payload systems, in which the cable is modeled by 20 uniform elements; the drag coefficient of the cylindrical cable is chosen as Cd = 1.2 for towed mass m ≤ 5 g and Cd = 1.72 for m > 5 g according to Sun [26]. The time step for HFML is ∆t = 5 × 10−4 s and the terminal calculating time depends on whether the orbital radius of the cable tail remains constant.
Since Williams et al. [39,40] reported only that the circularly towed system achieved a reasonably stable motion state and did not provide data on the towing process itself, the turning radius of the free end at the stable motion state measured in their experiments is retraced by the proposed HFML in this section. The time needed for the towed system to reach a steady towed state ranges from 10 s to 160 s as the towed mass varies from 0 g to 14 g. Figure 11 shows the variations of the stable R_tail for different towed masses retraced by HFML together with the experimental data [39,40]. The predictions of HFML agree well with the experimental data, showing that HFML can effectively model the aerodynamic resistance. Moreover, the calculation time of HFML is only several minutes, which is far more efficient than an experimental study of a towed system moving in the medium. Meanwhile, the proposed HFML is much more suitable for flexible cable analyses, since it remains convergent even when the towed payload mass reaches 14 g, whereas the lumped mass method presented by Williams et al. [39,40] becomes non-convergent when the mass m ≥ 11 g.
Figure 11. Curves of stable R_tail with different towed mass.
The time costs of the presented HFML and LS-DYNA are compared. In the LS-DYNA simulation, the cable is modeled by 10 link 160 elements, a mass element is applied to model the 14 g towed mass, and a 6 m × 6 m × 3.8 m air domain is modeled and divided into 136,800 elements with an element size of 0.1 m; the material parameters are set to be the same as those of the experimental tether. The simulation is carried out on an Intel (R) Core (TM) Quad i5-3470 CPU @ 3.20 GHz with 16 GB RAM. For a simulation up to 10 s, the presented HFML compiled in the Visual C++ 6.0 environment takes about 40 s, while LS-DYNA costs about 2 hours and 25 minutes; about 99.55% of the computational time is saved by the proposed HFML. The comparison of the calculation efficiency with LS-DYNA shows that HFML is more efficient and well suited for rigid-flexible-liquid coupling dynamic problems.
Conclusions
A robust Hamiltonian finite element method, HFML, is presented in this paper to analyze the spatial dynamics of the towed cable-payload system, which is a rigid-flexible-liquid coupling problem; logarithmic strain is applied to model the cable under large deformation. A direct integral method is derived to calculate the aerodynamic effect, in which the traditional coordinate transformation is avoided, and the fourth-order closed Newton-Cotes formula is applied to evaluate the integration of Morison's equation. The proposed HFML can exactly conserve the system energy and accurately predict the dynamic responses of the towed cable-payload system. It is more accurate for towed dynamic problems with large deformation than the existing HMSS of Ding et al., in which the dynamic equations are derived based on small deformation theory. Furthermore, the efficiency of HFML in treating the rigid-flexible-liquid coupling dynamic problem is much higher than that of LS-DYNA, since the large number of medium elements required in LS-DYNA simulations is not needed. The proposed HFML could be further applied to the technical improvement of towed-payload systems in more engineering fields.
Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request. | 2021-10-24T15:15:05.535Z | 2021-10-21T00:00:00.000 | {
"year": 2021,
"sha1": "b6b1b0b4f52086b00ca2e755d28f0b65787fd41d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1312/9/11/1157/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c9e64ccf78e7ef8e41832a46d2a25e63c89b08d6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
156055418 | pes2o/s2orc | v3-fos-license | Practices of Care to HIV-Infected Children: Current Situation in Cameroon
Background: To accelerate access to pediatric HIV care in Cameroon, operational challenges in implementing HIV pediatric care need to be identified. The aim of this study was to assess the knowledge, attitudes, and practices of health care workers regarding pediatric HIV infection in Cameroon. Methods: A descriptive cross-sectional study was conducted over a 4-month period (April to August 2014) in 12 health facilities in 7 regions of Cameroon selected using systematic random sampling. Data were collected from interviews with health care providers and managers using standardized self-administered questionnaires and stored in the ACCESS software. Results: In total, 103 health care providers were included in this study, of which 59 (57.3%) were health workers and 44 (42.7%) community agents. Most of the health workers in charge of HIV pediatric care were nurses, requiring effective medical task shifting that was institutionalized in Cameroon. The knowledge of health care providers in relation to pediatric HIV care was acceptable. Indications for prescription of test for early infant diagnosis were known (96.1%), but their attitudes and practices regarding initiating antiretroviral therapy (ART) in infants less than 2 years (5.2%) and first-line ART protocols (25.4%) were insufficient, due to little information about standard procedures. Conclusion: Capacity building of health care providers and large-scale dissemination of normative national documents are imperative to improve HIV pediatric care in the health care facilities.
Introduction
Human immunodeficiency virus (HIV) infection remains a major concern in the world despite the progress recorded in the fields of Prevention of Mother To Child Transmission of HIV (PMTCT) and antiretroviral therapy (ART). 1 In 2012, Africa was the continent most affected by this infection accounting for 91% of HIV-infected children aged less than 15 years. 2 In 2006, the proven effectiveness of early initiation of ART in the first year of life on reducing HIV-related mortality in infants led the World Health Organization (WHO) to recommend routine ART initiation of all HIV-infected infants in 2013. 1 In 2015, this recommendation was, respectively, extended to the children greater than 5 years old and then to every HIV-infected person, irrespective of clinical and immunological stage. [3][4][5][6] Moreover, the strategic United Nations AIDS (UNAIDS) 90-90-90 treatment targets aim to have 90% of people living with HIV (PLHIV) knowing their status, 90% ART coverage of PLHIV, and 90% of viral suppression for PLHIV on ART by 2020. 7 These rapidly changing treatment guidelines may contribute to provider confusion and limit knowledge of pediatric HIV care.
In resource-limited settings, the mean age at ART initiation was often late, varying between 4 and 9 years, after severe immunodeficiency had set in for 70% of cases. [8][9][10] In 2012, there was a substantial gap between treatment need and access to ART (34%) among the pediatric population, compared with adults (64%) in the 22 priority countries of the Global Plan. [2][3][4][5][6][8][9][10] This worrisome situation required appropriate solutions and inspired the recommendations of the 2013 WHO pediatric ART guidelines. The World Health Organization's new recommendations raised a new challenge for early infant diagnosis (EID) and access to ART before the age of 24 months. In 2012, only 39% of HIV-exposed infants had access to EID services within the first 2 months of life. 11 Few health facilities developed activities for the care of children to the extent they did for adults. Health care teams, in insufficient numbers, were not trained enough in the field of pediatric HIV. 12,13 In 2012, a small proportion of children in need of ART (15%) were actually on ART, compared with adults (49%) in Cameroon. This gap could be related to poor knowledge of pediatric HIV by health care providers in Cameroon. To accelerate access to pediatric HIV care in Cameroon, it seemed sensible to assess care practices in the health facilities. The aim of this study, therefore, was to assess the knowledge, attitudes, and practices of Cameroonian health care providers on pediatric HIV care.
Study design
A descriptive cross-sectional study was conducted from April 18 till August 19, 2014 (4 months) in 12 health care facilities offering pediatric HIV care in 7 regions of Cameroon (Center, East, Littoral, North, North-West, West, and South-West).
Selection of the 12 health care facilities
This was based on the assumption that health facilities located in the same environment and covering pediatric populations of similar sizes had the same type of organization and faced similar difficulties. Health care facilities were selected by using a combination of sampling techniques: stratified and systematic random sampling. The following variables were considered for sample stratification: urban or rural location and weight of health care facilities. The latter was based on the number of HIV-infected children currently in care and organized in 5 categories: category I (<50 children), category II (between 50 and 150 children), category III (between 150 and 250 children), category IV (between 250 and 350 children), and category V (>350 children). Using the above criteria, the 137 health facilities caring for HIV-infected children in Cameroon were subsequently organized in 4 groups for simplification of sampling: group 1, categories I and II from urban setting (56 health facilities, n = 1728 children, 32%); group 2, categories III to V from urban setting (5 health facilities, n = 2069 children, 39%); group 3, categories I and II from rural setting (74 health facilities, n = 1277 children, 23%); and group 4, categories III to V from rural setting (2 health facilities, n = 324 children, 6%). In each group, health facilities were classified in increasing order based on the number of HIV-infected children currently in care. The sampling interval corresponded to the total number of children in care divided by the number of health facilities to be selected. The latter was obtained by applying the percentage of children followed in each group to 12 health facilities: group 1 (n1 = 4), group 2 (n1 = 4), group 3 (n1 = 2), and group 4 (n1 = 1). The selection starting point was determined by randomly drawing a number between 1 and the sampling interval; the first health facility selected was the one whose accumulated number of HIV-infected children currently in care reached this value. Finally, the 12 selected health care facilities included the following: 6 Approved Treatment Centers (ATC) belonging to 4 regional hospitals (Bertoua, Garoua, Bafoussam, and Bamenda) and 2 central hospitals (Laquintinie Hospital, Douala, and "Mother and Child Center of the Chantal Biya Foundation," Yaoundé); 6 treatment units (TU) belonging to 1 regional hospital (South-West regional annex, Limbe) and 3 district hospitals (DH) in Littoral, South-West and North-West regions ("Cité des palmiers" DH, Douala, Kumba DH, Batibo DH), and 2 private hospitals in North-West region (Shisong Catholic Hospital and Mbingo Baptist Hospital, Bamenda).
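As a hedged illustration of the systematic, probability-proportional-to-size selection described above, the following Python sketch draws a random starting point within the sampling interval and walks the cumulative child counts; the facility names, counts and group size are hypothetical placeholders, not the study's data.

```python
import random

# Hypothetical group: facilities with their number of HIV-infected children in care,
# already sorted in increasing order as described in the text.
facilities = [("F01", 12), ("F02", 25), ("F03", 40), ("F04", 70),
              ("F05", 110), ("F06", 150), ("F07", 230), ("F08", 320)]
n_to_select = 3                                    # facilities allocated to this group

total_children = sum(count for _, count in facilities)
interval = total_children / n_to_select            # sampling interval
start = random.uniform(1, interval)                # random starting point in (1, interval)
targets = [start + k * interval for k in range(n_to_select)]

selected, cumulative, t_idx = [], 0, 0
for name, count in facilities:
    cumulative += count                            # accumulated number of children in care
    while t_idx < len(targets) and cumulative >= targets[t_idx]:
        selected.append(name)                      # facility whose cumulative count reaches the target
        t_idx += 1

print(selected)
```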
Study population
Among selected health care facilities, the study population consisted of health care providers in charge of pediatric HIV care including physicians, nurses, psychosocial agents (PSA), community agents, pharmacy clerks, pharmacists, and psychologists who provided written informed consent.
Data Collection
Two standardized and anonymous auto-questionnaires were developed and validated during a study preparation meeting including the main investigators and investigators from the Division of Operational Research in Health (DORH) of the Ministry of Public Health and from the Center Pasteur du Cameroon (CPC): (1) the "operational level 2" auto-questionnaire administered to the health workers in charge of HIV pediatric care and (2) the "operational level 3" auto-questionnaire administered to community agents in charge of HIV pediatric care. During data collection, an interviewer was always available to guide the health care providers if necessary.
Procedures
The data were collected by 12 interviewers during a 4-month period (April to August 2014) and were entered in an "ACCESS" database by a data administrator of the CPC Epidemiology and Public Health Service. The auto-questionnaires were distributed to the staff having consented to participate in the study. The completed auto-questionnaires were validated by the main investigator before data entry.
Ethical Considerations
The
Variables
Using the auto-questionnaires administered to the staff in charge of HIV pediatric care in ATC/TU, the following data were collected: socio-demographic and occupational characteristics (age, sex, function in the health facility, initial training level, number of years at current post), type of services offered from pediatric HIV screening to HIV pediatric care, working conditions, knowledge, attitudes, and practices regarding pediatric HIV care (ART initiation; psychological, social, and nutritional support; HIV status disclosure; and transition to adult service).
Data Analysis
The data were described using frequency and corresponding proportions for categorical variables or medians with interquartile ranges (IQRs) for quantitative variables. The following proportions in relation to health care providers in charge of HIV pediatric care were estimated: knowledge on HIV infection (modes of HIV transmission to children, HIV diagnosis in children, principles of ART initiation, and prevention of opportunistic infections), counseling for HIV screening, and searching for HIV-infected children lost to follow-up. This descriptive analysis was conducted using R software, version 3.4.3.
Study population
In total, 103 health care providers from 12 health care facilities participated in this study, of whom 59 (57.3%) were health workers and 44 (42.7%) community agents (Table 1). Counseling for HIV screening in children was reportedly practiced by 92.2% of health workers. Respondents reported the main indications of HIV screening to be HIV-exposed infants and siblings of HIV-infected children (85.4%), hospitalization for severe disease (45.8%), and acute severe malnutrition (ASM; 39.0%). In case of refusal of HIV screening by a parent, a discussion with the second parent, with consent from the first parent, was proposed by 47.6% of the health workers, while a minority (1.7%) reported that they would test the child for HIV without the consent of either parent. About 15.5% of participants reportedly did not carry out any specific activity to identify HIV-exposed infants in the labor room. Medical consultation for HIV-exposed or infected children was reportedly practiced by 76.7% of the health workers. The frequency of HIV-exposed infants' visits was reported to be monthly (50.0%) or adjusted to the Expanded Program of Immunization (EPI) schedule (32.0%). Among the 47 health workers who implemented specific activities to track HIV-exposed infants, 18.4% of them reportedly used telephone calls (Table 2).

Administration of co-trimoxazole in HIV-exposed infants

The initial ART regimen was prescribed by the health workers without seeking a second opinion (5.1%) or after preliminary validation by the adult therapeutic committee (52.5%), the pediatric therapeutic committee (35.6%), or the closest ATC/TU therapeutic committee (10.2%).
Follow-up of children on ART. On ART initiation, most of the respondents reportedly followed up the children monthly (62.7%), while the rest reported quarterly (20.3%), every 6 months (15.3%), or in case of disease (13.6%) follow-up. The CD4 count was prescribed every 6 months (83.1%) or not at all (8.5%). Viral load exam as a follow-up biomarker was not requested by 40.7% of the health workers. In health facilities offering therapeutic education (50.5%), 34.0% of the health
Discussion
This is the first study aiming to assess the practices of pediatric HIV care in Cameroon. Most of the health workers in charge of pediatric HIV care were nurses, requiring effective medical task shifting, whose efficacy has been proven in Malawi 13 and which was institutionalized in Cameroon in 2013. The median seniority at the current post was short (4.4 years [IQR: 2.0-8.0]), showing a high turnover of health care providers in the health care facilities. This situation could prevent health facilities from achieving their goals of care. The knowledge, attitudes, and practices of health care providers were satisfactory in certain aspects of pediatric HIV care but need to be strengthened. Many facility entry points (maternity, some pathological situations, sexual abuse) which would increase the chances of finding pediatric HIV cases were not mentioned by the health care providers interviewed, thus portraying some hesitation to propose HIV screening to families with unknown HIV status. Counseling for HIV screening introduced by the health care providers at the entry points of pediatric care is efficient in improving access to health care. 14 Knowledge on many aspects of curative care (ART regimens, management of HIV-tuberculosis co-infection and severe acute malnutrition, and HIV co-morbidities) was limited, translating to a lack of capacity. A little more than a third of health care providers did not receive capacity building in pediatric HIV care. According to 15.5% of health care providers interviewed, there was no specific activity useful in identifying HIV-exposed babies in the labor room. This observation could partly explain the large proportion of HIV-exposed infants lost to follow-up between the maternity and ATC/TU services in the health care facilities offering EID of HIV. Clear directives must be developed to ensure the continuum of care between these services. 15 Only 32.0% of health workers coupled HIV-exposed infants' medical appointments with the EPI vaccination calendar. Nevertheless, this strategy has been strongly recommended in Cameroon since 2008, and studies have shown it to be effective in facilitating the integration of services, reducing risks of stigmatization and decreasing loss to follow-up. 16 Therapeutic education activities remained weakly implemented in the health care facilities offering pediatric HIV care. This situation reflects the difficulty of health care providers in addressing frequent adherence issues among children on ART in resource-limited settings. [17][18][19] Capacity building of health care providers is imperative to improve psychological and social support of HIV-infected children.
This study was conducted in only 12 of the 137 health centers in Cameroon, located in 7 of the 10 regions of the country. This could pose a problem of representativeness and generalizability of the results despite random sampling. An observational phase of practices could help minimize the potential prevarication or social desirability biases that can affect Knowledge, Attitudes, Practices (KAP) surveys. Notwithstanding the above, the findings of this study reflect the management practices of most health facilities in Cameroon, because the 12 health facilities included in the study cared for approximately 43% (2257/5300) of the HIV-infected children on ART in Cameroon, and could be used for the operational plan to accelerate the management of pediatric HIV care.
Conclusions
The knowledge of health care providers of pediatric HIV care was acceptable, but their attitudes and practices were insufficient due to little information concerning the standard procedures. Capacity building of health care providers and large-scale dissemination of normative national documents are imperative to improve HIV pediatric care in health care facilities. | 2019-05-21T13:04:17.350Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "a5822dee3072e2ab1ea24ee553d7ad11d0cf561b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/1179556519846110",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5822dee3072e2ab1ea24ee553d7ad11d0cf561b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270763621 | pes2o/s2orc | v3-fos-license | Stemness subtypes in lower-grade glioma with prognostic biomarkers, tumor microenvironment, and treatment response
Our research endeavors are directed towards unraveling the stem cell characteristics of lower-grade glioma patients, with the ultimate goal of formulating personalized treatment strategies. We computed enrichment stemness scores and performed consensus clustering to categorize phenotypes. Subsequently, we constructed a prognostic risk model using weighted gene correlation network analysis (WGCNA), random survival forest regression analysis as well as full subset regression analysis. To validate the expression differences of key genes, we employed experimental methods such as quantitative polymerase chain reaction (qPCR) and assessed cell line proliferation, migration, and invasion. Three subtypes were assigned to patients diagnosed with lower-grade glioma (LGG). Notably, Cluster 2 (C2), exhibiting the poorest survival outcomes, manifested characteristics indicative of an immunosuppressed subtype. This was marked by elevated levels of M1 macrophages and activated mast cells, along with higher immune and stromal scores. Four hub genes (CDCA8, ORC1, DLGAP5, and SMC4) were identified and validated through cell experiments and qPCR. Subsequently, these validated genes were utilized to construct a stemness risk signature, which revealed that LGG patients with lower scores were more inclined to demonstrate favorable responses to immune therapy. Our study illuminates the stemness characteristics of gliomas, which lays the foundation for developing therapeutic approaches targeting CSCs and enhancing the efficacy of current immunotherapies. By identifying the stemness subtypes and their correlation with prognosis and TME patterns in glioma patients, we aim to advance the development of personalized treatments, enhancing the ability to predict and improve overall patient prognosis.
LGG tissue specimen collection
Between December 2021 and June 2022, we obtained five LGG tissue specimens and six normal peritumoral tissue specimens from the Department of Neurosurgery at Zhongnan Hospital of Wuhan University. All enrolled patients had a verified LGG diagnosis through pathology and gave informed consent, and the study received ethical approval from the committee.
LGG cohorts consolidation and preprocessing
We gathered 644 LGG samples in aggregate from three different cohorts, namely TCGA-LGG, CGGA, and GSE43378 (grade II and grade III), along with their corresponding clinical and survival annotations. To ensure sample comparability, we acquired the TCGA-LGG data from the Cancer Genome Atlas (TCGA) database (https://genomecancer.ucsc.edu/) and transformed it into transcripts per million (TPM) form, accounting for differences in gene length 16. As for the remaining five glioma-associated data cohorts, CGGA [17][18][19][20], GSE43378 21, Gravendeel 22, GSE107850 23 and Rembrandt 24, we retrieved them from the GlioVis database (http://gliovis.bioinfo.cnio.es/) 25, with the samples belonging to high-grade gliomas excluded, and preprocessed them via the Robust Multichip Average algorithm 26. The sva package (v3.44.0), with its ComBat function, was applied to mitigate the batch effect caused by different sequencing platforms, equipment, and reagents 27.
Collection and grouping of stem cell characteristics of LGG stem cell subtypes
To comprehensively identify LGG stemness subtypes, we gathered 26 stem cell-related gene sets from the online tool StemChecker, accessible at http://stemchecker.sysbiolab.eu/ 28. Subsequently, the enrichment scores of these stem cell-related gene sets were calculated for all LGG samples with the ssGSEA algorithm implemented in the GSVA (Gene Set Variation Analysis) R package (v1.44.5) 29. The resulting ssGSEA scores were then subjected to unsupervised consensus clustering of the LGG samples with the ConsensusClusterPlus R package (v1.60.0) 30, using the K-means clustering method based on Euclidean distance. To ensure accuracy and reliability, the clustering analysis was repeated 1000 times. Finally, the ConsensusClusterPlus and factoextra R packages 31 were used to determine the optimal number of clusters.
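The clustering itself was done with the ConsensusClusterPlus R package; as a hedged, simplified Python analogue of that workflow, the sketch below builds a consensus matrix from repeated k-means runs on subsampled ssGSEA score matrices and then cuts it into three clusters. The score matrix and all parameters are illustrative placeholders, not the study's actual data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def consensus_matrix(scores, k, n_reps=100, subsample=0.8, seed=0):
    """Consensus (co-clustering frequency) matrix from repeated k-means on subsampled rows.

    scores: (n_samples, n_gene_sets) array of ssGSEA enrichment scores.
    """
    rng = np.random.default_rng(seed)
    n = scores.shape[0]
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for _ in range(n_reps):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        km = KMeans(n_clusters=k, n_init=10, random_state=int(rng.integers(1_000_000)))
        labels = km.fit_predict(scores[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        sampled[np.ix_(idx, idx)] += 1.0
        together[np.ix_(idx, idx)] += same
    return np.divide(together, sampled, out=np.zeros_like(together), where=sampled > 0)

# Toy example: 60 samples x 26 hypothetical stemness gene-set scores
scores = np.random.default_rng(1).normal(size=(60, 26))
cons = consensus_matrix(scores, k=3)
dist = 1.0 - cons
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=3, criterion="maxclust")   # final cluster assignment from the consensus matrix
```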
Tumor microenvironment (TME) infiltration exploration
To thoroughly explore TME infiltration in the LGG samples, the well-known CIBERSORT deconvolution algorithm 32 was utilized to assess the relative proportions of common immune cells based on the standardized gene expression profiles, with the "CIBERSORT" R package run with 1000 permutations and the LM22 leukocyte gene signature. In this way, the TME fractions in each LGG sample were quantified, and samples were screened with P < 0.05 as the cut-off of the CIBERSORT analysis to ensure accuracy and dependability. Moreover, the ESTIMATE algorithm 33 was leveraged to estimate the stromal and immune scores for the research dataset. This was accomplished through the R package estimate, which infers the immune and stromal levels of a sample from RNA-seq data and then evaluates tumor purity.
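CIBERSORT itself uses ν-support vector regression against the LM22 signature; purely as a hedged illustration of the underlying deconvolution idea, the sketch below estimates relative cell-type fractions for one bulk sample by non-negative least squares against a hypothetical reference matrix. All data here are simulated placeholders, not LM22.

```python
import numpy as np
from scipy.optimize import nnls

# signature: (n_genes, n_cell_types) reference expression (an LM22-like matrix in spirit),
# bulk:      (n_genes,) mixture expression for one sample -- both hypothetical here.
rng = np.random.default_rng(0)
signature = rng.gamma(2.0, 1.0, size=(200, 5))
true_frac = np.array([0.4, 0.3, 0.2, 0.05, 0.05])
bulk = signature @ true_frac + rng.normal(0, 0.05, size=200)

coef, _ = nnls(signature, bulk)          # non-negative least squares fit
fractions = coef / coef.sum()            # normalise to relative proportions
print(np.round(fractions, 3))            # should roughly recover true_frac
```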
Predicting immunotherapy response and chemotherapeutic sensitivity
To predict chemosensitivity in LGG samples, we used oncoPredict (v0.2) 34, an R package based on the Genomics of Drug Sensitivity in Cancer (GDSC) resource (https://www.cancerRxgene.org/) 35, to compute the half-maximal inhibitory concentration (IC50) of various chemotherapeutic agents using ridge regression 36. To assess prediction accuracy, tenfold cross-validation was employed. Additionally, we estimated immunotherapeutic responses in LGG patients via Tumor Immune Dysfunction and Exclusion (TIDE), a web-based research tool (http://tide.dfci.harvard.edu/).
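oncoPredict is an R package; the following Python sketch only illustrates the general ridge-regression imputation idea it relies on — fitting expression-to-ln(IC50) models on cell-line training data and applying them to tumour profiles, with ten-fold cross-validation. All matrices here are simulated placeholders rather than GDSC data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical training data: cell-line expression (rows) with measured ln(IC50),
# plus tumour expression for which drug response is to be imputed.
rng = np.random.default_rng(0)
X_cell_lines = rng.normal(size=(300, 500))                      # 300 cell lines x 500 genes
ln_ic50 = X_cell_lines[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=300)
X_tumours = rng.normal(size=(644, 500))                         # 644 tumour samples

model = Ridge(alpha=10.0)                                       # ridge regression on expression
cv_r2 = cross_val_score(model, X_cell_lines, ln_ic50, cv=10).mean()   # ten-fold cross-validation
model.fit(X_cell_lines, ln_ic50)
predicted_ln_ic50 = model.predict(X_tumours)                    # imputed ln(IC50) per tumour sample
print(f"mean cross-validated R^2 on the toy data: {cv_r2:.2f}")
```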
WGCNA for key module identification
To construct an expression matrix from the TCGA-LGG data, we used the goodSamplesGenes method and the sample-network approach, applying a cut-off of Z.ku = 2.5, where Z.ku = (ku - mean(k)) / sqrt(var(k)) 37. Using the WGCNA R package (v1.71), we generated co-expression networks. Via the branch-cutting approach, all genes were split into gene modules, with minClusterSize = 30 and deepSplit = 2 set as the crucial parameters 38. We selected modules with high correlation (greater than 0.6) by assessing the variances in module eigengenes. To identify key modules associated with the stemness clusters (the selected trait being cluster C1 or cluster C2), we evaluated gene significance (GS) and calculated module membership (MM) based on the average GS of all genes. Finally, we chose the module considered the most pertinent, as described above.
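As a hedged illustration of the sample-network QC step described above, the sketch below computes the standardised connectivity Z.ku = (ku − mean(k)) / sqrt(var(k)) from a correlation-based sample adjacency and flags low-connectivity samples at the 2.5 cut-off. The expression matrix is a random placeholder, and the adjacency definition is a common WGCNA convention rather than the authors' exact settings.

```python
import numpy as np

def sample_connectivity_z(expr):
    """Standardised sample connectivity Z.ku used in WGCNA-style sample-network QC.

    expr: (n_samples, n_genes) expression matrix.
    """
    corr = np.corrcoef(expr)              # Pearson correlation between samples
    adjacency = (1.0 + corr) / 2.0        # map correlations to [0, 1]
    np.fill_diagonal(adjacency, 0.0)
    k = adjacency.sum(axis=1)             # connectivity ku of each sample
    return (k - k.mean()) / np.sqrt(k.var())

expr = np.random.default_rng(0).normal(size=(50, 1000))   # hypothetical 50 samples x 1000 genes
z_ku = sample_connectivity_z(expr)
outliers = np.where(z_ku < -2.5)[0]       # samples with unusually low connectivity are typically removed
print(outliers)
```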
Calculation of the mRNA stemness index (mRNAsi)
We used the OCLR algorithm, a machine-learning method based on one-class logistic regression trained with embryonic stem cells and their differentiated progenitor cells as the training samples, to evaluate the mRNAsi of each LGG sample, following the methodology proposed by Malta et al. 39. This method is effective in predicting the stemness of cancer, and the OCLR approach has been previously used for this purpose.
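A hedged sketch of how such an OCLR-derived index is typically applied (Malta et al.-style scoring): the Spearman correlation between a pre-trained signature weight vector and each sample's expression is rescaled to [0, 1] across the cohort. The weights and expression values below are hypothetical; the real mRNAsi uses the published OCLR signature.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def mrnasi(expr, weights):
    """Apply a pre-trained OCLR stemness signature to a cohort.

    expr:    DataFrame of log-scale expression, genes (rows) x samples (columns).
    weights: Series of signature weights indexed by gene symbol (assumed pre-trained).
    Returns a Series of stemness indices linearly rescaled to [0, 1] across the cohort.
    """
    common = expr.index.intersection(weights.index)
    w = weights.loc[common]
    raw = expr.loc[common].apply(lambda col: spearmanr(col, w)[0], axis=0)
    return (raw - raw.min()) / (raw.max() - raw.min())

# Toy illustration with hypothetical genes, samples and weights
genes = [f"gene_{i}" for i in range(500)]
expr = pd.DataFrame(np.random.default_rng(0).normal(size=(500, 30)),
                    index=genes, columns=[f"LGG_{j}" for j in range(30)])
weights = pd.Series(np.random.default_rng(1).normal(size=500), index=genes)
stemness_index = mrnasi(expr, weights)
```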
Development and validation of a prognostic stemness model
We used univariate Cox regression analysis (P < 0.05) to identify hub genes that were statistically associated with overall survival (OS) in the training dataset (TCGA-LGG, CGGA and GSE43378) 40. The randomForestSRC R package (v3.1.1) was utilized to construct a random survival forest prognosis model to narrow the range of key genes 41; the criterion for inclusion of genes in the next analysis was a relative importance greater than 0.25. After listing all the combinations, we obtained four genes through full subset regression with the R package leaps (v3.1) and constructed a signature named the stemness-risk score. The risk scores were calculated following the formula: stemness risk score = Σ(i = 1 to n) coef_i × Exp_i, where n represents the number of genes in the signature, coef_i is the regression coefficient of gene i, and Exp_i is the TPM expression value of the corresponding gene. We verified the accuracy of survival evaluation via the ROC curve of the stemness risk score in MMD1 (Gravendeel, GSE107850 and Rembrandt) 42. We also drew the Kaplan-Meier curves of the three training datasets (TCGA-LGG, CGGA, GSE43378) separately. We also verified the protein levels of each gene, which were retrieved from the Human Protein Atlas (HPA) database 43.
Nomogram construction and verification
The nomogram was developed based on the prognostic stemness model, and cross-validation was performed to prevent overfitting 44. The accuracy of the nomogram was assessed through examination of a calibration curve, in which the 45° line represents the highest prediction potential. The feasibility and clinical value of the nomogram were validated with decision curve analysis, performed using the R package rmda (v1.6) 45,46.
Hub gene functional exploration
According to the differences in hub gene expression, we conducted an enrichment analysis to determine the GO terms and KEGG pathways significantly enriched in the C1 and C2 groups of LGG. The c2.cp.kegg.v7.4.symbols.gmt gene set was used for annotation, and pathways were considered significantly enriched if they met the following criteria: a gene size (n) of 20%, FDR of 25%, |ES| > 6, and P < 0.05; KEGG and GO pathways meeting this standard were used in the analysis 47. To investigate the differences in pathways and marker gene sets between the low and high stemness-risk score groups of LGG patients, all 644 patients were first divided into two groups based on their stemness-risk scores, with the median value used as the threshold. Next, we conducted gene set enrichment analysis (GSEA) through the clusterProfiler R package 48.
Transcription-level expression validation by RT-qPCR
RT-qPCR is a molecular biology technique used to measure the expression level of RNA 49. In this technique, RNA is initially reverse transcribed into complementary DNA (cDNA) by the reverse transcriptase enzyme with the SuperRT RT reagent Kit (Vazyme, China), and the cDNA is then amplified using quantitative polymerase chain reaction (qPCR). Real-time monitoring of the amplification process is carried out using fluorescent dyes, enabling quantification of the initial amount of RNA. The 2^(-ΔΔCt) method is commonly used to analyze the data generated from RT-qPCR, where Ct values are normalized to a housekeeping gene and expressed relative to a control sample. In this study, RNA was extracted from both LGG tumors and adjacent normal peritumoral tissues using RNAiso Plus (Takara, Japan). The quality of the RNA was evaluated before performing RT-qPCR with specific primers and commercial kits 50. The primer sequences are shown in Supplementary Table S1.
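A minimal worked example of the 2^(−ΔΔCt) calculation described above, with made-up Ct values for a hub gene and a housekeeping gene in tumour versus peritumoral tissue:

```python
# Hypothetical Ct values (not measured data)
ct_target_tumour, ct_ref_tumour = 24.1, 18.3     # hub gene and housekeeping gene in LGG tissue
ct_target_normal, ct_ref_normal = 27.6, 18.5     # same genes in peritumoral tissue

d_ct_tumour = ct_target_tumour - ct_ref_tumour   # normalise to the housekeeping gene
d_ct_normal = ct_target_normal - ct_ref_normal
dd_ct = d_ct_tumour - d_ct_normal                # tumour relative to the control sample
fold_change = 2 ** (-dd_ct)
print(f"fold change (tumour vs normal): {fold_change:.2f}")   # ~9.8-fold higher in this example
```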
Cell transfection
For transient transfection of siRNAs into glioma cells, the RNATransMate system (Sangon Biotech, China) was employed following the manufacturer's instructions. The siRNAs were chemically synthesized by Sangon Biotech (China). The siRNA sequences for these genes are shown in Supplementary Table S2.
Lentiviral infection of glioma cells
We used a 6-well plate under a biosafety cabinet to seed well-conditioned HS683 cells in the logarithmic growth phase and continuously cultured them in an incubator until the cell density reached approximately 50%, at which point the infection was performed. Fresh culture medium containing polybrene was prepared in advance. Then, the IDH R132H mutant lentivirus purchased from Heyuan company was used for the infection. At the beginning of the infection, the old medium in each well of the 6-well plate was aspirated, and the prepared polybrene-containing medium was slowly added along the sidewall of each well. The plate was incubated horizontally in the incubator for 24 h. The next day, the medium was replaced with polybrene-free medium, and the plate was incubated horizontally in the incubator for another 48 h. Finally, the optimal concentration of puromycin was used to select the target cells. A small amount of the selected target cell samples was used for protein extraction, and qPCR was performed to verify stable expression of the construct. After verification, the cells were expanded using puromycin-containing medium or used for subsequent experiments.
Knockdown of target genes in mutant strains
Under a biosafety cabinet, seed the well-conditioned, selected IDH1 mutant HS683 cells into a 6-well plate.
When the cell density reaches 50-60%, perform the transfection with the following system: Solution A consists of 250 μL Opti-MEM and 7.5 μL Lipomaster3000, while Solution B consists of 250 μL Opti-MEM and 4 μL siRNA. After preparation, gently mix with a pipette, add Solution B to Solution A dropwise, mix well, and let the mixture stand at room temperature for 10 min before adding it to the 6-well plate. Replace the medium 24 h later.
Cell counting kit-8 (CCK-8) assay
HS683 cells and HS683 IDH mutant cells were seeded in 96-well plates at a density of 10^4 cells per well. The control and siRNA groups of the HS683 cell lines were cultured for 24, 48, and 72 h, respectively, with each group having three duplicate wells. Absorbance at 450 nm was measured after incubation for 1-2 h at 37 °C and 5% CO2.
Wound-healing assay
After a 6-h period post-transfection, cells underwent harvesting, centrifugation, and resuspension in serum-free culture medium. Adjusting the cell density to 5 × 10^5 cells per well, a scratch was introduced into the cell layers when they reached 90% confluence using a 200 μl sterile pipette tip. Following removal of the cell culture medium, suspended cells, and debris, each well was refilled with serum-free medium and left to incubate for 24-48 h. Subsequent observations and photographic documentation focused on the cell migration area. The scratch test was then employed to evaluate differences in cell healing ability based on the migration area.
Transwell assay
Matrigel (Corning, USA) was thawed overnight at 4 °C. Subsequently, 100 μl of diluted Matrigel was added to the chamber. The upper chamber received 200 μl of serum-free medium, while the lower chambers were supplemented with 500 μl of 10% FBS DMEM. A total of 3 × 10^4 harvested cells were seeded in the upper chambers and incubated for an additional 48 h. Following incubation, the invading chamber was removed, and cells on the polycarbonate membrane were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. Three random fields were selected, and the count of invaded cells was conducted under a microscope. The experiments were replicated in triplicate.
Statistical analysis
All analyses were performed with GraphPad Prism 5 software and R software (version 4.2.0). Differential expression was evaluated using the Wilcoxon signed-rank test, and Student's t-test was used to analyze continuous variables. The statistical significance cut-off was set at 0.05.
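For illustration only, the following Python snippet reproduces the two significance tests named above (Wilcoxon signed-rank and Student's t-test) on simulated paired expression values; the study itself used GraphPad Prism and R.

```python
import numpy as np
from scipy.stats import wilcoxon, ttest_ind

rng = np.random.default_rng(0)
tumour = rng.normal(6.0, 1.0, size=20)            # hypothetical hub-gene expression in tumour tissue
normal = tumour - rng.normal(1.5, 0.5, size=20)   # matched peritumoral values (hypothetical)

w_stat, w_p = wilcoxon(tumour, normal)            # paired non-parametric comparison
t_stat, t_p = ttest_ind(tumour, normal)           # Student's t-test for continuous variables
print(w_p < 0.05, t_p < 0.05)                     # 0.05 significance cut-off, as in the study
```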
Institutional review board statement
This study strictly followed the principles of the Declaration of Helsinki, and the collection of all normal and glioma samples used in the study was approved by the Ethics Committee of Zhongnan Hospital of Wuhan University (No. 2019048).
Informed consent statement
Patients who provided samples signed the informed consent form.
Stem cell landscape and characteristics of LGG
Figure 1 reflects the workflow of this study, illustrating the steps we took to construct and validate the stem cell-related risk prediction model and the associated influencing factors. In this study, we developed patterns of stemness enrichment and constructed signatures indicating the association between stemness and risk (Fig. 1). The scores reflecting stemness-related enrichment in the LGG samples were quantified using the ssGSEA algorithm, and 15 gene sets were screened out using univariate Cox analysis (P < 0.05) to depict a prognostic stemness network. Among these 15 gene sets, all but Hs iPsc Shats are risk factors for OS (Fig. 2A). LGG patients were categorized into three distinct clusters (Fig. 2B-E). Cluster 2 LGG patients had the worst OS, whereas Cluster 1 LGG patients had the best OS (log-rank P = 7.5e−10, Fig. 2F). Cluster 2 was enriched for stemness gene sets that were found to decrease patients' OS, such as Hs SC Shats, Hs SC Palmer, and HS ESC Bhattacharya. The enrichment of all gene sets with prognostic significance was lower in C1 than in C2, except for the protective gene set Hs iPsc Shats (Fig. 3A,B). This observation further corroborates the earlier conclusion. Additional understanding of the TME landscape was gained from ESTIMATE and CIBERSORT analyses, comparing TME components and stromal and immune scores (Fig. 3C,D). Within the three stemness clusters, Cluster 2 demonstrated an immunosuppressive subtype marked by elevated levels of M1 macrophages and activated mast cells, along with increased immune and stromal scores. On the other hand, Cluster 1 displayed higher antitumor TME components, such as naive B cells and plasma cells, and lower immune and stromal scores.
Molecular characteristics, sensitivity to chemotherapy and responsiveness to immunotherapy vary across stemness subtypes
Currently, the conventional treatment strategy for glioma patients is surgery plus systemic chemotherapy. To evaluate chemosensitivity, we used a prediction algorithm to estimate the IC50 values of several chemotherapeutic drugs and compared them among the stemness clusters. As shown in Fig. 4A, the IC50 estimates of Bortezomib, Daporinad, Topotecan, and Staurosporine in Cluster 2 were significantly lower, indicating that this subtype may be more sensitive to these drugs. Additionally, Cluster 3 was found to be more sensitive to temozolomide. Using the TIDE algorithm, we evaluated the immune response of the three subtypes. As the TME results suggest, Cluster 1 had higher levels of B cells and other killer cell infiltration, with a significantly higher proportion of patients expected to benefit from immunotherapy than in the other clusters. However, the activation of immunosuppressive mast cells and the decrease of M1 macrophages and CD8 T cells may be the reasons for the highest non-response ratio in Cluster 2. As shown in Fig. 4C, Cluster 1, which has the best prognosis, mostly consists of IDH-mutant cases, whereas Cluster 2, which has the worst prognosis, primarily consists of IDH-wildtype cases. This is consistent with previous research findings. Similarly, the 1p19q co-deletion, which often indicates a better prognosis, is also mostly found in Cluster 1. In contrast, Cluster 2, which has the worst prognosis, only rarely exhibits the 1p19q co-deletion.
Utilizing WGCNA to identify the module and hub genes associated with stemness Cluster
The difference in immune therapy response and survival outcomes between the C1 and C2 groups of LGG patients is the greatest. To identify characteristic genes of these subtypes in TCGA, we used WGCNA. The optimal soft-threshold power β was determined as 5, ensuring scale-free network construction with a scale-free fit R2 value of 0.9 (shown in Fig. 5A). We set a minimum of 300 genes for each module and observed that genes with comparable expression profiles formed eight modules in the clustering dendrogram (Fig. 5B). Among these modules, the green module exhibits a robust positive correlation with gene expression in the Cluster 2 subtype (ME = 0.76, P = 3e−121), while demonstrating the strongest negative correlation with the Cluster 1 subtype (ME = − 0.67, P = 1e−83) (Fig. 5C). The green module was therefore taken as the key module, and 1123 hub genes were identified for subsequent exploration on the basis of the qualification standards GS > 0.4 and MM > 0.8 (Fig. 5D).
The GO analysis showed enrichment in DNA-templated DNA replication, chromosome segregation, and DNA replication (Fig. 5E). Additionally, the KEGG analysis revealed that the green module was mainly involved in cell cycle regulation, DNA replication, and the Fanconi anemia pathway (Fig. 5F).
Constructing a prognostic stemness signature and validation
We performed univariate Cox regression analysis and identified 73 genes that were substantially linked with OS (P < 0.05). Next, we filtered out genes that were not important for OS using random survival forest analysis, resulting in the selection of 8 genes with relative importance > 0.25 (Fig. 6A). The ROC curve showed that the area under the curve (AUC) at three years was greater than 0.89, which indicated the high accuracy of the selected genes as prognostic markers (Fig. 6B). The 8 genes were then assembled into a set for full subset regression, and Fig. 6C shows that SMC4, CDCA8, DLGAP5, and ORC1 were selected. We developed this formula to calculate the risk value for each sample: Risk score = (0.065 * expression of SMC4) + (0.040 * expression of CDCA8) + (− 0.032 * expression of DLGAP5) + (− 0.047 * expression of ORC1). Using this formula, the stemness-risk score was calculated for each LGG patient. The ROC curve analysis showed that 1-year AUC = 0.73, 2-year AUC = 0.76, and 3-year AUC = 0.78 (MMD1) (Fig. 6D).
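Using the coefficients reported above, the stemness-risk score for any sample is a simple weighted sum of the four genes' expression values; the sketch below applies the published coefficients to hypothetical TPM values (the expression numbers are placeholders).

```python
import pandas as pd

# Coefficients reported in the text for the four-gene stemness signature
coefs = {"SMC4": 0.065, "CDCA8": 0.040, "DLGAP5": -0.032, "ORC1": -0.047}

# Hypothetical TPM expression of the four hub genes for three LGG samples
expr = pd.DataFrame({"SMC4": [12.4, 30.1, 8.9],
                     "CDCA8": [3.2, 9.8, 1.1],
                     "DLGAP5": [5.5, 2.0, 7.3],
                     "ORC1": [4.1, 1.2, 6.6]},
                    index=["LGG_1", "LGG_2", "LGG_3"])

risk_score = sum(expr[gene] * coef for gene, coef in coefs.items())
high_risk = risk_score > risk_score.median()     # patients split at the median score
print(risk_score.round(3))
```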
Correlation between risk characteristics and tumor microenvironment
The heatmap (Fig. 7A) and Spearman correlation diagram (Fig. 7B) showed that macrophages, naive B cells, monocytes, and activated mast cells had a higher degree of infiltration in the lower-risk group, indicating that this group may be more responsive to immunotherapy. Conversely, the high stemness-risk group was influenced by resting CD4 memory T cells, M2 macrophages, and M1 macrophages, indicating that immune function was suppressed. The immune score was significantly higher in the high-risk group compared to the lower-risk group. Figure 7C and D show the mean distribution of TME components in the two groups. The most important immune response in the lower stemness-risk group was T-cell regulation, suggesting differentiation of immune function toward an antitumor state, while the high stemness-risk group had higher levels of immunosuppressive cells, especially M2 macrophages.
Confirmation of the expression of hub genes
The results of qPCR indicated that ORC1 (P < 0.05; Fig. 8A), DLGAP5 (P < 0.05; Fig. 8B), SMC4 (P < 0.05; Fig. 8C), and CDCA8 (P < 0.05; Fig. 8D) were more highly expressed in glioma tissue than in normal tissue. We also investigated the Spearman correlations of the mRNA expression levels of the four hub genes and found that all four genes are strongly positively correlated, which is consistent with our research findings (Fig. 8E-J). Furthermore, we confirmed the translational expression level of the hub genes using the HPA database, and the prognostic biomarkers were found to be stained strongly or moderately: SMC4 (Fig. 9A), ORC1 (Fig. 9B), CDCA8 (Fig. 9C), and DLGAP5 (Fig. 9D), indicating that these hub genes were translated more in glioma samples.
Investigating the impact of hub genes on HS683 human glioma cell proliferation and migration
To ascertain the involvement of hub genes in HS683 human glioma cell proliferation and migration, we conducted additional experimental investigations focusing on the prognostic markers. Subsequently, we implemented loss-of-function experiments to silence the four hub genes in HS683 cells, aiming to elucidate the role of hub genes in LGG progression. The efficacy of siRNA knockdown in HS683 cells was validated using qRT-PCR (Fig. 10A). A significant reduction in cell viability within 72 h was observed through the CCK-8 assay after the knockdown of CDCA8, DLGAP5, ORC1, and SMC4 (Fig. 10B). Furthermore, transwell invasion and scratch assay results demonstrated a substantial decrease in HS683 cell migration and invasion following the knockdown of CDCA8, DLGAP5, ORC1, and SMC4 (Fig. 11). After verifying the IDH1 mutation in the HS683 cell line through qPCR, the aforementioned in vitro experiments were also validated in the IDH-mutant HS683 cell line (Figs. 12, 13). Whether in IDH-mutant or IDH-wildtype HS683 cell lines, knocking down the four hub genes significantly reduced the invasive growth capability of the glioma cells, further supporting that these four genes act as oncogenes.
Clinical application of risk score
Our previous findings have indicated that stemness Cluster C2 is more responsive to certain chemical drugs but less responsive to immunotherapy (as shown in Fig. 4B). Additionally, our observations have shown that the high stemness-risk group is more sensitive to the drugs that are effective on C2, such as Bortezomib, Daporinad, Topotecan, and Staurosporine (as depicted in Fig. 14A). Afterward, we analyzed the correlation between the immunotherapy response and the stemness-risk model using the TIDE analysis. The results shown in Fig. 14B indicate that patients who responded positively to treatment had significantly lower stemness-risk scores compared to non-responders (Wilcoxon test, P < 0.001).
We endeavored to construct a nomogram that would provide doctors with a practical and effective tool for prognostic decision-making. Our univariate Cox analysis revealed that both the risk score (P = 5.8E-21) and age (P = 1.5E-19) were significantly associated with the overall survival (OS) of LGG patients (as illustrated in Fig. 14C). Subsequently, we constructed a nomogram using the stemness risk score and age, as illustrated in Fig. 14D.
Discussion
For decades, numerous publications have described the interactions of cancer cells and the immune system, and this is also applicable to other components of the TME. However, only recently has research begun to reveal the specific relationship between CSCs and immune cells reflecting the regulation of immune function in the TME, leading to the development of rational therapeutic strategies to exploit the CSC-immune axis. This study presents the first systematic bioinformatics analysis revealing the molecular subtyping of stemness features in LGG. With the unsupervised clustering identification predicated on the ssGSEA scores of 26 stemness gene sets, three different stemness subtypes were defined: the C1 subtype is characterized by a high enrichment level of CSC gene sets that support a good prognosis, and an anti-tumor TME pattern, such as enriched immature B cell and plasma cell infiltration, which makes it more sensitive to immunotherapy. In contrast, Cluster 2 shows a high enrichment of CSC gene sets that indicate a poor prognosis. Cluster C2 showed higher infiltration of M1 and M0 macrophages and higher stromal scores, indicating that C2 is an immunosuppressive phenotype that may be less responsive to immunotherapy. In order to explore the genomic characteristics of each subtype, we conducted a comprehensive WGCNA analysis to screen modules associated with stem cell properties. Among them, the green module has the highest positive correlation with cluster C2 and the highest negative correlation with cluster C1. It was considered the most critical module and was included in the subsequent study. GO analysis showed that the primary biological process of the green module was the cell cycle. Previous research results show that dysregulation of stem cell cycle regulation may lead to normal tissue carcinogenesis, and in the progression of glioma, disruption of the cell cycle can also lead to the continuous deterioration of glioma 51,52. Then, through subset regression and random forest survival analysis, we identified prognostic hub genes within the green module and constructed a four-gene stemness signature based on these hub genes to quantitatively evaluate the prognostic risk of samples. We concluded that the C2 stemness subtype exhibited a higher risk score than the other two groups, which means a poor prognosis. Our analysis revealed that LGG patients with lower-risk characteristics have antitumor immunity, high infiltration of monocytes, and both activated and quiescent mast cells. On the other hand, high-risk LGG patients have an abundance of immune-suppressive M1/M2 macrophages and CD4 memory T cells. Among them, M2 macrophages have the highest infiltration content in the high-risk group, indicating their role in enhancing tumor invasiveness and mediating immunosuppression. Impaired CD4+ T effector memory cell function was associated with the proliferation of myeloid-derived suppressor cells (MDSCs) in glioma patients 53. MDSCs can suppress other immune cells in the tumor microenvironment by inducing the expression of microRNA-101 to promote the CSC phenotype 54. Glioma CSCs produce cytokines, including colony-stimulating factor, TGFβ, and macrophage-inhibitory cytokine, to promote M2 macrophage polarization and MDSC recruitment [55][56][57], leading to immune suppression and native M2 phenotype polarization. In conclusion, CSCs promote M2 macrophage polarization and MDSC recruitment by affecting the normal expression of MHC-I molecules and the release of immune-suppressive cytokines, facilitating the formation of a tumor immune-suppressive microenvironment 58.
In terms of chemotherapy, in addition to temozolomide, which is commonly used for glioma patients 59, we found that high-risk gliomas show sensitivity to several drugs, including bortezomib, daporinad, topotecan, and staurosporine. If delivered directly into the tumor through infusion, these agents would reach the brain tumor directly, providing multiple possibilities for the effective treatment of glioma 60. In terms of immunotherapy guidance, we conducted TIDE analysis, and LGG patients with lower stemness risk scores often have higher immune therapy responses. This confirms the predictive validity of our stemness model.
The OCLR method, developed for the mRNAsi using datasets of pluripotent stem cells and their progenitor cells 61,62, plays a crucial role in our analysis. This index evaluates the activity of CSCs and malignant cell dedifferentiation in the 644 LGG samples analyzed here 63. Similar to previous research results, our study shows that LGG patients with higher mRNAsi values have better prognoses 64,65. Increasing evidence nevertheless supports that CSCs possess strong abilities of self-renewal, metastasis, and treatment resistance. However, there is considerable heterogeneity in the markers and models of CSCs among different cancers 66. In our study, we utilized a multitude of CSC gene sets to identify three distinct stemness clusters. We subsequently developed a stemness-related prognostic model that exhibited a strong negative correlation with mRNAsi, with a correlation coefficient of -0.71. We also observed that the high-risk stemness group was enriched with markers related to the cell cycle, oxidative phosphorylation, focal adhesion, pancreatic cancer, homologous recombination, and progesterone-mediated oocyte maturation, as determined by GSEA analysis [68][69]. Of the four stemness model genes identified in this study, SMC4 regulates the cell cycle and impacts the proliferation, migration, and invasion of glioma 70,71. CDCA8 is a cell cycle regulator and tumor promoter that plays a marked role in various malignant tumors 72; for example, it regulates the migration of glioma cells and inhibits cell apoptosis 73. Knockdown of DLGAP5 can significantly inhibit cell proliferation and simultaneously induce G2/M phase cell cycle arrest and cell apoptosis 74. Chen et al. reported that tumor samples with lower expression of ORC1 showed reduced expression of Bcl-2, cell cycle blockade, and an increased apoptosis rate 75.
This study has some limitations. Although it is based on multiple datasets, it lacks further experimental verification and detailed data on immunotherapy and chemotherapy. Follow-up studies should carry out more experiments to clarify the potential molecular mechanisms of stem cells acting on LGG. Furthermore, we need
Conclusion
Our study illuminates the stemness characteristics of gliomas, which lays the foundation for developing therapeutic approaches targeting CSCs and enhancing the efficacy of current immunotherapies.By identifying the stemness subtype and its correlation with prognosis and TME patterns in glioma patients, we aim to advance the development of personalized treatments, enhancing the ability to predict and improve overall patient prognosis.
Figure 1. Flow chart of the experiment.
Figure 2. (A) Interaction network of the 15 prognostic stem cell gene sets; the thicker the line, the stronger the Spearman correlation between gene sets. (B) Delta area plot of cluster stability: for each K, the relative change in area under the CDF curve compared to K-1 is calculated, and the point with the slowest rate of area increase is selected as the best K. (C) Sum of squared error (wss) plot of the clustering; when the decrease of wss suddenly slows down, K = 3 is the best K. (D) Average silhouette plot; the K = 3 with the largest average silhouette is the best K. (E) Heat map of the consensus matrix at k = 3, where the less the white blocks are mixed with blue, the better the apparent clustering effect. (F) Kaplan-Meier curves for LGG samples in the three clusters.
Figure 3. (A) Heat map of the enrichment scores for different gene sets. (B) Box plots of the enrichment levels of the prognostic gene sets for the three phenotypes. (C) Box plots of ESTIMATE scores for the three phenotypes. (D) Box plots of CIBERSORT scores for the three phenotypes.
Figure 4 .Figure 5 .
Figure 4. (A) Box plot of IC50 values of chemotherapeutic agents among the different stem cell subtypes. (B) Estimated distribution of immunotherapy non-responders and responders across the three phenotypes using the TIDE algorithm. (C) Alluvial diagram of the three clusters, IDH status and 1p19q co-deletion.
Figure 6. (A) Random forest analysis results of the genes screened by univariate Cox analysis; the higher the blue bar, the more important the gene is for OS. (B) AUC curves for the random forest results. (C) Mallows' Cp diagram for full subset regression; the four genes with the lowest Mallows' Cp were selected for prognostic modeling, and a Mallows' Cp value less than the number of independent variables plus one indicates that the model has better predictive power and less complexity. (D) ROC curve in MMD1; AUC > 0.75 indicates that this prognostic model is meaningful in the MMD1 dataset. (E-G) Kaplan-Meier analysis for the three training datasets. (H) Prognostic risk box plot for the three phenotypes. (I) Alluvial diagram of the three clusters, risk score and OS.
Figure 7. (A) Heat map reflecting the degree of distribution of immune cells in samples of the different risk score groups. (B) Spearman correlation diagram of the risk score, hub genes and mRNAsi. (C,D) Radar plots showing the extent of immune cell enrichment in the lower-risk group (C) and high-risk group (D).
Figure 9. Comparison of normal tissue expression and LGG tissue expression of the four key genes in the HPA database: (A) SMC4, (B) ORC1, (C) CDCA8 and (D) DLGAP5.
Figure 14.(A,B) Box plot of chemotherapy (A) and immunotherapy response (B) in different risk groups.(C) The results of univariate analysis of risk score, age and gender based on TCGA.(D) A nomogram calculating the patient's OS at 1, 3 or 5 years.The line length reflects the risk level associated with the factor, and the scale enables the evaluation of the risk score. | 2024-06-28T06:17:13.096Z | 2024-06-26T00:00:00.000 | {
"year": 2024,
"sha1": "bba763e22198d3d5fe367fff9a9dd83070b6a54f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-024-65717-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7305c21f250e14ddb57eeeca2f3a2833f8f07b47",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204512513 | pes2o/s2orc | v3-fos-license | Optical-vortex diagnostics via Fraunhofer slit diffraction with controllable wavefront curvature
Far-field slit-diffraction of circular optical-vortex (OV) beams is efficient for measurement of the topological charge (TC) magnitude but does not reveal its sign. We show that this is because in the common diffraction schemes the diffraction plane coincides with the incident OV waist plane. With explicit involvement of the incident beam spherical wavefront and based on the examples of Laguerre-Gaussian modes we show that the far-field profile possesses an asymmetry depending on the wavefront curvature and the TC sign. These features enable simple and efficient ways for the simultaneous diagnostics of the TC magnitude and sign, which can be useful in many OV applications, including the OV-assisted metrology and information processing.
Optical vortices (OV) are among the most interesting and attractive objects of structured light physics [1][2][3]. In paraxial fields, an OV appears as an isolated point of the beam cross section with zero amplitude and indeterminate phase (phase singularity); upon a round trip near this point, the field phase changes by 2πm where the integer m is the topological charge (TC) of the OV. Accordingly, the beam wavefront near an OV is helical, and the OV core (zero-amplitude point) is a center for the local transverse energy circulation being the source of the orbital angular momentum (OAM) [1][2][3][4]. Due to their unique topological and singular properties, beams with OVs find many useful applications associated with the sensitive optical diagnostics and metrology [5][6][7], micromanipulation [8][9][10] and information processing [11,12].
For all fields of the OV application, rapid and reliable recognition of its rotational characteristics (determined by the magnitude and the sign of its TC) is imperative. Usually, the rich and non-trivial rotational structure of a circular OV is hidden due to its symmetry and can be revealed only in some indirect way. Standard approaches to the OV diagnostics are based on the interference with non-singular reference beams or beams with the known singular properties [1][2][3] but such schemes are generally complicated and cumbersome. In many situations, referenceless methods are more appropriate. For example, when a circular OV beam undergoes the astigmatic transformation, its transverse intensity distribution acquires a characteristic deformation with distinct "fingerprints" of the initial OV structure [13][14][15].
To the best of our knowledge, the most flexible and universal approaches exploit specific features of the OV diffraction in which the helical properties of an OV and its OAM-related circulatory nature are explicitly manifested. The simplest edge diffraction schemes [16][17][18] provide spectacular demonstration of the transverse energy circulation but a reliable detection of the OV "strength" (TC magnitude |m|) requires additional time-consuming and precise procedures. More efficient methods enabling the "full" (TC magnitude + sign) OV diagnostics are based on the traditional approaches employing a single or double slit [19,20] and strip [21,22] Fresnel diffraction. However, the most suitable and universal means for the OV detection involve the far-field (Fraunhofer) diffraction [23][24][25][26][27][28]. The far-field scheme is, generally, less sensitive to inevitable misalignments, provides the advantages of a well-defined and stable reference frame as well as a considerable freedom in the choice of the registration plane, and can be easily implemented even in the ultra-small-scale experimental environment. Actually, the far-field diffraction approaches are realized in the recently reported techniques adapted to the nanoscale OV diagnostics [29][30][31][32].
Despite the diversity of specific practical schemes, the interpretation of the OV-diffraction results relies on some common principles: as a rule, the immediately observable diffraction pattern (DP) contains a set of bright (dark) spots whose number is associated with the TC magnitude, and the overall pattern asymmetry indicates its sign (for example, the far-field diffraction by a triangular aperture [23][24][25]). However, for the most suitable cases of slit or strip diffraction, the far-field intensity pattern appears to be symmetric [24,27,28] and the "full" OV diagnostics becomes unavailable or requires additional observations.
In this Letter, based on the typical example of the Laguerre-Gaussian (LG) beams [1][2][3], we analyze the reasons for this deficiency and propose a simple way for its elimination, thus enabling the full OV diagnostics by the far-field slit (strip) diffraction. Additionally, the proposed procedure may contribute to a better visibility of informative details of the DP (e.g., its peripheral bright lobes).
We start with a brief theoretical examination. Let a paraxial monochromatic light beam be described by the usual model in which the electric field distribution is expressed through the slowly varying complex amplitude (CA) u(x, y, z) [1,2]. The beam propagates along axis z, and the transverse plane is parameterized by the (x, y) Cartesian frame (see Fig. 1a). The diffraction obstacle (slit) is situated in the plane z = 0, and its special role is highlighted by the special transverse coordinates' notation (xa, ya); the slit is adjusted symmetrically with respect to the beam axis z. We consider the incident LG0m beams with zero radial index, for which the incident CA distribution in the diffraction plane is described by Eq. (1), which contains the sign of the OV TC (the winding handedness of the screw wavefront) as a parameter. This expression implies that the diffraction plane coincides with the incident beam waist plane (which is usual in the OV-diffraction studies [16,[19][20][21][22][23][24][25][26][27][28]), and b is the Gaussian envelope waist radius. Then, for the slit whose half-width is indicated in Fig. 1a, the DP in the observation plane is calculated via the Fresnel-Kirchhoff integral, Eq. (2) [33], or, in the far-field conditions, via Eq. (3). In case of the strip diffraction, the results can be easily obtained from Eqs. (2) and (3) through the Babinet principle [33]. In Fig. 2, we present the far-field intensity patterns calculated via Eq. (3) for the diffraction scheme depicted in Fig. 1a with the slit half-width equal to 0.5b and the incident LG beams described by Eq. (1). In full agreement with known results [27,28,32], the far-field slit-DP formed by the incident OV beam with the TC m contains exactly |m| + 1 bright lobes, but, due to its rectangular symmetry, it is quite identical for the oppositely charged OV beams. This symmetry is a direct consequence of Eqs. (1) and (3), as expressed by Eq. (4). Another drawback inherent in the multi-lobe DPs of Fig. 2 is that only a few central lobes are practically distinguishable. The intensities of the peripheral lobes rapidly decay with the off-axial distance: in case of |m| = 3 the side-lobe intensities are approximately 10% of the central maximum, for |m| = 4 the peripheral peaks P+2 and P-2 hardly reach 2% of the central peak P0, and for higher |m| the side-lobe intensities progressively decrease. Normally, in presence of noise, this essentially restricts the maximum detectable TCs via the slit-DP. Now the problem is to unite the above-mentioned practical advantages of the far-field scheme with the ability of immediately detecting the TC sign inherent in the Fresnel diffraction (see Fig. 1b). It can be solved based on the known fact [34] relating the beam with initial CA distribution u_a(x_a, y_a) and the diffracted field u(x, y) it produces (cf. Fig. 1a), which is expressed by Eqs. (5) and (6). The transformation of Eq. (5) is nothing but the addition of a spherical component to the beam wavefront while preserving the same intensity profile, which can be readily performed, e.g., by usual focusing (defocusing) schemes. In turn, Eq. (6) means that the far-field (z → ∞) intensity distribution created by diffraction of the modified beam reproduces (in a changed scale) the DP which could be observed with the non-modified initial beam u_a(x_a, y_a) at a certain finite distance z = R behind the screen. In application to the OV beams of Eq. (1) this means that the DP asymmetry indicating the TC sign can be observed in the Fraunhofer plane once the diffraction plane (cf. Fig. 1a) deviates from the incident beam waist plane.
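A minimal numerical sketch of the scheme just described, assuming the standard LG0m waist profile and an added spherical phase factor of the form exp(ik(x²+y²)/2R); the far field is obtained as a 2D Fourier transform of the slit-truncated field. The parameter values are illustrative and the code is not the authors' implementation.

```python
import numpy as np

wavelength = 405e-9                # m, as in the experiment
k = 2 * np.pi / wavelength
b = 0.14e-3                        # Gaussian envelope radius at the slit (m)
m = 3                              # topological charge, sign included
half_width = 0.5 * b               # slit half-width
R = -0.65                          # wavefront curvature radius (m); negative = converging beam

N, span = 1024, 8 * b
x = np.linspace(-span / 2, span / 2, N)
X, Y = np.meshgrid(x, x)

sigma = np.sign(m) if m != 0 else 1
u = ((X + 1j * sigma * Y) / b) ** abs(m) * np.exp(-(X**2 + Y**2) / b**2)   # LG_0m profile at the waist
u = u * np.exp(1j * k * (X**2 + Y**2) / (2 * R))                           # added spherical wavefront component
u = u * (np.abs(X) <= half_width)                                          # slit of width 2*half_width along y

far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u)))              # Fraunhofer pattern ~ Fourier transform
intensity = np.abs(far_field) ** 2
# For m = +/-3 one expects |m| + 1 = 4 bright lobes; with finite R the pattern skews
# oppositely for opposite signs of m, while for R -> infinity it remains symmetric.
```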
The "quality" of the resulting DP is determined by its convenience for the TC diagnostics, which includes not only the asymmetry but also sufficient visibility of the side lobes. For a given incident beam [cf. Eq. (1)] this quality depends on the introduced wavefront curvature, characterized by the relative parameter Rs, and on the relative slit width (half-width in units of b). Fig. 3 shows the best examples chosen from a series of far-field DPs calculated for different values of Rs and of the relative slit width. It explicitly demonstrates the |m| + 1 bright lobes and, additionally, the asymmetry which indicates the OV rotational properties and the sign of its TC: when Rs > 0 (diverging incident beam), the multi-lobe DP "rotates" in agreement with the incident-beam energy circulation; when Rs < 0 (the case of Fig. 3), the rotation is opposite. Additionally, the side lobes of the DPs are much more intense (in comparison to the central ones) than those presented in the 2nd row of Fig. 2, which is profitable for practical measurements. In experiment, we used a laser beam with the wavelength λ = 405 nm (k = 1.55×10^5 cm^-1) focused by a convex lens with the focal length f1 = 50 cm (see Fig. 4). At the lens input, an LG beam was formed with the Gaussian envelope radius bi ≈ 340 μm and a slightly convex wavefront, so that the focused LG beam converged to the waist cross section at a distance z ≈ 60 cm behind the lens, with the Gaussian envelope radius b0 = 0.125 mm. The slit width and position can be adjusted to different focused-beam cross sections with the desirable local beam size b and wavefront curvature radius R. Typical experimental results are presented in Fig. 5. In this case, the slit (half-width 0.5b) was situated at points where the focused beam size equaled b = 0.14 mm. The first row shows patterns registered when the slit is positioned before the waist (z = 50 cm): the beam converges and the wavefront curvature is negative (R ≈ -63 cm, Rs ≈ -2); the second row corresponds to z = 70 cm, where R ≈ 65 cm, Rs ≈ 2. The experimental images show a good qualitative agreement with the theoretical ones of Fig. 3: the number of bright lobes (always |m| + 1) discloses the TC modulus, whereas the overall asymmetry indicates the sign of m. What is more, the relative intensities of the bright lobes in the experimental images are even better balanced than in the theoretical ones, most probably due to the non-linear response of the CCD device. There are additional bright fringes on both sides of the DPs; however, this "ripple structure", emerging due to stray diffraction, is distinctly different from the "main" lobes and practically does not deteriorate the diagnostic possibilities. In the case of strip diffraction, practically the same DP is formed, but it may be masked by the strong incident-beam radiation. Nevertheless, the strip diffraction can be equally suitable for the OV diagnostics if the incident beam is efficiently screened by appropriate spatial filters or stops. An analogue of such a scheme was recently realized at the nanoscale [32]. In the subwavelength situation, the vectorial nature of the optical field is essential, and the full scattering theory [33] should be applied rather than the scalar diffraction approach employing Eqs. (2) and (3). However, qualitatively, the results of [32] (the scattering asymmetry observed when the incident beam is focused onto the nanowire) can be well explained by the diffraction arguments.
The diffraction obstacle (nanowire) is not small compared to the longitudinal inhomogeneity scale of the strongly focused incident beam, so the coincidence of the waist cross section with the "diffraction plane" can only be occasional in [32]. An essential part of the incident light "meets" the obstacle in locations where the incident beam possesses a significant spherical wavefront component, which causes the DP asymmetry.
In conclusion, both the theoretical analysis and experimental verification have persuasively shown that far-field slit diffraction with controllable wavefront curvature can be an efficient means for the "full" OV diagnostics. The TC magnitude |m| and sign can be immediately seen from the number of bright lobes (|m| + 1) and the asymmetry of the intensity distribution. These properties are common with the known Fresnel diffraction techniques, but the far-field approach provides the practical advantages of a well-defined and stable reference frame and is less sensitive to system misalignments. As a side result, we have proposed a method by which one can reproduce the Fresnel DP, characteristic of an arbitrary distance behind the diffraction obstacle (slit), in the Fraunhofer (far-field) plane. A judicious choice of the slit width and the local wavefront curvature may be used for optimization of the DP bright lobes' positions and visibilities. | 2019-10-13T16:05:57.000Z | 2019-10-13T00:00:00.000 | {
"year": 2019,
"sha1": "048956671848164d42e7e2da814bc1cb795dbe7b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.05775",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "048956671848164d42e7e2da814bc1cb795dbe7b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
7798827 | pes2o/s2orc | v3-fos-license | Feasibility Study on Applying Radiophotoluminescent Glass Dosimeters for CyberKnife SRS Dose Verification
CyberKnife is one of multiple modalities for stereotactic radiosurgery (SRS). Due to the nature of CyberKnife and the characteristics of SRS, dose evaluation of the CyberKnife procedure is critical. A radiophotoluminescent glass dosimeter was used to verify the dose accuracy for the CyberKnife procedure and validate a viable dose verification system for CyberKnife treatment. A radiophotoluminescent glass dosimeter, thermoluminescent dosimeter, and Kodak EDR2 film were used to measure the lateral dose profile and percent depth dose of CyberKnife. A Monte Carlo simulation for dose verification was performed using BEAMnrc to verify the measured results. This study also used a radiophotoluminescent glass dosimeter coupled with an anthropomorphic phantom to evaluate the accuracy of the dose given by CyberKnife. Measurements from the radiophotoluminescent glass dosimeter were compared with the results of a thermoluminescent dosimeter and EDR2 film, and the differences found were less than 5%. The radiophotoluminescent glass dosimeter has several advantages for CyberKnife dose measurements, such as repeatability, stability, and small effective size. These advantages make radiophotoluminescent glass dosimeters a potential candidate dosimeter for the CyberKnife procedure. This study concludes that radiophotoluminescent glass dosimeters are a promising and reliable dosimeter for CyberKnife dose verification with clinically acceptable accuracy within 5%.
Introduction
The rapid development of computer technology in recent years has brought new radiation therapy techniques to replace traditional stereotactic radiosurgery (SRS) methods. Advances in SRS have led to better dissection of microlesions that are difficult to surgically remove and have improved the quality of life for patients. SRS consists of small field radiation therapy and is commonly achieved by one of the following systems: Gamma knife, X knife (cone or intensity-modulated technique (IMRT)), or CyberKnife.
SRS delivers high dose radiation and is typically completed in one to three fractions, whereas conventional radiation therapy is completed in 20 to 30 fractions. However, for the dose verification of small field radiation therapy techniques, the influence of lateral electron disequilibrium and high dose gradients could increase the inaccuracy of dose measurements [1][2][3][4]. CyberKnife has various small circular collimator sizes; therefore, dose evaluation is critical. Monte Carlo-based dose calculations have become major tools for the verification of clinical dosimetry in small field radiation therapy techniques [5][6][7][8][9][10]. Ling [11] reported that a pencil beam algorithm can be used in the CyberKnife system. Ling developed a model based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system.
This study uses the Monte Carlo BEAMnrc to simulate the percent depth dose and lateral dose profile of Accuray G3 CyberKnife (Accuray Inc., USA). The simulated results are compared to the actual measurements. In this study, thermoluminescent dosimeters (TLDs), radiophotoluminescent glass dosimeters (RPLGDs), and Kodak EDR2 films were used to measure the lateral dose profile and percent depth dose of CyberKnife. Araki [6] reported that dosimeters with a large active volume, high density, and high atomic number affected the measured results for CyberKnife. Therefore, an anthropomorphic phantom was also used in this study to evaluate the accuracy of the dose supplied by CyberKnife and assess the feasibility of the clinical dose verification using RPLGD, TLD, and Kodak EDR2 film.
Materials and Methods
CyberKnife
Early models of CyberKnife (Accuray Inc., USA) were mainly used to treat intracranial lesions. The newer model has broader applications for various tumor sites [12][13][14]. The features of G3 CyberKnife include a 6 MV X-band Linac on a mechanical arm that is capable of creating more than 1200 treatment beams in 3D space with six axes and narrow beams collimated by 12 secondary collimators with different circular collimator sizes and two sets of image devices for image-guided radiotherapy (IGRT). CyberKnife adopted a non-isocentric treatment method to allow the dose profiles to conform to the distribution of the tumor shape and minimize the damage to the normal tissue surrounding the tumor.
Monte Carlo simulations
The Monte Carlo (MC) simulation code (OMEGA/BEAM) used in the study was developed by the National Research Council of Canada (NRCC) and the University of Wisconsin. A flowchart of the OMEGA/BEAM simulation is shown in Fig 1. Bremsstrahlung splitting and Russian roulette variance reduction techniques were used to increase the relative particle collection efficiency. The photon cutoff energy was set at 0.01 MeV, and the electron cutoff energy was set at 0.7 MeV. The voxel size for the simulation was 1 mm³. In this study, we used water and acrylic as the media for the MC simulations. The results were then compared to the measured results in terms of the lateral dose profile and percent depth dose.
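For orientation only, the toy script below sketches the bare idea behind Monte Carlo depth-dose estimation: photon interaction sites are sampled from an exponential free-path distribution and deposited energy is scored in depth bins. It is deliberately crude (no electron transport, no scatter angles, and none of the variance reduction used with OMEGA/BEAM), and every numerical value in it is an illustrative assumption rather than a parameter of the actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

MU = 0.049            # illustrative linear attenuation coefficient, cm^-1
DEPTH_CM = 24.0       # phantom thickness
N_BINS = 80           # 0.3 cm scoring bins
BIN_CM = DEPTH_CM / N_BINS

def score_depth_dose(n_photons=50_000):
    """Crude Monte Carlo depth-dose scoring for a forward-directed photon beam:
    sample interaction sites from an exponential free-path distribution and
    deposit a fixed fraction of the remaining energy locally at each site
    (so no build-up region is reproduced)."""
    dose = np.zeros(N_BINS)
    for _ in range(n_photons):
        z, energy = 0.0, 1.0
        while energy > 0.01:
            z += rng.exponential(1.0 / MU)          # distance to the next interaction
            if z >= DEPTH_CM:
                break                               # photon leaves the phantom
            deposited = 0.4 * energy                # locally absorbed fraction (illustrative)
            dose[int(z / BIN_CM)] += deposited
            energy -= deposited
    depths = (np.arange(N_BINS) + 0.5) * BIN_CM
    return depths, 100.0 * dose / dose.max()

if __name__ == "__main__":
    depths, pdd = score_depth_dose(5_000)           # small run for a quick check
    for d, p in list(zip(depths, pdd))[:8]:
        print(f"{d:5.2f} cm   {p:6.1f} %")
```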
Dosimeters and reader systems
TLD and REXON UL-320 readout system. The TLD used in this study was the Harshaw TLD-100H (LiF: Mg, Cu, P). The TLD-100H measures 1 mm³ in size and has an effective atomic number of 8.2 and a density of 2.64 g/cm³. The TLD-100H readout system was the REXON UL-320 reader (REXON Inc., USA).
RPLGD and DOSE ACE FGD-1000 readout system. The RPLGD and the readout system for the dose measurement were the GD-302M glass dosimeter and the Dose Ace FGD-1000 system (Asahi Techno Glass Corporation, Japan), respectively. The composition of the GD-302M is as follows: O (51.61%), P (31.55%), Na (11.00%), Al (6.12%), and Ag (0.17%) [13]. The effective atomic number of the GD-302M is 12.04, and the density is 2.61 g/cm³. The glass dosimeter has a cylindrical shape and is 1.5 mm in diameter and 12 mm in length. The readout system used a 1 mm diameter pulsed UV laser as an excitation light source, and the visible light signal was then collected through the 0.6 mm reading window to evaluate the radiation dose [15].
Kodak EDR2 film and Lumisys LS75 readout system. The third method for evaluating the dose response was Kodak EDR2 film (Kodak, USA). The optimal dose response range for EDR2 film is between 25 and 400 cGy; saturation occurs at 700 cGy. The optical density readout system for the EDR2 film was the Lumisys LS75 laser scanner (Kodak, USA), which has a maximum resolution of 0.1 mm. EDR2 film and the PTW-Freiburg analytic MEPHYSTO software Version 7.33 were used to scan the images and determine the optical density that corresponded to the dose response.
Characteristic analysis for the dosimeters
Radiation detection characteristic analyses of the dosimeters, including reproducibility and linearity, were also conducted to ensure accuracy.
Reproducibility of the dosimeters. The dosimeters were irradiated with a single dose (200 cGy) by the AECL Co-60 unit. All dosimeters were irradiated, read, and annealed 10 times to obtain the coefficient of variation (CV) to determine reproducibility. A low CV indicates better stability for the dosimeter readout and thus better reproducibility.
Dose linearity. Dose linearity characterizes the relative linearity between the readout and dose delivered to the dosimeters at different doses. This study used the AECL Co-60 as a source, with 25, 50, 75, 100, 125, 200, 225, 300, 325, and 400 cGy for the analysis of dose linearity.
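The two detector characteristics just described reduce to simple statistics: the coefficient of variation over repeated readouts, and the coefficient of determination of a readout-versus-dose fit. A minimal sketch is shown below; the readout values in the example are made up, not the study data.

```python
import numpy as np

def coefficient_of_variation(readouts):
    """CV (%) of repeated dosimeter readouts after identical irradiations."""
    readouts = np.asarray(readouts, dtype=float)
    return 100.0 * readouts.std(ddof=1) / readouts.mean()

def dose_linearity(delivered_cGy, readout_cGy):
    """Least-squares slope, intercept and R^2 of readout vs delivered dose."""
    x = np.asarray(delivered_cGy, dtype=float)
    y = np.asarray(readout_cGy, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r_squared = 1.0 - residuals.var() / y.var()
    return slope, intercept, r_squared

if __name__ == "__main__":
    repeats = [198.5, 201.2, 199.8, 202.0, 200.4, 199.1, 201.7, 200.9, 198.9, 200.6]
    doses = [25, 50, 75, 100, 125, 200, 225, 300, 325, 400]
    readings = [24.6, 50.8, 74.1, 101.3, 126.0, 198.2, 226.9, 297.5, 326.8, 398.4]
    print(f"CV = {coefficient_of_variation(repeats):.2f} %")
    print("slope, intercept, R^2 =", dose_linearity(doses, readings))
```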
Lateral dose profile measurement
This study also used the TLD-100H, GD-302M, and EDR2 film for the measurement of the lateral dose profile. The lateral dose profile was measured at a depth of 5 cm in an acrylic phantom with the SSD set to 75 cm. The measured results were normalized to the dose value at the isocenter. The measurements of the lateral dose profile with the TLD-100H, GD-302M, and EDR2 film were conducted using an acrylic phantom five times for each circular collimator size (5, 10, 20, 30, 40, and 60 mm). The mean dose and CV were normalized to the isocenter dose value to obtain the lateral dose profile for each circular collimator size.
Percent depth dose measurement
The percent depth doses were measured using two CyberKnife circular collimator sizes: 40 and 60 mm. The percent depth dose measurements with the GD-302M dosimeters were measured at an SSD of 75 cm and depths of 3, 6, 9, 12, 15, 18, 30, 60, 120, 180, and 240 mm in an acrylic phantom. Each measured depth was repeated five times to obtain the mean dose and CV. The mean dose was normalized to the depth of the maximum dose at 1.5 cm (for the 6 MV beam).
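Turning repeated readings at each depth into a percent depth dose curve amounts to averaging per depth and normalizing to the reading at the depth of maximum dose (1.5 cm for the 6 MV beam). The sketch below illustrates this bookkeeping with hypothetical readings, not the measured values of this study.

```python
import numpy as np

def percent_depth_dose(readings_by_depth_mm, dmax_mm=15.0):
    """Average repeated readings per depth, normalize to the depth of maximum
    dose, and report the per-depth coefficient of variation (%).
    readings_by_depth_mm: dict mapping depth in mm -> list of readings."""
    depths = sorted(readings_by_depth_mm)
    means = {d: np.mean(readings_by_depth_mm[d]) for d in depths}
    cvs = {d: 100.0 * np.std(readings_by_depth_mm[d], ddof=1) / means[d] for d in depths}
    # reference reading at dmax (interpolated if dmax is not a measured depth)
    ref = np.interp(dmax_mm, depths, [means[d] for d in depths])
    return {d: (100.0 * means[d] / ref, cvs[d]) for d in depths}

if __name__ == "__main__":
    fake = {3: [92.0, 93.1, 91.5, 92.8, 92.2],
            15: [100.2, 99.5, 100.8, 99.9, 100.4],
            60: [84.1, 83.6, 84.9, 83.9, 84.4],
            240: [41.0, 40.4, 41.3, 40.8, 41.1]}
    for depth, (pdd, cv) in percent_depth_dose(fake).items():
        print(f"{depth:>4} mm: PDD = {pdd:6.1f} %, CV = {cv:4.1f} %")
```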
Anthropomorphic phantom dose measurement
The GD-302M dosimeter was used to evaluate the accuracy of the dose given by the CyberKnife to an anthropomorphic phantom (Accuray Inc., USA). This study used polystyrene phantoms of 63.5 mm in both length and width and 3 mm in thickness to replace the radiochromic film pack in the anthropomorphic phantom. The GD-302M dosimeter was placed in the polystyrene phantoms for the dose measurement. For the simulated target shown in Fig 2, the target was a circular shape with a diameter of 3 cm. The CyberKnife treatment planning system used 6-dimensional cranial tracking modules to deliver the treatment dose. The dose given to the target was 3000 cGy in 3 fractions (1000 cGy per fraction). During the irradiation process, the target localization system was used to track the tumor location to ensure the accuracy of delivery.
Radiation detection characteristics of the dosimeters
Readout reproducibility was evaluated after multiple irradiations. A reproducibility analysis was conducted with the TLD-100H and GD-302M. The CVs analyzed for the TLD-100H were between 0.99% and 3.00%, whereas they were between 0.48% and 2.98% for the GD-302M. The results are shown in Fig 3. In this study, the CV of each dosimeter (TLD-100H, GD-302M and EDR2) was less than 3%. Dose linearity was evaluated with the TLD-100H, GD-302M, and EDR2 film from 25 to 400 cGy. A regression curve and coefficient of determination (R-squared) were obtained for the readout dose and irradiated dose. As the coefficient approached 1, the relationship between the readout dose and irradiated dose was linear. The R-squared values for the TLD-100H, GD-302M, and EDR2 film were 0.9996, 0.9991, and 0.9938, respectively, as shown in Fig 4.
Lateral dose profile measurement
The lateral dose profile measurement results for the GD-302M, TLD-100H, and EDR2 film are shown in Fig 5. For circular collimator sizes from 60 to 20 mm, the measurement discrepancies for the GD-302M, TLD-100H, and EDR2 were less than 3%. For circular collimator sizes less than 10 mm, there was no uniform dose due to lateral electron disequilibrium. The lateral dose profile showed steep changes, which resulted in measurement discrepancies greater than 3% among the different dosimeters.
Percent depth dose measurement
The comparison of the percent depth dose between the GD-302M measurements and the Monte Carlo simulation for CyberKnife circular collimators of 40 and 60 mm is shown in Fig 6. Twelve points were selected in this study for the dose measurements, and the CV of each point was less than 3%.
Monte Carlo simulation
A comparison of the OMEGA/BEAM simulation and the GD-302M measured results is shown in Fig 6(A) for a 60 mm circular collimator. As indicated, the discrepancy in the build-up region was 4.00%, 5.23% and less than 1.84% at distances of 3 mm, 9 mm, or greater than 9 mm, respectively. The build-up region discrepancy was mainly affected by the electron disequilibrium, which led to greater measurement inaccuracy. When the circular collimator size was 40 mm, the discrepancy between the OMEGA/BEAM simulation and GD-302M measured results in the build-up region was less than 3.32%, and the discrepancy beyond the build-up region was less than 2.64% (Fig 6(B)).
A comparison of the lateral dose profiles is shown in Fig 7 for the simulation and the EDR2 film. When the distance was less than 29 mm, the measurement discrepancy of the EDR2 film relative to the OMEGA/BEAM simulation was less than 2.76%. The discrepancy was greater than 2% for points at 4, 5, 16, 20, 25 and 28 mm away from the center and less than 1.87% for the remaining points within 30 mm of the center. When the distances were 30 to 33 mm away from the center, the calculation from the OMEGA/BEAM simulation was underestimated by 4.86% to 13.27% relative to the EDR2 film. When the distance from the center was greater than 34 mm, the discrepancies between the OMEGA/BEAM and the EDR2 measurements were apparent. Without the flattening filter, not enough photons reached the collimator edge, which led to inaccuracies in the OMEGA/BEAM calculations of 2.50% to 6.60%.
Anthropomorphic phantom dose measurement
The calculated values from the CyberKnife treatment planning system and dose measurements in an anthropomorphic phantom with the GD-302M are compared in Table 1. The GD-302M measured value was an average of the measurements at the same location, whereas the calculated value from the CyberKnife planning system was an average of 5 dose calculation points inside the effective readout area for the GD-302M. The dose measurement for the GD-302M was 2840.63 cGy (CV 2.93%), and the average dose from the CyberKnife planning system was
Fig 2. Treatment planning for an anthropomorphic phantom. (A) and (B) are the axial and sagittal plane images. (C) is the coronal plane image. The circular area is the simulated target. (D) is the schematic diagram for incident beam directions. doi:10.1371/journal.pone.0169252.g002
Fig 6. Comparison of the percent depth dose curve between the actual measurements and the Monte Carlo simulation for circular collimators of sizes (A) 60 mm and (B) 40 mm. doi:10.1371/journal.pone.0169252.g006
Fig 7. Comparison of the lateral dose profile for the actual measurements and the Monte Carlo simulation. The circular collimator size was 60 mm. doi:10.1371/journal.pone.0169252.g007 | 2018-04-03T03:47:22.951Z | 2017-01-03T00:00:00.000 | {
"year": 2017,
"sha1": "d6e884e3533c87198acbbcfa460bd0c5a33eff07",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0169252&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d6e884e3533c87198acbbcfa460bd0c5a33eff07",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208357682 | pes2o/s2orc | v3-fos-license | FACTORS ASSOCIATED WITH INADEQUATE MILK CONSUMPTION AMONG ADOLESCENTS: NATIONAL SCHOOL HEALTH SURVEY - PENSE 2012
ABSTRACT Objective: To identify the prevalence and factors associated with inadequate milk consumption among adolescents. Methods: This was a cross-sectional study based on secondary data from the National School Health Survey (2012), a Brazilian survey carried out using a self-administered questionnaire in a representative sample of 9th-grade students from public and private schools. The frequency of milk intake and its association with socio-demographic characteristics, food consumption and physical activity were estimated. A descriptive and inferential analysis of factors associated with inadequate milk consumption (no consumption on at least one of the seven days of the week) was performed. A multiple logistic model was adjusted to control confounders. Results: The sample included 108,828 adolescents and inadequate milk consumption occurred in 58.9%. The final model included nine variables independently associated with inadequate milk intake: breakfast frequency less than 4 days per week (odds ratio [OR]=2.40; p<0.001), unprocessed or minimally processed foods intake less than 5 days per week (OR=1.93; p<0.001), living in the northeast region (OR=1.39; p<0.001), less maternal schooling (OR=1.35; p<0.001), physical inactivity (OR=1.33; p<0.001), attending public school (OR=1.26; p<0.001), not being white (OR=1.14; p<0.001), being older than 14 years old (OR=1.13; p<0.001) and having a habit of eating meals while watching TV or studying (OR=1.04; p=0.036). Conclusions: Inadequate milk consumption is prevalent among Brazilian adolescents. The identification of associated factors suggests the need to develop nutritional guidance strategies for the prevention of diseases that result from low calcium intake.
INTRODUCTION
Milk and its derivatives are a food group of high nutritional value, recommended as part of a balanced diet. Drinking milk is recommended during all stages of life, as it provides indispensable nutrients such as amino acids, fatty acids, minerals (calcium, magnesium, selenium) and vitamins (retinol, cyanocobalamin, pantothenic acid). [1][2][3] Calcium present in milk is highly bioavailable, reaching about 70% utilization when compared to other foods. This makes it an important part of human nutrition. [2][3][4] The micronutrient calcium is one of the most abundant minerals in the human body, found mainly in the structure of bones. Not only does it participate in bone metabolism, but it also has a role in enzymatic and metabolic reactions, in the coagulation process, in the signaling and adhesion of cells, in mediating muscle contractions, in the secretion of hormones and neurotransmitters, and in the transport of substances. 2,3,5 Milk is among the foods most frequently purchased by Brazilian families. There will be an estimated 23% increase in the production of milk around the world in the next decade, with 73% of this additional supply coming from developing countries. 1,2,6 Adolescence, which includes the age group from 10 to 19 years old, is a biopsychosocial maturation cycle characterized by the broad biological development of organs, tissues and systems; and by intense physical, sexual, cognitive, social and emotional changes. This process is directly influenced by the interaction of genetic, environmental and endocrine factors, whose organic and nutritional demands are increased by accelerated physical growth patterns. 5 Exclusion or inadequate consumption of milk at this stage impairs structural growth, leads to lower bone mineral density and, consequently, contributes to the development of osteoporosis and increased risk of fractures in adulthood and old age. 2,3,5,7 Based on data from the National Health and Nutrition Examination Survey, an association was found between lower milk intake during adolescence, lower bone mineral content and lower bone mass among women aged 20 to 49 years old. 7 In Brazil, studies indicate that there is a significant reduction in milk consumption among adolescents, with a corresponding increase in the intake of processed, high-sugar drinks. Similarly, such products are among the most consumed beverages by young Europeans, as data from the European Youth Heart Study (EYHS) demonstrate. Such changes in consumption have led to calcium intake below the recommended amount and higher consumption of low nutritional value foods. [8][9][10][11][12] In this context, the objective of the present study was to identify factors associated with inadequate milk consumption among adolescents. Additionally, it aimed to contribute to the development of strategies for the control and prevention of nutritional disorders and, consequently, of changes in growth and bone metabolism in this age group.
METHOD
This study is cross-sectional and used secondary data from the second version of the National School Health Survey (Pesquisa Nacional de Saúde do Escolar -PeNSE), conducted between April and September 2012. It is a population-based Brazilian school health survey conducted by the Ministry of Health (Ministério da Saúde -MS) in partnership with the Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística -IBGE). The database is in the public domain and is available electronically on the IBGE website. 13 The target population consisted of students attending ninth grade during the day at public and private schools located in urban and rural areas of Brazil. The sample was designed to represent this population in 32 geographic strata: each of the 26 capitals and the Federal District (DF), and the five geographic macroregions of the country (North, Northeast, Southeast, South and Central West), comprising the remaining municipalities. 13 The sample of each stratum was allocated proportionally to the number of schools, according to administrative dependence (private and public). Schools with a total of 15 students or more were eligible according to the 2010 School Census. In the first stage, the schools were selected through probabilities proportional to size (number of students enrolled). In the second stage, ninth grade classes were chosen to be studied in each of the selected schools. All of the 109,104 students from the selected schools, who were present on the day of data collection and who answered the questionnaire, made up the sample. The full description of the sample selection process is available in the PeNSE 2012 scientific paper. 13 Data collection was performed through a self-administered questionnaire, structured in thematic modules. The questions had multiple choice answers that contained information on sociodemographic, behavioral, eating and health characteristics. 13 Food consumption was measured by the frequency of food consumption in the seven days prior to the survey date, with responses ranging from daily consumption to no consumption. 13 Consumption of milk, fresh and minimally processed foods, and ultra-processed foods were evaluated. In addition, we also evaluated the following habits: eating meals while watching TV or studying, having lunch and dinner with guardians, and eating breakfast. 13 The question regarding milk intake considered the consumption of milk and beverages with milk (coffee or chocolate milk, smoothies and porridge). It did not include yogurt, cheese and other milk derivatives. 13 Since milk is considered to be the main source of calcium for adolescents, not drinking it on at least one of the seven days was defined as inadequate intake, considering that daily milk consumption covers more than half of the daily calcium requirement. Specifically, a variable was created to represent the consumption of fresh or minimally processed foods (MPF) (beans, raw vegetables, raw salads, cooked vegetables, and fruits). Another variable was created to represent the consumption of ultra-processed foods (UF) (fried salty snacks, sausages, crackers, cookies, snack foods, candies and soda), based on the average number of days on which the foods defined for each of these categories were consumed in the last week. Consumption of MPF less than five times a week was considered a risk behavior, whereas for UF, the risk was considered present when they were consumed on two or more days in the week.
Finally, not having breakfast, or not having meals (lunch and dinner) in the presence of guardians, on four or more days of the week was considered a risk behavior.
Physical activity was considered to be a possible factor associated with inadequate milk consumption. For this purpose, the accumulated exercise time in minutes was calculated from the time spent walking and cycling to and from school, physical education at school and general activities such as dance, gymnastics and wrestling. Adolescents with an accumulated activity of less than 300 minutes per week were defined as inactive. Conversely, active adolescents participated in 300 minutes or more of physical activity per week.
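To make the operational definitions above concrete, the sketch below shows one way the risk-behaviour indicators could be derived from per-student weekly frequencies. The column names and example records are hypothetical placeholders, not the PeNSE variable names.

```python
import pandas as pd

def add_risk_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Derive binary risk-behaviour indicators (1 = risk behaviour present)
    from days-per-week frequency columns (values 0-7) and weekly activity minutes."""
    out = df.copy()
    out["inadequate_milk"] = (out["milk_days"] < 7).astype(int)        # no milk on at least 1 of 7 days
    out["low_mpf"] = (out["mpf_days_mean"] < 5).astype(int)            # fresh/minimally processed < 5 days
    out["high_uf"] = (out["uf_days_mean"] >= 2).astype(int)            # ultra-processed on 2+ days
    out["skips_breakfast"] = (out["breakfast_days"] < 4).astype(int)   # breakfast on fewer than 4 days
    out["inactive"] = (out["activity_min_week"] < 300).astype(int)     # < 300 min/week of activity
    return out

if __name__ == "__main__":
    sample = pd.DataFrame({
        "milk_days": [7, 5, 0],
        "mpf_days_mean": [5.4, 3.2, 4.8],
        "uf_days_mean": [1.0, 3.5, 2.0],
        "breakfast_days": [7, 2, 4],
        "activity_min_week": [420, 150, 300],
    })
    print(add_risk_indicators(sample))
```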
Data analysis was performed using Stata 14.0 software. As suggested by the PeNSE methodology, all analyses were performed using the expansion and sample weight technique, according to the selection and population representation process outlined in the research. Descriptive and bivariate analyses were performed to study the associations. A multiple model was adjusted to independently identify factors associated with inadequate milk consumption.
Data were evaluated for their distribution characteristics. The cutoff points of the eating behavior variables were defined according to frequency of consumption and biological plausibility. The chi-square test was used to measure associations, due to the categorical nature of the variables studied. Subsequently, a logistic regression model was built. In order to select the independent variables eligible to compose the multiple model, a value of p≤0.20 was considered as the inclusion criterion. The variable entry technique was forward stepwise, and a value of p≤0.05 was used to define a statistically significant association.
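As an illustration of the modelling step, the sketch below combines a weighted logistic regression with a simple forward-stepwise selection using the inclusion (p≤0.20) and retention (p≤0.05) thresholds quoted above. It uses statsmodels' frequency-weight facility as a rough stand-in for the complex-sample expansion performed in Stata, and all variable names and data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_weighted_logit(df, outcome, predictors, weight_col):
    """Logistic (binomial GLM) fit with per-record weights."""
    X = sm.add_constant(df[predictors].astype(float))
    model = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                   freq_weights=df[weight_col])
    return model.fit()

def forward_stepwise(df, outcome, candidates, weight_col, p_enter=0.20, p_keep=0.05):
    """Greedy forward selection: add the candidate with the smallest Wald
    p-value while it is <= p_enter, then drop selected terms with p > p_keep."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {v: fit_weighted_logit(df, outcome, selected + [v], weight_col).pvalues[v]
                 for v in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] > p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    if not selected:
        return None
    final = fit_weighted_logit(df, outcome, selected, weight_col)
    kept = [v for v in selected if final.pvalues[v] <= p_keep]
    return fit_weighted_logit(df, outcome, kept, weight_col) if kept else final

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    demo = pd.DataFrame({"skips_breakfast": rng.integers(0, 2, n),
                         "low_mpf": rng.integers(0, 2, n),
                         "inactive": rng.integers(0, 2, n),
                         "weight": rng.uniform(0.5, 2.0, n)})
    logit = -0.5 + 0.9 * demo["skips_breakfast"] + 0.6 * demo["low_mpf"]
    demo["inadequate_milk"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    result = forward_stepwise(demo, "inadequate_milk",
                              ["skips_breakfast", "low_mpf", "inactive"], "weight")
    if result is not None:
        print(np.exp(result.params))   # odds ratios for the retained variables
```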
RESULTS
Of the total students analyzed, 82.8% were from public schools and about half were female (52.2%). About a quarter of the adolescents reported being physically active.
Regarding eating behavior, 58.9% of the adolescents showed inadequate milk consumption, with a 95% confidence interval (95%CI) of 55.6-62.1; and 64.4% (95%CI 63.0-65.8) presented UF consumption on two or more days a week. The habit of eating meals while watching TV or studying was present in 81.1% (95%CI 78.7-83.4) of adolescents (Table 1). Table 2 shows the bivariate analysis of factors associated with inadequate milk consumption and their respective prevalence and effect measures. The 15 characteristics tested showed a statistically significant association with this outcome.
After adjusting the logistic model, the following variables remained significant: macroregion, type of school, race, age, maternal education, physical activity, MPF consumption, frequency of breakfast and the habit of eating in front of the TV or studying (Graph 1).
DISCUSSION
According to the criteria used here, inadequate milk consumption was present in most of the sample studied. Nine factors independently associated with this dietary inadequacy were identified: eating breakfast less than four days per week; eating MPF less than five days per week; residing in the Northeast Region; lower maternal education; physical inactivity; attending a public school; not being white; being over 14 years old; and having the habit of eating meals while watching TV or studying.
Although there are several ways to assess milk consumption, food frequency analysis has been the most widely used in population studies. The literature has identified a significant reduction in the intake of this food in adolescents. This phenomenon has been associated with increased consumption of soft drinks and other ultra-processed beverages, socioeconomic status, gender, age group, eating behavior and nutritional status. [8][9][10][14][15][16] Among these factors, we highlight the high consumption of drinks with high sugar content, as they contribute to excessive caloric intake and negatively affect anthropometric parameters, which contribute to excess weight gain. 17 According to the EYHS, the daily intake of 100g of sugary beverages was associated with increased body mass index (BMI) and adiposity in children aged 9 to 15 years old. In addition, replacing these beverages with milk and water during follow-up was associated with less body fat gain, and reduced BMI and waist circumference. 12 However, no association was identified in the studied sample between inadequate milk consumption and higher frequency of UF consumption, which included soda. Such disagreement can be explained by the grouping of ultra-processed foods into one composite variable and the way the PeNSE data were obtained, which limits the scope of information related to food consumption to frequency in the last seven days.
Breakfast is a main daily meal. 1 Not eating it results in lower average macro and micronutrient intake. 18 Specifically, breakfast, in many cultures, involves significant milk consumption, 2 and a reduction in its frequency may contribute to low milk intake, which would explain the greater risk of inadequate milk intake among PeNSE adolescents who ate breakfast less frequently.
Although there is an inverse association between age and breakfast, 18,19 the present study identified an association between older age and inadequate milk consumption, regardless of breakfast. This aspect can be explained by the fact that older adolescents consume less milk because they consume more soft drinks. 16 In this regard, adolescents' food choices are influenced by their social, cultural and media environment, which leads to greater consumption of foods of low nutritional quality, typical for adolescents. 20 Thus, MPFs are potentially consumed less, meaning that milk is replaced by other beverages. This is in agreement with the association of lower frequency of MPF consumption with lower milk consumption, as evidenced in the studied sample. Watching TV for extended periods has been linked to unhealthy food consumption. 21,22 Among teenagers, soft drinks and other types of industrialized beverages are among the most consumed foods while watching TV. 11,23 This practice partly supports the growing trend of replacing milk with other processed and sugar-rich drinks. 10 Thus, excessive exposure to electronic screens represents a higher risk of inadequate milk consumption, as identified in this survey of adolescents who participated in PeNSE 2012.
Physical inactivity was also associated with inadequate milk consumption in adolescents. In fact, physical inactivity has been associated with the consumption of low nutritional quality and high energy density foods. 11,24 Thus, physical inactivity affects food choices, contributes to less dietary variety and, consequently, fewer essential nutrients, as demonstrated by inadequate intake of milk and dairy products, legumes, fruits, meat, vegetables and cereals. 24,25 As for Brazilian macroregions, adolescents from schools in the Northeast Region had a higher risk of inadequate milk consumption. Despite the possible influence of cultural factors that differ between Brazilian regions, this finding is probably related to the lower purchasing power and lower socioeconomic development of this region, 26 as shown by the results of the Family Budget Survey (Pesquisa de Orçamentos Familiares -POF) from 2008-2009, which showed that the annual per capita purchase of pasteurized and fresh milk in the Northeast corresponds to about 50% of that in the Southeast. 6 In this context, it is worth mentioning the existence of social and economic inequality between races. According to the IBGE, the population that identifies as dark-skinned or light-skinned black has lower income to provide for their vital needs and is exposed to a higher degree of food insecurity. 26 This statement supports the higher risk of inadequate milk consumption observed among adolescents of other races when compared to the white race. Moreover, although further studies are needed to assess the influence of cultural aspects, this observed difference in consumption between races may be the effect of such conditions on eating habits, which generally vary among these groups. Additionally, the lower educational level of mothers of adolescents from the PeNSE 2012 was associated with inadequate milk consumption. This finding is similar to that found by a study with data from PeNSE 2009, which showed an association between higher levels of maternal education and regular milk consumption (minimum of five days per week). 14 Parents/guardians' education is a determining factor in their children's behavior and food intake. 9,27,28 Data from the Adolescent Cardiovascular Risk Study (Estudo de Riscos Cardiovasculares em Adolescentes -ERICA) demonstrated a significant association between low maternal education and unhealthy eating behaviors. 28 Thus, a mother's lower level of education leads to greater difficulty in perceiving and assimilating food quality and potentially results in inadequate eating habits. 27 While the National School Feeding Program (Programa Nacional de Alimentação Escolar -PNAE) provides guidelines for healthy eating in public schools based on students' nutritional needs, which include the provision of milk on at least two days a week for adolescents, 29 PeNSE students enrolled in these schools had a higher risk of inadequately consuming milk compared to those from private schools. This fact suggests that the goals set by this program may not be fully achieved, especially in municipalities with fewer socioeconomic resources. Indeed, it has been shown that public school students exhibit less healthy eating behaviors, are more prone to micronutrient deficiency, and consume less milk and dairy products. 9,28,30 This circumstance is possibly the result of the greater social vulnerability and more limited access to food among families of adolescents in public schools.
It is worth noting that the use of secondary data limited the analyses performed in the present study, which used only the information available in the PeNSE 2012 database. Similarly, the fact that data were collected through a self-administered questionnaire potentially led to completion errors and to missing information on variables that could be associated with inadequate milk consumption (nutritional status and consumption of other sugary drinks besides soda).
In addition, frequency-based food consumption assessment made it impossible to estimate the amount of food consumed by adolescents. Specifically, the available information on milk made it impossible to estimate total weekly intake of milk in isolation, since milk intake was also counted when milk was consumed together with ultra-processed foods, which potentially should not be considered indicators of healthy eating.
On the other hand, PeNSE 2012 is a survey of the population of adolescents enrolled in the ninth grade in Brazil. It used a careful selection process of the participating schools and, consequently, enabled the recruitment of a representative sample of the national territory. Therefore, PeNSE stands out as the only study to evaluate milk consumption among adolescents all around Brazil. In addition, the statistical analysis performed considered the control of confounding factors of associations through a multiple model. It identified the independent effect of the nine factors associated with inadequate milk consumption.
Finally, inadequate milk consumption is prevalent among Brazilian adolescents. The identification of associated factors suggests the need to improve existing government strategies, such as the PNAE and the School Health Program (Programa | 2019-11-28T12:15:21.532Z | 2019-11-25T00:00:00.000 | {
"year": 2019,
"sha1": "be2e14705fd660416d0b34efd245e11302085840",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rpp/v38/1984-0462-rpp-38-e2018184.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f496a3ccd3a9e089cafbb3bf48531d8f187c207",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
190873740 | pes2o/s2orc | v3-fos-license | Intervertebral disc kinematics in active duty Marines with and without lumbar spine pathology
Abstract Military members are required to carry heavy loads frequently during training and active duty combat. We investigated if operationally relevant axial loads affect lumbar disc kinematics in forty‐one male active duty Marines with no previous clinically diagnosed pathology. Marines were imaged standing upright with and without load. From T2‐weighted magnetic resonance images, intervertebral disc (IVD) health and kinematic changes between loading conditions and across lumbar levels were evaluated using two‐way repeated measures analysis of variance tests. IVD kinematics with loading were compared between individuals with and without signs of degeneration on imaging. Linear regression analyses were performed to determine associations between IVD position and kinematic changes with loading. Fifty‐eight percent (118/205) of IVDs showed evidence of degeneration and 3% (7/205) demonstrated a disc bulge. IVD degeneration was not related to posterior annular position (P > .205). Changes in sagittal intervertebral angle were not associated with changes in posterior annular position between baseline and loaded conditions at any lumbar level (r < 0.267; P = .091‐.746). Intervertebral angles were significantly larger in the lower regions of the spine (P < .001), indicating increased local lordosis when moving in the caudal direction. Disc height at the L5/S1 level was significantly smaller (6.3 mm, mean difference = 1.20) than all other levels (P < .001), and posterior disc heights tended to be larger at baseline (7.43 mm ± 1.46) than after loading (7.18 ± 1.57, P = .071). Individuals with a larger baseline posterior annular position demonstrated greater reduction with load at all levels (P < .002), with the largest reductions at the L5/S1 level. Overall, while this population demonstrated some signs of disc degeneration, operationally relevant loading did not significantly affect disc kinematics.
KEYWORDS
intervertebral disc bulge, intervertebral disc degeneration, low back pain, lumbar spine, upright MRI
| INTRODUCTION
Military members are required to carry heavy loads frequently during training and combat. During operations, Marines carry a minimum operational load of 11.3 kg in the form of ballistic protection, which can quickly escalate with the addition of necessary equipment to over 45 kg, exceeding the recommended load carriage limit of 33 kg. 1 Intervertebral disc (IVD) degeneration has been observed to occur at a higher frequency in military populations compared to similarly-aged civilians. 2 It is thought that load-induced changes in IVD health may play a role in the development of clinical back pathology in this population. However, the association between operational loading, disc degeneration, and clinical spinal pathology (ie, bulge, herniation) has not been explicitly explored.
Heavy axial loads alter natural spinal posture, which may fatigue the paraspinal musculature necessary for stabilization. 3 This may increase a Marine's vulnerability to IVD injury and lead to an increased rate of IVD degeneration over time. 4 Furthermore, individuals with disc degeneration demonstrate not only decreased whole lumbar range of motion, but also decreased intervertebral range of motion, specifically at the levels with degenerated IVDs. [4][5][6] This decreased range of motion may alter the axial distribution of weight, affecting compression and shear forces at intervertebral joints. 7,8 Previous investigations on the effect of load and position on IVD kinematics (IVD height and intervertebral angular changes) in Marines demonstrated that, as local lumbar flexion increases under operational loading conditions, anterior IVD height decreases and posterior IVD height increases. 4,9 However, these previous investigations did not examine changes in posterior annular position (defined as focal or asymmetric extension of the disk beyond the vertebral border 10 ) with load, or the influence of disc health on kinematic loading responses. Evidence of IVD kinematic changes in response to postural alterations suggests that axial loading may also affect more specific features of disc morphology, such as annular position.
Disc morphology is often used as an indicator of IVD health, and changes in disc morphology are observed with disc degeneration and injury. 11 Changes in disc morphology with degeneration are thought to be a result of decreased proteoglycan concentration within the nucleus pulposus leading to loss of hydration and ultimately a decrease in disc height over time, or destabilization of the disc due to an annular or nucleus pulposus injury. 12 Although IVD herniation is apparent and well defined, the current literature does not provide a clear clinical definition for the term disc bulge, implicating its dependence on individual patient characteristics. Furthermore, clinically relevant changes in kinematics could include, but are not limited to, significant posterior annular protrusion compressing neural elements, loss of IVD height mimicking fusion, and resultant intervertebral angular derangements.
In order to further understand the influences of load on IVD kinematics in active duty Marines, the purpose of our study was to (a) investigate the effect of operationally relevant load on IVD height, intervertebral angle, and posterior annular position in the lumbar spine, and (b) to compare IVD kinematics between Marines with and without disc degeneration. We hypothesized that under increased axial load from tactical equipment, Marines' lumbar IVDs would demonstrate increased posterior displacement of the annulus fibrosus compared to baseline. Additionally, we hypothesized that Marines with IVD degeneration would exhibit decreased disc height and IVD angles compared to those with nondegenerated IVDs.
| Study design
This is a retrospective analysis of lumbar spine imaging data with a repeated-measures design. The independent variables were loading condition and disc degeneration; the outcome variables were intervertebral angle, posterior annular position, lordosis, and IVD height.
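For illustration only, a two-way repeated-measures analysis of this kind (load condition × lumbar level, here on disc height) could be set up as below; the long-format column names and the toy data are hypothetical, not the study dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)

# Hypothetical long-format table: one row per subject x load condition x lumbar level
levels = ["L1/2", "L2/3", "L3/4", "L4/5", "L5/S1"]
rows = []
for sid in range(1, 11):
    base = rng.normal(7.5, 1.0)
    for level_idx, level in enumerate(levels):
        for load in ("baseline", "loaded"):
            height = base - 0.25 * level_idx - (0.2 if load == "loaded" else 0.0)
            rows.append({"subject": sid, "load": load, "level": level,
                         "disc_height_mm": height + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA with within-subject factors load and level
res = AnovaRM(df, depvar="disc_height_mm", subject="subject",
              within=["load", "level"]).fit()
print(res.anova_table)
```

AnovaRM requires a balanced design (every subject measured once in every load-by-level cell), which matches the repeated-measures structure described above.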
| Volunteers
Utilizing patients from a previous study, a total of 43 male active duty Marines were included.
| Load carriage
Marines were scanned naturally standing without load and standing with body armor (11.3 kg). The 11.3 kg body armor was used because it is the minimum protective equipment that Marines are required to wear during military operations/training. The body armor was retrofitted to remove any metallic components to ensure compatibility with MRI.
Marines were not provided instruction on how to assume each position, but were asked to hold each position steady for the duration of the entire MRI acquisition (approximately 3 minutes).
| Image analysis
Postural measurements (IVD height, IVD angle) were generated from upright MRI images in each load configuration using a previously validated algorithm 13 implemented in OsiriX. 14
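The validated algorithm itself is not reproduced here, but the sketch below conveys the kind of geometry involved: a disc height and a sagittal intervertebral angle derived from four mid-sagittal endplate landmarks bounding a disc. The landmark coordinates and the exact definitions are illustrative assumptions, not the method of reference 13.

```python
import numpy as np

def endplate_angle_deg(anterior, posterior):
    """Signed angle (degrees) of the endplate line defined by two landmarks."""
    a, p = np.asarray(anterior, float), np.asarray(posterior, float)
    v = a - p
    return np.degrees(np.arctan2(v[1], v[0]))

def disc_metrics(sup_ant, sup_post, inf_ant, inf_post):
    """Anterior/posterior disc heights (same units as the landmarks) and the
    sagittal intervertebral angle between the two endplates bounding the disc."""
    anterior_height = np.linalg.norm(np.asarray(sup_ant, float) - np.asarray(inf_ant, float))
    posterior_height = np.linalg.norm(np.asarray(sup_post, float) - np.asarray(inf_post, float))
    angle = endplate_angle_deg(sup_ant, sup_post) - endplate_angle_deg(inf_ant, inf_post)
    return anterior_height, posterior_height, angle

if __name__ == "__main__":
    # hypothetical landmark coordinates in mm (x = anterior direction, y = cranial)
    sup_ant, sup_post = (36.0, 112.0), (4.0, 115.0)   # inferior endplate of the upper vertebra
    inf_ant, inf_post = (35.0, 101.0), (5.0, 108.0)   # superior endplate of the lower vertebra
    h_a, h_p, ang = disc_metrics(sup_ant, sup_post, inf_ant, inf_post)
    print(f"anterior height {h_a:.1f} mm, posterior height {h_p:.1f} mm, angle {ang:.1f} deg")
```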
| Disc grading
All lumbar discs were graded for disc degeneration using the Pfirrmann grading scale. 17 The Marines' discs were separated into degenerated or nondegenerated groups based on Pfirrmann grade; IVDs with a Pfirrmann grade of III or more were assigned to the degenerated group.
3 | RESULTS
| Participant demographics
Complete image datasets were analyzed from 41 male active duty Marines (Table 1).
| Effect of axial load on local lordosis and disc heights
Changes in sagittal intervertebral angle were not associated with changes in posterior annular position between baseline and loaded conditions at any lumbar level (r < 0.267; P = .091-.746). Intervertebral angles were significantly larger in the lower regions of the spine (P < .001), indicating increased local lordosis when moving in the caudal direction. There was also a trend for the main effect of load on intervertebral angles, in that angles were larger (more lordotic) at baseline (7.15 ± 1.63) than with load (6.77 ± 1.85, P = .064).
There was a main effect of level on disc height, in that disc height at the L5/S1 level was significantly smaller (6.3 mm, mean difference = 1.20) than at all other levels (P < .001). There was a trend for posterior disc heights to be larger at baseline (7.43 mm ± 1.46) than after loading (7.18 ± 1.57, P = .071). Additionally, there was a significant interaction (P = .006) between axial load and posterior disc height across levels, such that while the L1/2 disc exhibited an increase in disc height with loading, all other levels exhibited a decrease in disc height. However, of these differences, only the L3/4 and L4/5 discs were statistically significant (P < .022).
| Effect of axial load on posterior annular position
Posterior annular position at baseline was found to be different across lumbar levels (Figure 3).
FIGURE 3 Intervertebral disc (IVD) measurements across lumbar levels in 41 active duty Marines. The Marines' posterior annular position was recorded in the standing unloaded (white) and standing loaded (gray) conditions at all levels. There was a significant main effect for IVD level (P < .035), but no effect of loading (P = .363).
The results demonstrated that disc bulge was not found to increase at lower lumbar segments. Contrarily, the addition of axial load appeared to cause a reduction in posterior annular protrusion, indicating decreasing disc bulge at lower lumbar levels. In this study, we limited our analysis of disc bulge to the midsagittal plane. The prevalence of central versus paracentral disc bulge in healthy populations is unclear. In symptomatic cases, if a disc is protruded or extruded, then a paracentral location is most commonly observed. 25,26 However, even when a disc bulge is paracentral, it is most often diffuse, and able to be seen to some extent in the midsagittal plane. 26 In asymptomatic individuals or in individuals with mild disc bulges-not protrusions or extrusions-the location of the bulge is more likely to be central. 26 As diagnosed spinal pathology was an exclusion criterion for this study, and the lack of paracentral disc bulges was visually confirmed, the volunteers in this study are within the latter group.
There are three main limitations to the study. To acquire the imaging data for analysis, an elastic band was used to gently secure the coil to the volunteers' low back. While this may influence posture, the band and the coil are relatively light (approximately 1 kg), and the posture of patients did not appear to change when it was attached. Additionally, we were unable to resolve the posterior longitudinal ligament (PLL) due to the short T2 relaxation of collagenous tissues. Development of new ultra-short TE pulse sequences may provide insight into the health and function of the PLL under axial load. We used the posterior border of the annulus fibrosus (AF) as a proxy for disc bulge, which may not be the most accurate characterization of IVD movements.
Rather, we may be observing abnormal nucleus pulposus migration with respect to adjacent vertebral bodies. 23 Lastly, Marines are exposed to significant conditioning, as well as physical demand, which may contribute to differences in findings compared to age and sex matched civilians. Such characteristics may impact the external validity of our findings and limit their applicability to the general population. Future work should be directed toward localizing IVD migration in multiple planes to better characterize kinematic responses to axial load. The findings of this analysis warrant further investigation into axial loading and resultant IVD kinematic changes in hopes of elucidating its unique alterations to disc morphology in a highly active population.
ACKNOWLEDGMENTS
The authors thank the Marines from the 1st and 5th Regiments who supported this effort. | 2019-06-14T13:46:36.897Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "b5e698372b3d14161a80532ad29cbfda92ca688c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jsp2.1057",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "efe230370b10d3a7b0543df3ef6d83e5a084f615",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237750337 | pes2o/s2orc | v3-fos-license | Simulating Historical Earthquakes in Existing Cities for Fostering Design of Resilient and Sustainable Communities: The Ljubljana Case
: The seismic exposure of urban areas today is much higher than centuries ago. The 2020 Zagreb earthquake demonstrated that European cities are vulnerable even to moderate earthquakes, a fact that has been known to earthquake-engineering experts for decades. However, alerting decision-makers to the seismic risk issue is very challenging, even when they are aware of historical earthquakes that caused natural catastrophes in the areas of their jurisdiction. To help solve the issue, we introduce a scenario-based risk assessment methodology and demonstrate the consequences of the 1895 Ljubljana earthquake on the existing building stock. We show that a 6.2 magnitude earthquake with an epicentre 5 km north of Ljubljana would cause many deaths and severe damage to the building stock, which would likely lead to direct economic losses higher than 15% of the GDP of the Republic of Slovenia. Such an event would be catastrophic not only for the community directly affected by the earthquake but for the entire country. We have disseminated this information over the course of a year together in addition to formulating a plan for enhancing the community seismic resilience in Slovenia. Hopefully, local decision-makers will act according to their jurisdiction in Slovenia and persuade decision-makers across Europe to update the built environment renovation policy at the European level.
Introduction
Historically, earthquakes have caused many natural disasters (e.g., [1][2][3]). In Europe, the Great Lisbon earthquake and the Great Messina earthquake are probably the deadliest events of this nature to have happened in the modern era. The Great Lisbon earthquake affected about 800,000 km² of land and killed up to 100,000 people (e.g., [1]), while the number of fatalities in the Great Messina earthquake may have been even higher [3]. It is thus very well known that very strong earthquakes can occur in Europe. However, the 2009 L'Aquila earthquake and the 2020 Zagreb earthquake showed that the existing built environment in Europe is not immune to strong or even moderate earthquakes [4,5]. Although the seismic resistance of new buildings and other types of new facilities is not problematic, the exposure of urban areas today is much higher than centuries ago. Therefore, strong earthquakes can still have a disastrous impact on today's built environment and societal wellbeing if their epicentre occurs near urban areas. Even though these facts are well known to experts in earthquake engineering, it is extremely difficult to establish governmental plans and actions to enhance seismic resilience prior to such events. The issue cannot be solved by learning from past events because learning from rare events is statistically unusual [6]. Therefore, it makes sense to develop physics-based methods and tools that can provide realistic information about the effects of strong earthquakes and seismic risk to decision-makers. In Slovenia, seismic risk was recently addressed within the National Disaster Risk Assessment (NDRA) [22,23], which was then adopted by the Government of the Republic of Slovenia. The NDRA accounts for fifteen different hazards including natural hazards (e.g., seismic and flood hazard) as well as man-made hazards (e.g., terrorism). In the following, the NDRA methodology [23] and the estimation and evaluation of seismic risk are briefly summarised.
The National Disaster Risk Assessment Methodology
The National Disaster Risk Assessment considers different hazards [23]. For each hazard, two or three adverse events, which are also called hazard scenarios [23], are defined based on an arbitrarily selected level of likelihood. Then, the consequences of the defined hazard scenarios are estimated. The likelihood and the consequences of the hazard scenario are used to define the risk. For this purpose, the risk matrix is defined (Figure 1). Hazard scenarios of the same hazard enable a within-hazard risk comparison. However, the between-hazard risk comparison is made for only one hazard scenario (i.e., the so-called representative hazard scenario). The representative hazard scenario is considered to be "the reasonable worst-case scenario", whereby the term "reasonable" implies that the scenario is not related to very long return periods. Namely, the return period of the representative scenario is limited to 500 years. Consequently, catastrophic events with longer return periods, which could also occur in Slovenia, are excluded from the risk assessment by definition.
Figure 1. Risk matrix according to [23] for the evaluation and communication of representative earthquake scenarios in Ljubljana. The risk in [23] is evaluated for the consequence on (i) people, (ii) economy, environment, and cultural heritage (E&E&C), and (iii) politics and society (P&S). The overall consequence is also defined.
The consequence and likelihood levels of a hazard scenario are characterised by a number from one to five. In particular, the consequence level increases with the increase of severity of consequences, while the likelihood level increases with the decrease of the return period (i.e., lower return periods represent a higher probability of occurrence). The consequence level is estimated by considering the impact on (i) people, (ii) economy, environment, and cultural heritage, and (iii) politics and society. Each consequence level is defined by the loss interval. For example, consequence level one corresponds to not more than five fatalities and 20 evacuated people, and economic losses lower than 100 million EUR (about 0.25% of the GDP). However, consequence level five corresponds to equal to or more than 200 fatalities or 500 evacuated people, and economic losses higher than 2.4% of the GDP. Analogously, each likelihood level is defined by the interval of the return period. For example, likelihood level one represents events with a return period of 250 years or more, while likelihood level five corresponds to events with a return period of only five years or less. The estimated risk for each considered hazard is communicated by the risk matrix ( Figure 1). The highest risk level is characterised by high likelihood and high consequence. However, risk level (low, medium, high, or very high) is assigned to each component of the risk matrix, which sets the basis for the within-hazard and between-hazard risk communication and comparison.
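To make the mapping from estimated consequences and return periods onto the risk matrix concrete, the sketch below (Python) implements the lookup logic. Only the bounds quoted above for consequence levels one and five, likelihood level one, and likelihood level five come from the text; all intermediate thresholds and the risk-level table itself are illustrative assumptions, not the official NDRA values.

# Illustrative sketch of the NDRA risk-matrix lookup. Only the bounds quoted
# in the text for levels 1 and 5 are taken from the source; all intermediate
# thresholds and the risk-level table are assumptions.

def likelihood_level(return_period_years):
    """Higher level = more frequent event (shorter return period)."""
    if return_period_years >= 250:
        return 1
    if return_period_years >= 100:   # assumed intermediate bound
        return 2
    if return_period_years >= 25:    # assumed intermediate bound
        return 3
    if return_period_years > 5:      # assumed intermediate bound
        return 4
    return 5                         # return period of five years or less

def consequence_level_people(fatalities, evacuated):
    """Consequence level from the impact on people (bounds for levels 2-4 are assumed)."""
    if fatalities >= 200 or evacuated >= 500:
        return 5
    if fatalities >= 50 or evacuated >= 250:    # assumed
        return 4
    if fatalities >= 20 or evacuated >= 100:    # assumed
        return 3
    if fatalities > 5 or evacuated > 20:        # assumed
        return 2
    return 1   # not more than five fatalities and 20 evacuated people

def risk_level(consequence, likelihood):
    """Qualitative risk from the matrix; the exact colouring is illustrative."""
    score = consequence + likelihood
    if score >= 9:
        return "very high"
    if score >= 6:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Ljubljana representative scenario: consequence level 5, likelihood level 2.
print(risk_level(5, 2))   # -> "high", consistent with the evaluation in the text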
It is obvious that such risk evaluation is extremely sensitive to the definition of the hazard scenarios. As the scenarios are selected based on expert opinion, rare events can be overlooked. This can be especially problematic in the case of earthquake events because the potential of earthquake consequence rapidly increases with return periods greater than 250 years. Neglecting low-probability, high-consequence events in disaster risk assessments can quickly lead to bias. The issue can be solved by including all events in the risk assessment and estimating their average impact in a selected period. The time-based risk assessment is the established norm in the field of seismic risk assessment. However, it may not be suitable for a comparative risk assessment, because the models and methods needed in that approach have not yet been developed for all the hazards. An alternative way to address this problem is to expand the domain of likelihood and consequence level of the risk matrix, which would enable the inclusion of low-probability, high-consequence events into the risk assessment.
The Assessment of Seismic Risk
The National Disaster Risk Assessment [23] considered three earthquake scenarios. Earthquakes were simulated in Bovec (north-western Slovenia), Ljubljana (central Slovenia), and Brežice (south-eastern Slovenia). The Ljubljana earthquake was considered the representative scenario [23,24].
The three earthquake scenarios were defined by the epicentral seismic intensity VII-VIII according to the European macro-seismic scale (EMS) [25]. Based on the selected seismic intensity, the return periods of the earthquakes were estimated. For all three scenarios, the return period was between 150 and 250 years, which corresponded to likelihood level two in the risk matrix.
The most severe consequences were estimated for the Ljubljana earthquake scenario. The consequence level was first estimated for each type of impact separately. The impact on people was characterised by consequence level five, because it was realised that the earthquake would cause a high number of evacuated people (5200). It is interesting to note that the consequence level would have been equal to four had the impact on people only depended on the number of fatalities (60). In terms of impact on the economy, environment, and cultural heritage, consequence level five was assigned. In this case, the high consequence level was determined due to the high valuation of the buildings that were estimated to be damaged (about 3 billion EUR or 8.2% of GDP). It should be noted that the valuation of the damaged buildings is not equal to the economic losses caused by the earthquake. In terms of impact on politics and society, the consequence level was equal to four, which was due to the expected psychosocial effects (e.g., increased fear in people) and political consequences (e.g., the need to request international aid) of the earthquake. By considering the consequence levels for all three types of impact, the overall consequence level was estimated to be five. Finally, based on the consequence levels estimated for different types of impact and by also considering the estimated likelihood level, the risk associated with the Ljubljana earthquake scenario was evaluated. It was concluded that the level of risk was high, regardless of the type of impact ( Figure 1).
In the case of the other two scenarios, which were not considered representative of the seismic hazard, the consequence was lower. This was mainly because the urban areas in Bovec and Brežice are much smaller than that in Ljubljana. The consequence level in these cases varied between one and three. Thus, the risk was evaluated as low, medium, and high depending on the type of consequence.
The Building Stock in the Republic of Slovenia
The sources of building stock data and population data were, respectively, the Real Estate Register [26] and the Central Population Register [27]. The Real Estate Register data refers to unique building units, which can be either entire buildings or parts of buildings in the case of large buildings. The distinction between the two is not evidenced in the register and is thus not considered in this study. For brevity, building units are hereinafter termed "buildings".
Each building from the Real Estate Register is described with the following data: the centroid coordinates, year of construction, occupancy class, net floor area, predominant material of the load-bearing structure, building value based on a real estate mass appraisal procedure, number of storeys, and building height.
The Central Population Register includes the number of permanent and temporary residents in residential buildings. To avoid the double-counting of people, only permanent residents were considered. The building and population density are presented in Figures 2 and 3, respectively. A concentration of buildings, as well as people, can be observed in urban areas of Ljubljana, Maribor, Celje, and Kranj.
Although the materials of the load-bearing structures are specified in the Real Estate Register, the type of structural system is unclear (e.g., reinforced concrete structures can have a wall system, a frame system, a dual system, etc.). In addition, the Real Estate Register does not contain information about the dimensions of the load-bearing structure elements, material properties, and the results of the structural design. However, building data were available for the total building stock and were considered sufficient for the development of a simplified building class fragility model, which is described in Section 4.2. Buildings were classified into 20 building classes (see Table 1). The simplified fragility model of each building class accounts for a specific predominant material of the load-bearing structure, the construction period, and the number of storeys.
The adopted fragility model is simplistic with respect to the number of parameters used to define building class fragility models. If the building stock data were more precise (e.g., [28]), other parameters of the buildings and other fragility models would be considered in the development of the building stock fragility models [29][30][31][32][33]. Nevertheless, engineering judgment would be needed to define other parameters that are not yet available [31]. However, it was proven before that pure empirical and heuristic-empirical models, developed by different authors, provided reasonably comparable risk results [30].
The vast majority of buildings in Slovenia are either brick or stone masonry buildings or reinforced concrete buildings. Buildings with masonry (brick or stone) and reinforced concrete structures were therefore treated as separate material classes. Steel and timber buildings, as well as all other buildings with a mixed or unspecified material of the load-bearing structure, were classified in the third material class (i.e., other or unspecified load-bearing structure material). It is worth emphasising that the buildings with steel and timber structures were not grouped into separate classes because the percentage of these buildings is very low.
The majority of buildings from the third material class have a load-bearing structure that is made of a combination of masonry and reinforced concrete.
The distinction between the periods of construction is based on standards for earthquake-resistant design in Slovenia as well as in Yugoslavia, which extended to the area of today's Slovenia until 1991 [34]. Buildings built up to 1964 were not designed for seismic actions. The first building code explicitly addressing seismic action and design was in force from 1964 to 1981. Construction based on the second generation of earthquake-resistant design started in Yugoslavia in 1982. These codes were valid until 2008, when Eurocodes became mandatory in Slovenia. However, no distinction is made between buildings built in the period from 1982 to 2008 and those built after 2008, because the number of buildings from the latest period is relatively low and buildings from the 1982-2008 period already have relatively high earthquake resistance.
The number of storeys also affects the building fragility model. Buildings with a maximum of three storeys are classified as low-rise buildings. Medium-rise buildings and high-rise buildings are those with four to six storeys and seven storeys or more, respectively. This classification is similar to that used in the literature (e.g., [9,35,36]).
Based on the building stock classification, 27 building classes could be defined, but some simplifications were made in order to make the population of buildings between building classes more uniform. Therefore, medium-rise and high-rise masonry buildings were represented by only one class. The same was done in the case of buildings made of materials other than masonry and reinforced concrete. Moreover, no distinction was made between medium-rise and high-rise buildings with reinforced concrete structures that were constructed before 1964.
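As an illustration of the classification just described, the following sketch (Python) maps a building record to a (material, period, height) class, reproducing the merging rules stated above. The record field names and class labels are hypothetical; Table 1 of the original paper remains the authoritative definition of the 20 classes.

# Illustrative classification of a building record into the class scheme
# described in the text (material x construction period x height).
# Field names and class labels are assumed, not taken from the Real Estate Register.

def material_group(material):
    if material in ("brick masonry", "stone masonry"):
        return "masonry"
    if material == "reinforced concrete":
        return "rc"
    return "other"           # steel, timber, mixed or unspecified material

def period_group(year):
    if year <= 1964:
        return "pre-1965"    # no seismic design
    if year <= 1981:
        return "1965-1981"   # first earthquake-resistant design code
    return "post-1981"       # second-generation codes and Eurocodes

def height_group(storeys, material):
    if storeys <= 3:
        return "low-rise"
    # medium- and high-rise buildings are merged for masonry and "other"
    if material != "rc":
        return "mid/high-rise"
    return "mid-rise" if storeys <= 6 else "high-rise"

def building_class(record):
    mat = material_group(record["material"])
    per = period_group(record["year_built"])
    hgt = height_group(record["storeys"], mat)
    # RC buildings built before 1965: medium- and high-rise merged as well
    if mat == "rc" and per == "pre-1965" and hgt in ("mid-rise", "high-rise"):
        hgt = "mid/high-rise"
    return (mat, per, hgt)

example = {"material": "brick masonry", "year_built": 1958, "storeys": 4}
print(building_class(example))   # ('masonry', 'pre-1965', 'mid/high-rise')

With these merging rules the scheme yields exactly 20 classes (6 masonry, 8 reinforced concrete, 6 other), matching the number stated in the text.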
In order to take into account only the most important buildings, the characteristic building stock of the Republic of Slovenia was defined, which includes buildings that have permanent residents or an estimated value of 50,000 EUR or more. Based on this constraint, the characteristic building stock includes about 520,000 buildings, the value of which was estimated at about 97% of the value of the total building stock. It is interesting that the characteristic building stock is approximately evenly distributed across all three construction periods (Table 1). It includes 332,000 masonry buildings (64% of all buildings in the characteristic building stock), with a value estimated at around 42% of the total value of the characteristic building stock. Almost 60% of residents of Slovenia are permanently registered in these buildings. The value of reinforced concrete buildings was estimated to be 31% of the total value of the characteristic building stock, while the value of buildings with structures made of other (or unspecified) materials represents 28% of the total market value. Most of the buildings have three storeys or less. The number of buildings with four or more storeys is approximately 17,200, which is only about 3% of all buildings in the characteristic building stock. However, the estimated market value of these buildings represents approximately one-third of the total value of the characteristic building stock, and almost half a million people live in these buildings.
The Seismic Hazard in Slovenia
In 2020, two seismic hazard models were available (i.e., [37,38]). Here we refer to the official seismic hazard model in the Republic of Slovenia, which was introduced by Lapajne et al. [37]. According to that model, the highest seismic hazard is observed for the north-western, central, and south-eastern parts of Slovenia. These parts of Slovenia also coincide with the epicentres considered in the scenarios from the National Disaster Risk Assessment (Section 2). According to [37], the peak ground acceleration (PGA) for the return period of 475 years in the most hazardous regions was estimated at 0.25 g. By moving towards the north-east and south-west of Slovenia, the seismic hazard is reduced. The lowest PGA in these regions does not exceed 0.10 g, even for the case of a 1000-year return period.
The seismic hazard model [37] primarily depends on the catalogue of past earthquakes. The seismic hazard in the north-western part of Slovenia is influenced by the 1511 Idrija earthquake (magnitude 6.8), the 1976 Friuli earthquake (magnitude 6.5), and the 1998 and 2004 Posočje earthquakes (magnitudes 5.7 and 4.9). The seismic hazard model in central Slovenia is controlled mainly by weaker earthquakes, but also by some stronger earthquakes, such as the 1882 Vrhnika earthquake (magnitude 5.0), the 1895 Ljubljana earthquake (magnitude 6.1), and the 1926 Postojna earthquake (magnitude 5.6). In the south-eastern part of Slovenia, the seismic hazard model considers numerous earthquakes with relatively low magnitudes and few with moderate magnitudes. The 1917 Brežice earthquake (magnitude 5.7) is the strongest known earthquake to have occurred in this region.
A new official seismic hazard model is currently in preparation. However, its draft, unofficial version has already been presented [39]. The draft of the new model indicates that the regions with the highest seismic hazard in Slovenia are the same as in the case of the current official seismic hazard model (i.e., north-western, central, and south-eastern Slovenia). Based on this information and in consideration of the building stock's exposure, which is the highest in the Ljubljana region, it was decided to simulate the consequences of the 1895 Ljubljana earthquake. The selected event is described in more detail in Section 5.1.
Scenario-Based Seismic Risk Assessment Methodology
The term "scenario-based" in this paper indicates that the methodology enables the assessment of seismic risk for a particular earthquake scenario that is defined by earthquake magnitude and hypocenter. Thus the "scenario-based methodology", as used in this paper, has different meaning than in the case of a functional, probabilistic-based approach that is adopted from two-level factorial design [40].
The methodology for scenario-based seismic risk assessment of building stock, as considered in this paper (Figure 4), is consistent with general seismic risk assessment methodology [41], but the methodology was realised in a way that followed the course of events during an earthquake. The earthquake scenario was characterised by the earthquake magnitude and hypocenter. A spatial ground-motion model was then used to simulate the ground motions at the locations of buildings (Section 4.1). The ground-motion simulation was followed by the damage simulation. The damage of each building was characterised by a damage state starting from the state of minor to no damage to the state of complete damage. The building damage state was simulated based on the building stock fragility model (Section 4.2). Then, the consequences of the earthquake were determined for each separate building and at the level of the building stock (Section 4.3). The consequences were determined in terms of the direct economic losses, the number of collapsed buildings, and the number of fatalities. The ground-motion model and the damage model were considered uncertain. Thus, the ground motions and the damage states of the buildings were simulated many times, which then allowed for the quantification of the effects of uncertainties on the consequences.
The Ground-Motion Model
For the simulation of ground-motion fields, the framework introduced by Weatherill et al. [42] was used in conjunction with the Bindi et al. [43,44] ground-motion prediction equation, which is based on the moment magnitude, location of the hypocentre, and other fault parameters. The ground-motion model accounts for the between- and within-event variability of ground motions:
ln Y i,j = ln Y i + τη j + σε i,j (1)
where Y i,j is the ground-motion intensity simulated at location i for the j-th realisation of the earthquake scenario, Y i is the median ground motion at location i, τη j represents the ratio between the median ground-motion intensity and the simulated ground-motion intensity for the j-th realisation of the earthquake (i.e., the between-event residual), and the term σε i,j is the ratio between the simulated ground motion at location i of the j-th earthquake realisation and the median ground motion at location i (i.e., the within-event residual). The between- and within-event residuals depend on the between- and within-event standard deviations τ and σ, which are multiplied by independent random variables with a standard normal distribution, η j and ε i,j , respectively. The within-event residuals are affected by the spatial correlation between random variables ε i,j at different locations. This spatial correlation is incorporated into the model in order to account for the similarity of the ground motions at locations close to one another. In the past, many different models of spatial correlation between the ground motions have been developed. We used the model proposed by Jayaram and Baker [45]. The ground motions were simulated for the peak ground acceleration (PGA) at the rock level. The area was discretised into cells of 0.5 × 0.5 km. A constant value of the PGA at the rock level was assigned to all buildings within one cell for a given realisation of the ground-motion field, which was, in addition to the magnitude and hypocentre, affected by the length and area of the activated fault [46]. However, the PGA at the rock level was adjusted by the soil amplification factor based on the draft of the new Eurocode 8 [47]. For this purpose, a building-specific soil type map for the building stock was developed, as it has been shown that damage caused by earthquakes and soil amplification factors are correlated [48].
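The sketch below (Python) illustrates the structure of Equation (1) for a single realisation of a ground-motion field: one between-event residual shared by all sites and spatially correlated within-event residuals. The site coordinates, median PGAs, standard deviations, correlation length, and soil factors are invented placeholder values, and the exponential correlation function is a simplified stand-in for the Jayaram and Baker model, not an implementation of it.

import numpy as np

# One realisation of a spatially correlated ground-motion field (Equation (1)):
# ln Y_ij = ln(median_i) + tau * eta_j + sigma * eps_ij.
rng = np.random.default_rng(0)

# Dummy site grid (km) and dummy median PGA (g) per site (stand-in for a GMPE).
coords = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 2.0], [20.0, 10.0]])
median_pga = np.array([0.30, 0.29, 0.18, 0.06])

tau, sigma = 0.35, 0.55          # assumed between/within-event std (ln units)
corr_length = 10.0               # assumed correlation length in km

# Within-event residuals: multivariate normal with exponential correlation.
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr = np.exp(-3.0 * dists / corr_length)
eps = rng.multivariate_normal(np.zeros(len(coords)), corr)

eta = rng.standard_normal()      # single between-event residual for this event

pga_rock = np.exp(np.log(median_pga) + tau * eta + sigma * eps)

site_amplification = np.array([1.0, 1.2, 1.5, 1.3])   # assumed soil factors
pga_surface = pga_rock * site_amplification
print(np.round(pga_surface, 3))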
The Damage Model
The damage model depended on the stochastic building stock fragility model and the stochastic ground-motion model. The fragility functions were simulated at the building level with consideration of the effects of uncertainty of fragility at the level of the building class. Five damage states were considered, representing no to minor damage (DS0), slight damage (DS1), moderate damage (DS2), extensive damage (DS3), and complete damage (DS4). A detailed description of the damage associated with each damage state can be found elsewhere [9]. The fragility functions were defined for the PGA in the form of a lognormal cumulative distribution function which is typically assumed (e.g., [9,49]). Based on the simulated set of fragility functions, the damage state of a building in a given simulation of fragility function and a given ground-motion field was determined by generating a uniformly distributed random number from the interval [0,1] and then through assignation of the damage state, as shown in Figure 5.
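A minimal sketch (Python) of the damage-state assignment illustrated in Figure 5: the exceedance probabilities of DS1 to DS4 are evaluated from lognormal fragility functions at the simulated PGA, and a single uniform random number selects the damage state. The median PGAs and dispersions used here are arbitrary illustrative values, not the fragility parameters of the study.

import math
import random

# Damage-state sampling for one building (the logic of Figure 5).
# Fragility of each damage state is a lognormal CDF in PGA; medians and
# dispersions below are illustrative values only.
MEDIANS = {"DS1": 0.10, "DS2": 0.20, "DS3": 0.35, "DS4": 0.55}   # g, assumed
BETAS   = {"DS1": 0.45, "DS2": 0.45, "DS3": 0.45, "DS4": 0.40}   # assumed

def p_exceed(ds, pga):
    """P(damage >= ds | PGA): lognormal CDF with median MEDIANS[ds]."""
    z = (math.log(pga) - math.log(MEDIANS[ds])) / BETAS[ds]
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_damage_state(pga, rng=random.Random(1)):
    u = rng.random()
    # Check the most severe state first and return the first damage state whose
    # exceedance probability is at least the sampled uniform number.
    for ds in ("DS4", "DS3", "DS2", "DS1"):
        if u <= p_exceed(ds, pga):
            return ds
    return "DS0"   # no to minor damage

print(sample_damage_state(pga=0.30))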
The simulation of fragility functions of buildings was performed in two stages: at the level of building class and at the level of individual buildings. At the building class level, the uncertainty in the fragility functions was considered by randomly simulating the class-level median PGA causing DS4. This parameter is denoted as PGA DS4,c,l , where c represents the building class and l is the index of the damage simulation. Please note that the index of a damage simulation (l) is denoted differently than the index of a ground-motion simulation (j) (Section 4.1), because a given ground-motion simulation can be the basis for more than one damage simulation. In Figure 4, the number of ground-motion simulations is denoted as N and the number of damage simulations per one ground-motion simulation is denoted as M. Therefore, the total number of damage simulations is equal to N·M.
Due to a lack of information, PGA DS4,c,l was considered uniformly distributed within the bounds estimated from existing studies of similar buildings [9,49] (Figure 6a). Because of a certain level of conservatism in the fragility functions obtained from the literature, a correction factor called the load-conservatism factor was introduced and applied. It accounted for the impact of the code-based response spectrum and corresponding ground motions, which are often considered in the fragility analysis [18,19]. It was shown [50] that by using such ground motions, the median intensity causing a designated damage state can be underestimated significantly. This conservatism was eliminated approximately by the load-conservatism factor, which was estimated by means of nonlinear dynamic analyses of building-class equivalent single-degree-of-freedom (SDOF) models. The response of such SDOF models was simulated by code-consistent accelerograms [50] and hazard-consistent accelerograms that were selected from the NGA and RESORCE ground-motion databases [51,52]. The bounding values of PGA DS4,c,l adjusted by the load-conservatism factor are significantly increased (Figure 6a).
In the second stage, the fragility functions were fully defined at the level of individual buildings. This was done by considering the uncertainty in the fragility of buildings within the building class. First, the median PGA causing DS4 of a building (PGA DS4,k,l ) was modelled by a lognormally distributed random variable centred around PGA DS4,c,l simulated in the first stage (Figure 6b). In PGA DS4,k,l , k and l are the index of the building and index of the damage simulation, respectively. The within-class uncertainty in the PGA causing DS4 was characterised by the lognormal standard deviation β DS4,c , which was considered equal to 0.40 [9] in the case of masonry and reinforced concrete buildings. However, for building classes 15-20 (Table 1), β DS4,c was increased, based on expert opinion, to 0.60 to account for the fact that the material of the load-bearing structure for most buildings in these classes is unknown. The median PGAs of fragility functions for other damage states PGA DS1,k,l , PGA DS2,k,l , and PGA DS3,k,l (Figure 6c) were estimated relative to PGA DS4,k,l with consideration of the damage-state PGA ratios from [9]. However, it was also considered that less severe damage states correspond to lower ductility demand and thus lower values of the load-conservatism factor. Lastly, the lognormal standard deviations of fragility functions defined at the building level were determined. For this purpose, the model developed in [53] was used to obtain the lognormal standard deviations of the DS4 fragility function (β DS4,k ). The lognormal standard deviations for other damage states β DS1,k , β DS2,k , and β DS3,k were then calculated according to [54].
Figure 6. (a) The range of PGA DS4,c,l for the considered building classes with and without consideration of the load-conservatism factor, (b) schematic presentation of two simulations of PGA DS4,c,l and the corresponding distributions of PGA DS4,k,l for building class 3, and (c) schematic presentation of two simulations of PGA DS4,k,l and the corresponding fragility functions for two buildings from building class 3.
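The two-stage fragility simulation can be sketched as follows (Python): a class-level median PGA causing DS4 is drawn from a uniform distribution between assumed bounds (already adjusted by a load-conservatism factor), and a building-level median is then drawn from a lognormal distribution centred on the class-level value with the within-class dispersion of 0.40 quoted above. The bound values and the damage-state PGA ratios are illustrative assumptions, not the study's inputs.

import numpy as np

# Two-stage simulation of median PGAs of the fragility functions for one
# building class (values are illustrative assumptions).
rng = np.random.default_rng(2)

# Stage 1: class-level median PGA causing DS4, uniform between assumed bounds
# that have already been adjusted by a load-conservatism factor.
pga_ds4_bounds = (0.45, 0.80)                    # g, assumed
pga_ds4_class = rng.uniform(*pga_ds4_bounds)

# Stage 2: building-level median PGA causing DS4, lognormal around the
# class-level value with within-class dispersion beta_DS4,c = 0.40.
beta_within_class = 0.40
pga_ds4_building = pga_ds4_class * np.exp(beta_within_class * rng.standard_normal())

# Medians for less severe damage states via assumed damage-state PGA ratios.
ratios = {"DS1": 0.25, "DS2": 0.45, "DS3": 0.70, "DS4": 1.00}    # assumed
medians = {ds: r * pga_ds4_building for ds, r in ratios.items()}
print({ds: round(m, 3) for ds, m in medians.items()})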
The Consequence Model
The consequence model supported the estimation of (direct) economic losses, number of collapsed buildings, and number of fatalities for each simulation of building damage. The direct economic losses for the k-th building and l-th damage simulation, L k,l , were modelled as:
L k,l = A k · c k,l · C R
where A k is the net floor area of the k-th building and c k,l is the ratio between the DS-dependent repair/reconstruction cost and the estimated reconstruction cost per m² of the net floor area, denoted as C R . The ratio c k,l depended on the damage state of the k-th building from the l-th damage simulation. This was equal to 0.02, 0.1, 0.4, and 1.0 for damage states from DS1 to DS4, as recommended in [9]. The net floor areas A k were obtained from the Real Estate Register [26] (Section 3.1). The average new construction cost of 1100 EUR/m² of the net floor area (inclusive of VAT) [55] was considered as the base cost to define C R . This cost was then increased by 13.5%, as estimated from the literature [56][57][58], to obtain the reconstruction cost C R . The total direct economic losses for the l-th damage simulation were determined as the sum of L k,l over the entire building stock. The number of collapsed buildings was determined as a portion of buildings reaching the complete damage state (DS4). The ratio between the number of collapsed buildings and buildings reaching DS4 was considered dependent on the material of the load-bearing structure and number of storeys, according to [9]. This varied from 5% to 15%. Therefore, the number of collapsed buildings in a given damage simulation was first determined for each building class. By summing up the numbers of collapsed buildings of all building classes, the number of collapsed buildings was then determined at the level of the building stock.
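In code, the loss model reduces to a per-building multiplication, as in the sketch below (Python). The damage ratios and cost figures follow the text; the example building is invented, and the exact reconstruction cost used in the study may differ from the value computed here.

# Direct economic loss of one building: L = A * c(DS) * C_R.
# Damage ratios follow the text; the reconstruction cost below is the quoted
# new-construction cost increased by the quoted 13.5% (an assumption about the
# exact value used in the study). The example building is invented.
DAMAGE_RATIO = {"DS0": 0.0, "DS1": 0.02, "DS2": 0.1, "DS3": 0.4, "DS4": 1.0}
C_R = 1100 * 1.135   # EUR per m2 of net floor area

def building_loss(net_floor_area_m2, damage_state):
    return net_floor_area_m2 * DAMAGE_RATIO[damage_state] * C_R

print(round(building_loss(150.0, "DS3")))   # a 150 m2 building in DS3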
The number of fatalities F k,l in the k-th building and l-th damage simulation was determined as follows:
F k,l = O k,l · λ f,k · N P,k
where O k,l is the Boolean variable, which takes the value of 1 in the case of the building's collapse and 0 otherwise, λ f,k is the fatality rate due to collapse of the k-th building, and N P,k is the number of people inside building k. The value of O k,l was determined by randomly generating a subset of collapsed buildings from all buildings reaching DS4 within a given building class. The size of the subset of collapsed buildings in the l-th damage simulation was equal to the number of collapsed buildings within the given building class determined in the l-th damage simulation. The fatality rate λ f,k in general depends on many parameters, such as the type of structural system, the material of the structure, and the building height [59]. However, in the case of conventional structural systems common for Slovenia, λ f,k is close to 0.10. Therefore, the value of 0.10 was considered in this study. The same value was assumed also in [18], where a simplified version of the fatality rate model by Zuccaro and Cacace [60] was applied. Moreover, N P,k was estimated by considering the equivalent annual occupancy model [61], which makes it possible to obtain the yearly average number of people in a building based on the building's purpose, surface area, and number of permanently registered residents [27]. Finally, the number of fatalities in the l-th damage simulation F l was determined as the sum of F k,l over all buildings.
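The fatality model is equally compact, as sketched below (Python): fatalities occur only in collapsed buildings, at the fatality rate of 0.10 applied to the estimated occupants, with collapses sampled as a random subset of the buildings reaching DS4. The occupant numbers and the 10% collapse ratio (within the 5-15% range quoted above) are illustrative assumptions.

import numpy as np

# Fatalities per building: F = O * lambda_f * N_P, with collapses sampled as a
# random subset of buildings in DS4 within one building class.
rng = np.random.default_rng(3)
LAMBDA_F = 0.10          # fatality rate given collapse (from the text)

# Invented example: occupants of buildings in DS4 within one building class.
occupants_ds4 = np.array([4, 2, 35, 6, 120, 3, 10, 8])
collapse_ratio = 0.10    # assumed value within the 5-15% range
n_collapsed = int(round(collapse_ratio * len(occupants_ds4)))

collapsed_idx = rng.choice(len(occupants_ds4), size=n_collapsed, replace=False)
fatalities = LAMBDA_F * occupants_ds4[collapsed_idx].sum()
print(n_collapsed, round(float(fatalities), 1))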
Consequences of the 1895 Ljubljana Earthquake on the Current Building Stock
The 1895 Ljubljana earthquake occurred on Easter Sunday, 14 April, at 23:17 local time [2]. The magnitude of the earthquake was estimated to be M L = 6.1 [62] and the EMS intensity was estimated between VIII and IX. The epicentre was estimated at 46.1° N and 14.5° E, which is approximately 5 km north of Ljubljana downtown, at a depth of about 16 km [62]. The shock was reportedly felt in Vienna (Austria), Assisi and Florence (Italy), and Split (Croatia). The greatest damage was caused within a radius of 18 km from the epicentre. At the end of the 19th century, Ljubljana's population was about 31,000, with around 1400 buildings. About ten percent of the buildings were damaged or destroyed and about 10 people died [63]. Since then, the building stock and the population around Ljubljana have increased substantially.
Simulation of Ground-Motion Fields in Terms of PGA
For the purpose of the ground-motion simulation based on [43,44], the local magnitude M L = 6.1 was converted to moment magnitude M w = 6.2 according to [64]. The earthquake epicentre was set to 46.1° N and 14.5° E, and the hypocentral depth was considered equal to 16 km [2]. The best estimate of the Žužemberk fault parameters (strike = 315°, dip = 80°, rake = 160° [65]) was considered. It was assumed that the hypocentre is at the centroid of the fault surface, the length and area of which were estimated according to the model of Leonard et al. [46].
Ground-motion fields for PGA at the rock level were simulated 500 times, in order to account for ground-motion model uncertainties. In Figure 7, the realisation of two ground-motion fields is presented. The ground-motion fields in Figure 7a,b approximately correspond, respectively, to the 5th and 50th percentile of economic losses, which are presented later in Section 5.2. A large difference in the ground-motion fields can be observed, which implies that the uncertainties in the ground-motion intensities can greatly affect the variability of the consequences.
The average values of PGA at rock and surface level for all 500 simulations of ground-motion fields are presented in Figure 8. It can be seen that the average PGA at the rock level above the rupture area is approximately uniform. By moving away from the projection of the rupture area, the values of PGA decrease according to the ground-motion model. However, the local soil condition (Figure 8b) increases the average PGA at the surface of almost all the cells, as there are only a few sites in the investigated area that are classified as rock or rock-like sites (soil type A, according to [66]).
Figure 8. The average PGA for 500 simulations at (a) rock and (b) surface level. In the latter case, the ground-motion intensities are presented only for the cells that include buildings.
The portion of the building stock exposed to the ground-motion effects is relatively large. About 30% of the characteristic building stock (165,000 buildings) is located within 35 km from the projection of the fault rupture area to the surface. In this area, the PGA is expected to exceed 0.05 g, which is about the threshold at which building damage starts to develop [58,67]. The real estate value of the exposed building stock is approximately 47 billion EUR. The exposed area is populated by 747,000 people, which is more than one-third of the population of Slovenia. About 31% of the exposed buildings were built before 1965, 29% between 1965 and 1981, and 40% after 1981. The population in these building classes is quite uniformly distributed. Buildings built before 1965, between 1965 and 1981, and after 1981 are occupied, respectively, by about 221,000, 252,000, and 274,000 people.
Simulation of Building Stock Damage and Consequences
For each of the 500 ground-motion fields, 20 sets of fragility functions for each building were simulated. Consequently, the size of the damage sample, as well as that of the consequence sample, of each building was equal to 10,000.
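The overall simulation is thus a nested Monte Carlo loop: N = 500 ground-motion fields, M = 20 damage simulations per field, and statistics extracted from the resulting 10,000 consequence samples. The sketch below (Python) shows only this control flow; the three helper functions are placeholders standing in for the models described in Section 4, not implementations of them.

import numpy as np

# Control flow of the scenario simulation: N ground-motion fields x M damage
# simulations per field = N*M consequence samples.
rng = np.random.default_rng(4)
N_GM, M_DMG = 500, 20

def simulate_ground_motion_field():
    return rng.lognormal(mean=np.log(0.15), sigma=0.6)    # placeholder PGA field

def simulate_damage(gm_field):
    return gm_field * rng.uniform(0.5, 1.5)                # placeholder damage

def estimate_losses(damage):
    return damage * 30e9                                    # placeholder EUR

losses = []
for _ in range(N_GM):
    gm_field = simulate_ground_motion_field()
    for _ in range(M_DMG):
        losses.append(estimate_losses(simulate_damage(gm_field)))

losses = np.array(losses)
p5, p50, p95 = np.percentile(losses, [5, 50, 95])
print(f"5th/50th/95th percentile losses: {p5:.2e} / {p50:.2e} / {p95:.2e} EUR")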
The spatial distribution of the building stock damage for the selected damage simulations is presented in Figure 9. Due to the resolution limitation, the damage state is not presented for each building separately. Rather, an average damage of the buildings for cells of 0.25 × 0.25 km is calculated and presented on the map. The damage maps presented in Figure 9a,b were obtained based on the ground-motion fields from Figure 7a,b and approximately correspond to the 5th and 50th percentile of economic losses, respectively. It can be observed that the damage maps based on the average building stock damage within cells are significantly different. In Figure 9a, the average damage exceeds DS3 for only a few cells, while the average damage presented in Figure 9b is close to DS4 for many cells in the north-east of Ljubljana downtown. It is interesting to note that the differences in damage and, consequently, the losses are due to the effect of both the between-and within-event ground-motion residuals. Due to the effect of the within-event residual, the highest values of PGA occur in different areas. In the case of the simulation corresponding to the 5th percentile of economic losses, PGA is the highest on the outskirts of Ljubljana (Figure 7a), where the building density is very low. However, in the simulation corresponding to median economic losses, the highest PGA can be observed in Ljubljana downtown (Figure 7b), where the exposure is much higher.
Due to the uncertainty of the ground-motion and damage models (see Sections 4.1 and 4.2), the numbers of buildings in different damage states are presented (Table 2) for three percentile values (5th, 50th (median), and 95th percentile). Most of the buildings are expected to be in DS2 (i.e., between 17,000 and 48,000, with a median of about 34,000), whereas the number of buildings in DS4 is expected to be between 318 and 24,000, with a median of about 4700. The median number of collapsed buildings was estimated to be 674. The direct economic losses were estimated between 1.7 and 22.3 billion EUR, with a median loss of 7.9 billion EUR. The median number of fatalities was estimated to be 338, while the fatalities for a 90% confidence level were observed in the interval between 19 and 1820. Approximately 70% of fatalities and 50% of economic losses are expected to occur in buildings built before 1965.
The very large difference between the 5th and 95th percentile is mainly the consequence of uncertainties in the ground-motion field. In order to quantify the effect of these uncertainties, the analysis presented in Section 5.1 was re-performed by setting the within- and between-event residuals (see Equation (1)) to zero. Consequently, one ground-motion field instead of 500 was taken into account. It consisted of the ground-motion intensities obtained with the median ground-motion model (Y i in Equation (1)). Note that the median ground-motion field is similar to that presented in Figure 8 but with slightly lower PGAs.
The results of the simulation are shown in Table 3. The median direct economic losses (6.6 billion EUR) are slightly lower than those obtained with consideration of the ground-motion uncertainties (Table 2), which is the expected result. The same trend can be observed for the number of fatalities and the number of buildings in damage states DS3 and DS4. On the other hand, the number of buildings in DS1 and DS2 increased, which was also expected. However, more importantly, the dispersion of the consequences measured by the 90% confidence interval is significantly decreased. The 5th and 95th percentile values are much closer to the median values, which indicates that the greatest source of uncertainty in the consequences is the ground-motion field uncertainty. However, because historical earthquakes cannot be defined precisely, the scenario-based seismic risk assessment should be performed with consideration of between- and within-event residuals.
Discussion and Conclusions
The investigated region is particularly vulnerable because of the high concentration of buildings and people. In an earthquake with Mw = 6.2 and the epicentre at a critical location, 165,000 buildings and 747,000 people would be exposed to detrimental ground-motion effects, assuming that such effects can occur at peak ground accelerations higher than 0.05 g. The consequences of the simulated earthquake event would be catastrophic for the Republic of Slovenia, due to the high expected number of fatalities and very high direct economic losses. Thus, the Republic of Slovenia is not resilient to strong earthquakes.
From an earthquake consequence point-of-view, Slovenia is very likely amongst the most exposed member states of the European Union. This conclusion is not the result of extremely strong historical earthquakes in Slovenia or an extremely fragile building stock, but rather the consequence of the limited resources of the Republic of Slovenia. A simple analysis of the effects of the considered earthquake indicates that a strong earthquake could cause direct losses of more than 15% of the GDP of the Republic of Slovenia, if we take into account the median economic loss of 7.9 billion EUR and the GDP from 2019 according to the data of the Statistical Office of the Republic of Slovenia (48 billion EUR). Such losses would without doubt be catastrophic and probably associated with a very long recovery period. For a better understanding, the median losses can be compared to those observed in Italy after the 2009 L'Aquila earthquake. According to Wikipedia, the L'Aquila earthquake caused losses of 16 billion USD; however, Italy's GDP in 2019, according to Google, amounted to 2191 billion USD. The losses of the catastrophic L'Aquila earthquake, which had a slightly larger magnitude than that estimated for the 1895 Ljubljana earthquake, amounted to 0.7% of the GDP of the Republic of Italy. We can thus conclude that the earthquake consequence in terms of economic losses relative to the GDP of the Republic of Slovenia would be about 20 times greater than that based on the GDP of the Republic of Italy. However, Italy has not yet succeeded in fully recovering the area hit by the 2009 L'Aquila earthquake. The recovery time in Slovenia would be significantly longer, not only due to limited financial resources but also due to the limited availability of other resources in Slovenia.
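The relative-loss comparison in the previous paragraph can be checked with a few lines of arithmetic, using only the figures quoted above (Python):

# Quick check of the relative-loss comparison using the figures quoted above.
slovenia_ratio = 7.9 / 48      # median loss / Slovenian GDP, 2019 (billion EUR)
italy_ratio = 16 / 2191        # L'Aquila loss / Italian GDP, 2019 (billion USD)

print(f"Slovenia: {slovenia_ratio:.1%} of GDP")        # about 16.5%
print(f"Italy:    {italy_ratio:.1%} of GDP")           # about 0.7%
print(f"Ratio:    {slovenia_ratio / italy_ratio:.0f}x")  # roughly the 20-fold difference noted above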
Fortunately, there have been no earthquakes in the Republic of Slovenia in the last seventeen years to cause such damage. However, due to the information provided by the seismic stress test of the building stock in Slovenia [19] and the recent catastrophic earthquakes in the vicinity of Slovenia (Italy, Croatia, Albania), the awareness about seismic risk is growing, and the community position for establishing preventive actions to enhance seismic resilience is strengthening too.
There are two fundamentally different approaches to addressing the problem. The first approach is to speculate and wait until the building stock is replaced naturally. Such an option is against sustainable development goal No. 11 of the United Nations, which foresees that we have to make cities and human settlements inclusive, safe, resilient, and sustainable. The unsustainability of inaction is also indicated by the seismic stress test of the building stock in Slovenia, which utilized time-based seismic risk assessment [19]. The seismic stress test outcome showed that about 90-230 thousand human lives in Slovenia are dangerously exposed to earthquakes in the long term. The second approach for addressing the problem is to act in agreement with the sustainable development goal, which means establishing and realising plans for enhancing community seismic resilience before a strong earthquake hits Slovenia. Triggering this process is very challenging, because the necessary condition for it is the community's awareness of a too high seismic risk.
Currently, Slovenia has no law regulating the responsibility of owners for the seismic risk, which is largely related to rare earthquake events with catastrophic consequences for the owners and community. Most of the owners expect that the complete solidarity of the community will be established after a strong earthquake and that this is the optimal option to address the problem. Such a belief is false. It is based on the assumption that the government will take care of the renovation or replacement of damaged buildings with help from others. Even if the complete solidarity of the community is established after a strong earthquake, it can be argued that the recovery time will be far too long, and societal wellbeing will be insufficient for decades in the area affected by the strong earthquake. Another issue is that the complete solidarity of the community after a strong earthquake is not equitable to the owners who invested in new construction or building strengthening prior to the earthquake due to their awareness of seismic risk. Thus, the concept of the complete solidarity of the community after an adverse event is not the solution but the main obstacle that prevents the establishment of a value system in relation to seismic risk prior to a major earthquake. Without the value system of seismic risk, the owners' interest in strengthening the seismic resistance of their buildings and consequently enhancing the community seismic resilience cannot be established.
The simulation of consequences of a rare earthquake event can thus present a key step to disseminate information about the outcomes of strong earthquakes within the community and establish risk awareness. We had the opportunity to present the results of this study to the Ministry for the Environment and Spatial Planning and in the Slovenian parliament at the 37th meeting of the Committee on Infrastructure, Environment, and Spatial Planning, and have disseminated it to other stakeholders and decision-makers in the past year. Based on the argument that we have to react prior to a strong earthquake, which some other invited experts also supported, the Committee on Infrastructure, Environment, and Spatial Planning of the National Assembly of the Republic of Slovenia unanimously decided that the Government of the Republic of Slovenia must prepare a resolution on enhancing the seismic resilience of Slovenia by the end of 2021. Hopefully, the resolution and other actions from the seismic stress test [19] suggested to the Government will provide enough information to realise that sustainable development in Slovenia is impossible without enhancing community seismic resilience.
It should be noted that the scenario-based seismic risk assessment methodology, as realised for the purpose of seismic stress test of building stock in Slovenia [19], can be further improved. The main limitations of the methodology are related to the lack of building stock data and the ground-motion model, which are affected by more parameters than those considered in the study. Thus, it was suggested that the Government establish a system for building stock data collection, which could in the future, for example, provide more data to develop a comprehensive building stock damage model. Because it is expected that the seismic stress test will be executed periodically, all other novelties developed between two consecutive tests will be automatically included in the seismic stress test. | 2021-09-28T01:09:36.447Z | 2021-07-08T00:00:00.000 | {
"year": 2021,
"sha1": "efc82237b04743c1f9f848da2594a46dfc311fc7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/14/7624/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d40b0707b8c6023970aaafb59a9265e24c1f49cd",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"History"
],
"extfieldsofstudy": [
"Geography"
]
} |
142722160 | pes2o/s2orc | v3-fos-license | “Becoming a Different Me”: Simone de Beauvoir on Freedom and Transatlantic Sexual Stereotypes
Europe’s relationship to America in general, and France’s in particular, centers around questions of freedom and dependency. This paper compares Europe’s search for independence with America’s ideology of freedom as articulated through today’s sexualised transatlantic rhetoric. I examine Simone de Beauvoir’s observations that differences in sexual relations and gender constructions are crucially linked to constitutional and cultural notions of liberty. Her portrayal of male disempowerment in the novel The Mandarins contrasts an intimidating American masculinity with its counterpart in Europe. European masculinity has been constructed as soft and peace loving, while its American counterpart is perceived as emboldened and tough. The ‘War on Terror’, as noted by Timothy Garton Ash and others, has reintroduced the sexual imagery into the verbal abuse hurled over the Atlantic. Europe’s tendency to define itself against America lends itself to revealing conclusions regarding de Beauvoir’s inability to dismantle cultural stereotypes about the ‘New World’ of possibility and abundance.
Europe's relationship to America in general calls into question notions of freedom and dependency. In particular France's search for cultural autonomy in an increasingly Americanised world converges with debates regarding the limits of liberty in a nation that constitutionally and culturally celebrates its freedom. Often overlooked in this discussion are the opinions expressed by the Left Bank existentialists, whose views on the constraints of freedom were shaped by their attitudes to America. From the time when Alexis de Tocqueville published De la Démocratie en Amérique in 1835, French suspicion and outright hostility towards America's military and economic supremacy has often translated into cultural elitism. Through closely reading passages from Simone de Beauvoir's fiction and non-fiction, this essay identifies two key players in Europe's post-war 'independence project' from America: the European male intellectual and the independent American woman. Both draw attention to the ways in which ideas and values are constructed and deconstructed and stereotypes of self and others created and circumvented on the transatlantic border. Signifying something beyond the geo-political boundary dividing two or more nations, Ilan Stavans (2000) reflects on the border as "first and foremost a mental state, an abyss, a cultural hallucination, a fabrication" (13).
The sense of risk taking and deliriousness implied by Stavans' ruminations are found in de Beauvoir's writing. Particularly the gap between freedom and constraint animates her prose. Her work, especially the autobiographical, reveals a commonality with the narratives of liberty that underscore American history, culture and ideology. Attentive to America's many pleasures, Simone de Beauvoir was also repelled by its failure to live up to its ideals. As an intellectual, however, she recognised this chasm as a productive zone of critical engagement and creativity. When opportunities to bridge the rift between dream and reality presented themselves in her own life, de Beauvoir declined, perhaps sensing that her work as an intellectual and a writer would suffer. Many of the idiosyncrasies that challenge and motivate the life she lived are strongly present and far from resolved in Les Mandarins (1954). To begin tracing the effect of this transatlantic maelstrom, real and imagined, on Simone de Beauvoir, this essay will focus primarily on this fictional account of the period from 1944 to the early 1950s (compressed into the years 1944-47). By exploring masculinity as a social construct, Les Mandarins depicts both the initial post-war joy and an increasing disillusionment with Europe's disempowerment.
Published five years after Le Deuxième sexe, de Beauvoir's iconic text on the historical and socio-cultural status of women, the specific post-war dilemma to which France was subjected provides Les Mandarins with a new textual framework against which de Beauvoir's ideas about gender relations gain a deeper significance. Specifically, she connects the childlike status of France, dependent on its American 'saviour', to the male impotence experienced by Dubreuilh and Henri in their relationship to women. Despite her distaste for American politics, Simone de Beauvoir recognises the independence of many American women by comparison to the French: "femme américaine, femme libre; ces mots me semblaient synonymes" (1954: 318) ["'American woman', 'free woman'-the words seemed synonymous" (1999: 330)], she writes in her travelogue L'Amérique au jour le jour (1948).
American women have been simultaneously depicted as a threat and the object of sexual desire in twentieth-century French literature. They were believed to be conspiring against the influence of the male intellect and the general powerlessness of post-war France. Often, their very femininity has been called into question, an act of sexual stereotyping that has left the French male intellectual vulnerable to similar assaults on their gender. Most dramatically, the emotional and intellectual independence Europe seeks away from America is reflected in the way Simone de Beauvoir oscillates between embracing as well as resisting the American dream of freedom as personified in the French cult of the American woman.
Dedicated to her lover in real life, the American author Nelson Algren, Les Mandarins invites the reader to draw comparisons between text and life and the characters Anne and Lewis who share similarities with de Beauvoir and Algren. For all her devotion to Algren, de Beauvoir was never prepared to sacrifice her public and private life with Sartre, the same way her alter ego Anne eventually abandons Lewis. "Même si Sartre n'avait pas existé," de Beauvoir writes in her autobiography La Force des choses (1963), "je ne me serais pas fixée à Chicago" (177) ["Even if Sartre hadn't existed, I would never have been able to live permanently in Chicago" (De Beauvoir 1968: 177)]. Perhaps more than most writers, Simone de Beauvoir felt herself strongly situated (not least by her readers) as belonging to a very specific milieu. Perhaps because the specific locality of her authorship prevented her from living elsewhere, her writing evokes a sense of remoteness as it attempts to tap into an unattainable elsewhere. The many volumes of autobiography testify to the desire to link the present moment to the past and the future, to connect herself with the plight of others.
At times, de Beauvoir's work shares great affinity and concerns with political activism. The fact that her writing often focuses on the author's difficulty in successfully crossing over from fiction into politics does not detract from the powerful impact of her writing on issues relating to peace, justice, and equality. However, this is not to say that a fundamental disbelief in the American dream of freedom as an ideology and cultural practice leads Simone de Beauvoir to offer something more substantial in place of cultural and sexual stereotypes. While offering insight into Europe's 'independence project' from America, Simone de Beauvoir's personal trajectory reveals this struggle to be related to and entangled with fantasies of unity with others and freedom of self-a dream that reinforces cultural and sexual stereotypes. During the Algerian war of independence 1954-62, a sense of personal failure towards Algerian women led her to conclude that personal happiness is inescapably bound up with national self-esteem. With regard to sexual stereotypes, de Beauvoir's pessimistic La Femme rompue is written during the May 1968 insurgency. Far from celebrating the new-found freedom of women and the oppressed, this novella exposes the hollowness of a woman's life after the breakup of a marriage. Finally, de Beauvoir's writing indulges in transatlantic cultural stereotypes, specifically, by depicting America as the 'New World' of opportunity to a European who wishes to leave her old self behind.
Keeping these historical events in mind, my goal in the pages that follow will be to develop a discussion of the transatlantic relations during the period of reconstruction and its contemporary manifestations in the 'war on terror'. Today, Europeans and Americans once again occupy opposite ends of the male/female spectrum. The 'war on terror' has reintroduced sexual imagery into the verbal abuse hurled over the Atlantic, falling back on post-war rhetoric between what was perceived to be a feminised, neutralised Europe and a tough, masculinised America. Simone de Beauvoir addresses this complex question of transatlantic gender construction against the backdrop of post-war politics and cultural rivalry in a way that highlights dominant socio-cultural narratives of transatlantic difference then and now.
Into the Jars of Literary History
France has often objected to an American superpower because of the implications for French culture and the French language. The fear of being stampeded by a horde of English speakers fuels French antagonism against America. However real the agony, it is also important to remember that, as Pierre Guerlain, Professor of American Studies at the University of Marne la Valée, observes: cultural resentment is the more acceptable face of economic resentment: it is much easier to reject a foreign country's culture than to admit that, in the economic rat race between nations, one has fallen behind. (Guerlain 1996: 136) America's entry into the Second World War in 1941 and subsequent aid after the war underscored French dependency on its transatlantic neighbour in a way that irked the sensibilities of the intellectual, in particular. America was no longer a distant dystopia to be feared, ridiculed, or admired. Paradoxically, America was resented both for its splendid isolation and its intervention. Its post-war aid to Europe was reluctantly accepted and scholars have subsequently questioned the significance of the American aid programme to Europe altogether (Judt 1967: 38).
Though its financial impact might be in doubt, the Marshall Plan had an undisputable effect on the psyche of the French people at the time.Unable to challenge America on a political and economic level, French artists and intellectuals propagated the belief that America was intellectually inferior to the Old World.This was not an exclusively foreign view.The most extreme expression of cultural inadequacy could be found in 1940s and 1950s America where the intellectual was stigmatised as a figure of mirth, at the best of times, or a communist to be feared.Either way, he was scorned on account of what was perceived to be an ambivalent masculinity.In his definition of an egghead, conservative anticommunist writer Louis Bromfield captures the sexualised anti-intellectual hostility in 1950s America by describing the intellectual as: A person of spurious intellectual pretensions, often a professor or the protege [sic] of a professor.Fundamentally superficial.Over-emotional and feminine in reactions to any problem.Supercilious and surfeited with conceit and contempt for the experience of more sound and able men...A self-conscious prig, so given to examining all sides of a question that he becomes thoroughly addled while remaining always in the same spot.An anaemic bleeding heart.(Cited in Cotkin 1999: 332) It is no wonder, perhaps, that Stalinism was thought by some Left Bank intellectuals to be an: Intellectually and culturally superior system that was destined to remain victorious against exploitative American capitalism and its supposedly trivial, manipulative, soulless, and impoverished 'non-culture'.(Berghahn 2001: 92) In his essay 'Situation de l'écrivain en 1947' Sartre's pessimistic view of the role of the intellectual in France is translated into a bitter rebuke of both America and the Soviet Union: Nous savons que le destin posthume de nos oeuvres ne dépendra ni de notre talent ni de nos efforts, mais des résultats du conflit futur ; dans l'hypothèse d'une victoire soviétique nous serons passés sous silence jusqu'à ce que nous soyons morts une seconde fois ; dans celle d'une victoire américaine, on mettra les meilleurs d'entre nous dans les bocaux de l'histoire littéraire et on ne les en sortira plus.(Sartre 1948 : 320) [The fate of our works, he writes "will depend neither upon our talents nor our efforts, but upon the results of [a] future conflict[s].In the event of a Soviet victory, we will be passed over in silence until we die a second time; in the event of an American victory, the best of us will be put into the jars of literary history and won't be taken out again." (Sartre 1988: 215)] Until then, Europe is fated to be the repository for American ideas: "Une idée peut descendre d'un pays élevé vers un pays à potentiel bas -par exemple d'Amérique en France-elle ne peut pas remonter" (Sartre 1948: 292) ["An idea can descend from a country with a high potential towards a country with a low potential-for example, from America to France-it cannot rise" (Sartre 1988: 197)].
Both the idea of potential as well as the arguably phallic imagery used to describe the gap between the transatlantic neighbours find their way into Simone de Beauvoir's novel Les Mandarins (1954).Taking as its focal point the choices that the intellectuals of the Left Bank faced between a capitalist American future and socialist Russia, Les Mandarins accounts for the dilemma of an intellectual circle only thinly disguised from the real one formed by de Beauvoir and Sartre.The dream of a socialist Europe independent from America is articulated through the author and politician Robert Dubreuilh in conversation with the less nostalgic Scriassine, a relatively minor character in the novel: "La reconstruction, c'est très joli : mais pas par n'importe quel moyen.Ils acceptent l'aide américaine ; un de ces jours, ils s'en mordront les doigts : de fil en aiguille la France va tomber sous la coupe du l'Amérique."Scriassine vida sa coupe de champagne et la reposa bruyamment sur la table : "Voila une prédiction bien optimiste !" Il enchaina d'une voix sérieuse : "Je n'aime pas l'Amérique ; je ne crois pas à la civilisation atlantique ; mais je souhaite l'hégémonie américaine parce que la question qui se pose aujourd'hui c'est celle de l'abondance : et seule l'Amérique peut nous la donner."L'abondance ?pour qui ?à quel prix ?" dit Dubreuilh.Il ajouta d'une voix indignée : "C ¸a sera joli le jour où nous serons colonisés par l'Amérique !" "Vous préférez que l'U.R.S.S. nous annexe ?" dit Scriassine.Il arrêta Dubreuilh d'un geste : "Je sais : vous rêvez d'une Europe unie, autonome, socialiste.Mais si elle refuse la protection des U.S.A., elle tombera fatalement dans les mains de Staline."(De Beauvoir 1955: I, 191-192) ["Reconstruction is all very well and good, but not when it's done without considering the means.They go on accepting American aid, but one of these days they're going to be sorry.One thing will lead to another, and eventually France will find herself completely under America's thumb."There are a couple of things to note regarding this intricate exchange between Dubreuilh and Scriassine.The most obvious of these is the utter disempowerment felt by the two French intellectuals.Like puppets, these men have no say in their own future.All they can do is talk-a sad indictment of the dwindling importance of the role of the intellectual both then and now-a situation for which America and its propensity for the mass-produced and the artificial is often blamed.Their views demonstrate the utter passivity of Europe, faced with the choice between America and Stalin.After its soldiers had liberated Europe from Fascism, America had become the guardian of Europe's fate.Ill-equipped for modern life, Scriassine implies above, France must stay close to its protector in order to survive.It is at this point in transatlantic history that representations of the American dream converge with discourses of the emancipated American woman and the notion of abundance.
Hannah Arendt (1965) provides an insightful commentary on how the European poor contributed to the materialism of the American dream through "ideals born out of poverty, as distinguished from those principles which had inspired the foundation of freedom" (139). Thus, she notes, the American dream, as the nineteenth and twentieth centuries under the impact of mass immigration came to understand it, was neither the dream of the American Revolution-the foundation of freedom-nor the dream of the French Revolution-the liberation of man; it was, unhappily, the dream of a 'promised land' where milk and honey flow. (Arendt 1965: 139) The dream of abundance as represented by America is starkly contrasted with the emptiness and hopelessness felt by the male intellectual in Les Mandarins. Arendt alludes to the femininity of this dream by describing America as a promised land where milk and honey flow, iconically represented by the Statue of Liberty who famously welcomes immigrants and visitors to her shores.
The causal link between the impoverished, powerless European male and the American, feminised dream of abundance underpins the narrative of Simone de Beauvoir's novel. Decades before the eroticised advertising imagery that, crudely speaking, connects sex with shopping, de Beauvoir constructs similar links between sexual prowess and the belief in an American dream of plenty. Interesting in this respect, however, is that her focus is on male desires and wish-fulfilment, rather than on female consumption. This is not to say, however, that Simone de Beauvoir herself was immune to this Jekyll and Hyde relationship, transposing her views on independence and freedom onto America as a way of coming to terms with her own limitations as an intellectual in France at the time.
Les Mandarins shifts between America and France, between the choice of a dream and a frugal reality, to signal the power struggle between men and women. Through the relationship between Dubreuilh and his wife Anne, France's political situation is mirrored most evocatively. Anne, a psychiatrist, falls in love with an American author, rendering Dubreuilh sexless and inadequate. The more virile and youthful American lover threatens the manhood of the much older Dubreuilh in a way that parallels how the youthfulness of America ousts old Europe. Furthermore, the lover Lewis is modelled on Simone de Beauvoir's real-life lover, the American author Nelson Algren. Possessing an over-supply of everything Dubreuilh lacks, Lewis symbolises American wealth. Stephen Spender (1974) accords wealth with masculinity and with America, and thus profoundly different from Europe: "European possessiveness is feminine. American wealth is rape, something torn out of the earth or from other men" (48).
Even the way Dubreuilh is protective about France, wishing to enter politics as a way of saving his country from the clutches of America, signals a fearful possessiveness. Stubbornly, he continues to reject any affiliation with America, even if he is as repulsed by the news of Soviet labour camps as his increasingly estranged friend Henri Perron. Through his wish to escape politics altogether and take up writing, Henri is depicted as a victim. An ex-Resistance fighter and head of the newspaper L'Espoir, Henri's neutrality vis-à-vis America and Russia demonstrates a general unmanliness. To further underscore this state, Henri is victimised by the women in his life. His wife Paula, with whom he has fallen out of love, threatens suicide, a young woman and her mother blackmail him, and Nadine coerces him into marriage through the birth of their child. Though an intellectual of some standing, Henri's refusal to take a political stand as the editor of a major paper reinforces the stereotypically feminine aspect of culture as something "indistinct and soft", as Michel de Certeau (1997) has observed, "a nonplace in which everything goes, in which 'anything' can circulate" (107).
Towards the end of the novel, Henri and Dubreuilh reconcile, united in their shared sense of inferiority as French intellectuals faced by the dominance of either Russia or America.Utterly defeated, Dubreuilh says: "Dès le début la partie s'est déroulée entre l'U.R.S.S. et les U.S.A. ; nous étions hors du coup.""Ce que vous disiez ne me semble pourtant pas si faux," dit Henri."L'Europe avait un rôle à jouer et la France en Europe.""C'était faux ; nous étions coincés.Enfin, rendez-vous compte," ajouta Dubreuilh d'une voix impatiente, "qu'est-ce que nous pesions ?Rien du tout."(De Beauvoir 1955: IV, 94-95) ["The game was between Russia and the United States from the start.We were completely out of it." "Nevertheless, what you used to say still doesn't seem so false to me", Henri said."That is, that Europe-and France in Europe-had a definitive part to play.""It was false; we were trapped.After all", Dubreuilh added in an impatient voice, "let's face it.What weight did we carry?None at all." (De Beauvoir 1993: 620)] A disempowered man, whether then or now, is often discursively linked to an emancipated woman, whose independence constitutes a threat to the man, or worse, makes a mockery of him.If the French male intellectual felt himself weightless and empty when confronted with his post-war destiny as depicted by de Beauvoir, the American female by comparison was perceived to be gaining in strength and influence.The sexual stereotypes that mark de Beauvoir's fiction provide the reader with a key to understanding Franco-American cultural relations of the early postwar decades.What makes the transatlantic relationship at the time so complex, however, is also the genuine appeal of the American dream of freedom in Europe, especially when symbolically exported in its most feminine and seductive form.
The Lolita Syndrome
Women and their bodies were at the core of European sentiments regarding America at the beginning of the twentieth century, anticipating some of the disturbing polemics relating to eugenics during the Second World War. As part of an exchange program between the Sorbonne and Columbia and Harvard University in 1910, Professor Gustave Lanson was convinced that the 'girl américaine' embodied the American race as unimpaired by the melting pot: A slim, athletic young girl with regular features, a pure profile, blond or brown hair, clear blue eyes, a laughing, frank, and firm gaze, lithe and confident gestures, nothing of the English stiffness, a mixture of strength and grace, a free, rich and joyful expansion of life: that is what I think of as the American 'girl' type. (Cited in Roger 2005: 191) In an engrossing chapter on the special role that the American woman played in the collective imagination of the French at the time, Philippe Roger argues that the 'girl' was considered by some the perfect type of the American race precisely because she was not yet a grown woman. Adult females, however, were also singled out.
The politician and author Charles Victor Crosnier de Varigny was the first to single out the American female as "the superior type of the race and environment". According to de Varigny and other propagandists like him, she was developmentally ahead of the male, "the (already present) future of the American man" (Roger 2005: 184). To Pierre Drieu la Rochelle, the French fascist author, the American woman embodies the beauty and skills of a "superior race", as David Carroll notes (1995: 165). However, no amount of admiration could diminish European fears of the influence of American women and their role in the Americanisation of Europe. The American woman's power, it was believed, came from her supreme emancipation from her husband. The possibility that something similar to the suffragette movement might arise in France put fear into the French intelligentsia. In La Femme aux États-Unis, de Varigny conveys some of this fear by proposing that the American female might consider wielding her power beyond her country's borders. He suggests, "the 'dame', not satisfied with having also conquered the New World, is well on the way to Americanizing the old one" (cited in Roger 2005: 186).
Deeply fascinated with the freedom of the American woman, Simone de Beauvoir fills her travelogue L'Amérique au jour le jour (1948) with astute observations regarding female-male relations. The travelogue is unfortunately a somewhat underrated work whose value one hopes might be recognised anew because it expertly and vividly captures an America of the past, but also because it addresses America's role today, as the only remaining superpower. The sentiments towards America in Europe today are similar to those expressed in the late 1940s. L'Amérique au jour le jour was published two years before Le Deuxième sexe and it is not unlikely that the observations she made during her travels in America influenced her views as both a feminist with the MLF (Mouvement de libération des femmes) and an existentialist. Indeed, as Deirdre Bair (1990) argues in her biography of Simone de Beauvoir, the author was always accommodating towards American feminists who visited Paris in the 1970s, "enjoying what she sometimes called 'transoceanic feminist reciprocity'" (545). However, reminiscent of Sartre's observation that "man is condemned to be free," de Beauvoir deplores what she perceived as the squandering of freedom in America.
Simone de Beauvoir observes the restlessness of the people she encounters there, the quest for excitement, the vast selection of consumerist choices as a means to mask the emptiness and boredom of life. Unlike other Western countries, an "official denial that individualism may have the[se] soul-destroying consequences" has taken hold of the American psyche, she argues (Brooks 2002: 127). To doubt American freedom is to be un-American. In stark contrast to her own childhood filled with learning, she observes in L'Amérique au jour le jour how the American "consomme sa jeunesse sur place faute de savoir que c'est l'homme qui est la mesure des choses et non celles-ci qui lui imposent a priori ses limites" (De Beauvoir 1954: 305) ["[he] spends his youth staying put, never knowing that it is man who is the measure of things, and not things that a priori impose limits on him" (De Beauvoir 1999: 313)]. In particular, she objects to the inertia amongst young Americans: "en Amérique il remplit tout juste l'espace qui lui a été réparti dans un monde extérieurs à lui" (De Beauvoir 1954: 305) ["[Young Americans] simply fill the space assigned to [them] in a world that's external to [them]" (De Beauvoir 1999: 313)]. Thus, while Simone de Beauvoir recognises the emancipation of American women, she also argues that this will not earn them respect by the opposite sex, largely because freedom as such is not valued in America the same way it is in France and Europe more generally.
The question of freedom and its significance on either side of the Atlantic often surfaces in discussions related to history and the past. An American propensity to annex other people's history blends seamlessly with misogyny in de Beauvoir's Les Mandarins. When visiting her American lover Lewis, Anne is struck by the condescending way France is discussed there: "leurs scrupules à notre égard ressemblaient à ceux qu'un homme peut éprouver devant une faible femme ou une bête passive" (De Beauvoir 1955: IV, 161) ["Their scruples concerning us were like those a man could feel towards a weak woman or a passive animal" (De Beauvoir 1993: 666)]. Though their sympathy clearly was for France, "déjà avec notre histoire ils fabriquaient des légendes de cire" (IV, 161) ["already they were making wax legends out of our history" (1993: 666)]. Echoing Sartre's concern that "the best of us will be put into the jars of literary history", Simone de Beauvoir's narrator voices a real concern of becoming patronised by America-as a woman and a citizen of France. Unless the American male earns his freedom and discovers his true potential, women will continue to be objectified and portrayed as idols, divinities and the objects of cults, she predicts. As for women in France, she observes that the strong woman no longer has a place in the collective imagination of the French.
In an essay titled Brigitte Bardot and the Lolita Syndrome, she notes how the femme fragile has replaced the femme fatale in popular culture in France. "The adult woman now inhabits the same world as the man," she notes, "but the childwoman moves in a universe which he cannot enter [thus] the age difference reestablishes between them the distance necessary to desire" (De Beauvoir 1972: 10). Having established themselves in the work force, women must be removed from the male sphere in other ways than professional in order to continue to be desired by men. As a Marxist, she attributes the sexism of the French male to capitalism and the economic competition between men and women, which of course comes from the other side of the Atlantic-thus America is indirectly to blame for the demise of chivalry in France. However, never failing to reflect on her own vantage point as a cultural critic and observer, Simone de Beauvoir's travelogue and other pieces of non-fiction invite the reader to consider her own predicament when passing judgement over sexual stereotypes and America's abundance of freedom on the one hand and Europe's dependency and entrapment on the other. In what remains of this essay, I shall read de Beauvoir's intellectual and creative development in the context of her own position as a woman and intellectual in France at war with Algeria. Finally, I will conclude with a few remarks regarding the continuing relevance of these debates in the 'war on terror' today.
Becoming a Different Me
Though Simone de Beauvoir's commitment against French atrocities in Algeria during the war of independence 1954-62 was strong, she also suffered from profound estrangement and alienation.Her memoirs operate as a place in which she can regain some of the authority and self-control lost as a result of feeling unable to make a difference as an intellectual.This is not to say that de Beauvoir did not act.On the contrary.At one point both Sartre and Simone de Beauvoir were labelled anti-French due to the strong stance they took against the French government in Les Temps modernes and other publications.However, part of the freedom that she had gained since a restrictive bourgeois Catholic childhood was lost and never to be had again.Never was she to experience the euphoria during the years of 1929-44 accounted for in the second instalment of her autobiography, La Force de l'âge, when Sartre and herself imagined themselves invincible: Le jeu, en déréalisant notre vie, achevait de nous convaincre qu'elle ne nous contenait pas.Nous n'appartenions à aucun lieu, aucun pays, aucune classe, aucune profession.(De Beauvoir 1960 : 26) [By releasing the pressure of reality upon our lives, fantasy convinced us that life itself had no hold upon us.We belonged to no place or country, no class, profession, or generation.]Unable to relinquish her citizenship, Simone de Beauvoir experiences painful identification with the victims whose suffering she is unable to alleviate: "I needed my self-esteem to go on living; but I was seeing myself through the eyes of women who had been raped twenty times, of men with broken bones, of crazed children: a Frenchwoman" (cited in Lawson 2002: 125).This is a bleak period in de Beauvoir's life in that she discovers that personal happiness is inescapably and unhappily bound up with national self-esteem.With this realisation comes disbelief in the abstract notion of freedom, not to mention autonomy of one's self altogether and the existence of one's past as de Beauvoir has her narrator exclaim in the novella La femme rompue written a few years after the end of the Algerian war: "Je croyais savoir qui j'étais, qui il était: et soudain je ne nous reconnais plus, ni lui ni moi" (De Beauvoir 1967: 191) ["I know the whole of my past by heart and all at once I no longer know anything about it" (De Beauvoir 1969: 169)].With the loss of the past comes ontological and epistemological despair: "Je n'ai rien d'autre que mon passé.Mais il n'est plus bonheur ni fierté: une énigme, une angoisse.Je voudrais lui arracher sa vérité.Mais peut-on se fier à sa mémoire?" (1967: 212) ["I possess nothing other than my past.But it is no longer pride nor happiness-a riddle, a source of bitter distress.I should like to force it to tell the truth.But can one trust one's memory?" (1969: 185)]. 
Considering the emphasis de Beauvoir places on contemplation and reflection, it is not surprising that her central objection to America is that it does not encourage introspection, understanding and personal growth because it knows and valorises only the present time: -l'avenir collectif est dans les mains d'une classe privilégiée, la pullman class à qui est réservée la joie d'entreprendre et de créer sur de grandes échelles ; les autres ne savent pas s'inventer, dans le monde d'acier dont ils sont les rouages, un avenir singulier : ils n'ont ni projet, ni passion, ni nostalgie, ni espoir qui les engage au delà du présent ; ils ne connaissent que la répétition indéfinie du cycle des saisons et des heures. Mais coupé du passé et de l'avenir, le présent n'a plus de substance ; il n'est rien ; c'est un pur maintenant vide. Et parce qu'il est vide il ne peut s'affirmer que par des moyens extérieurs : il faut qu'il soit 'excitant'. (De Beauvoir 1954 : 259) [The collective future is in the hands of a privileged class, the Pullman class, which has a monopoly on the joy of starting ventures and creating on a grand scale. The others don't know how to invent a unique future for themselves in the steel world in which they are merely cogs in the machine. They have no project, passion, nostalgia, or hope that engages them beyond the present; they know only the indefinite repetition of the cycle of hours and seasons. But cut off from the past and future, the present no longer has any substance; it's nothing, just a pure, empty now. And because it is empty, it can be affirmed only through external means: it must be 'exciting'. (De Beauvoir 1999: 266)] Simone de Beauvoir's fiction and non-fiction have something in common with the narratives of liberty that underscore American history, culture and ideology. Fractured self-esteem is sutured through language and narrative. The bulk of her autobiographical work alone testifies to a moving belief in representation shared by both France and America. Naturally, all governments govern and celebrate their leadership through representation, but perhaps none more fervently (at least in the West) than America, as Anne Norton (1993) implies: Brought forward by a declaration, constituted in writing, Americans place themselves under the authority of language. The declaration spoke the nation into being. The constitution stands not as an artefact, or as mere law, but as the written representation of America. (9) Simone de Beauvoir also speaks herself into being and as such her oeuvre is a celebration of language against silence. In all her work, she calls attention to the gap between the ideal and reality, not to chastise the American people, although there is an element of that, but also to motivate the reader to carry out his or her personal aspirations for freedom and independence in the process of reading and living.
this decision, her autobiographical and fictional work already speaks volumes of how the author combined her private life with her work.
To an existentialist, especially, to whom there is no fixed self, only a constantly becoming self, the power of narrative to shape and reshape us must not be underestimated.The struggle for cultural integrity in France is reflected in Simone de Beauvoir's struggle for autonomy as a woman to whom the political becomes the personal as testified by her extreme identification with rape victims in Algeria and a philosopher to whom power to change political reality is limited.Her self-exploration and attempts to liberate herself echo the intellectual and emotional independence Europe sought away from America.Interestingly, this wish for autonomous self-creation finds resonance in the American concept of the 'self-made' man or woman.Consider the following dialogue between Dubreuilh and Henri towards the end of Les Mandarins: "La réalité n'est pas figée", dit Dubreuilh."Elle a un avenir, des possibilités.Seulement pour agir sur elle et même pour la penser, il faut s'installer en elle et non s'amuser à des petits rêves.""Vous savez, je ne rêve guère", dit Henri."Quand on dit : 'Les choses sont moches' ou comme moi l'an dernier : 'Tout est mal', c'est qu'on rêve en douce à un bien absolu."Il regarda Henri dans les yeux : "On ne s'en rend pas compte, mais il faut une drôle d'arrogance pour placer ses rêves au-dessus de tout.Si on était modeste, on comprendrait qu'il y a d'un côté la réalité, et de l'autre rien.Je ne connais pas de pire erreur que de préférer le vide au plein", ajouta-t-il.(De Beauvoir 1955: IV, 216-217) ["Reality isn't frozen", Dubreuilh said."It has possibilities, a future.But to act on it-and even to think about it-you've got to get inside it and stop playing around with little dreams.""You know, I have very few dreams", Henri said."When someone says, 'Things are rotten', or, as I was saying last year, 'Everything is evil', it can mean only that he's dreaming secretly of some absolute good."He looked Henri in the eyes."We don't always realize it, but it takes a hell of a lot of arrogance to place your dreams above everything else.When you're modest, you begin to understand that, on the one hand, there's reality, and on the other, nothing.And I know of no worse error than preferring emptiness to fullness", he added.(De Beauvoir 1993: 704-705)] The contemporary context of European and American relations reiterates the debates of Simone de Beauvoir's time.In particular, the sense of entrapment and inferiority voiced by Dubreuilh and Henri in Les Mandarins has resurfaced more recently.Andrew Ross (1989) suggests that the construction of masculinity goes hand in hand with the international balance of patriarchal power.Comparing cultural icons such as the American Rambo and the English Boy George, Ross asserts that the latter "bespeaks the softer European contours of masculinity in the twilight of its power".While American masculinity is "emboldened and threatening", its European counterpart is "sentimental and peace loving" (Ross 1989: 165).In the light of recent events, Ross' comments must be considered prescient of the transatlantic rhetoric today.The 'war on terror', Timothy Garton Ash (2005) notes, has reintroduced sexual stereotypes into the transatlantic debate: If anti-American Europeans see 'the Americans' as bullying cowboys, anti-European Americans see 'the Europeans' as limp-wristed pansies.The American is a virile, heterosexual male; the European is female, impotent, or castrated.Militarily, Europeans can't get it up.(After all, they have fewer than twenty 'heavy lift' 
transport planes, compared with the United States' more than two hundred.)Following a lecture I gave in Boston, an aged American tottered to the microphone to inquire why Europe 'lacks animal vigor.'The word 'eunuchs' is, I discovered, used in the form of 'EU-nuchs.'The sexual imagery even creeps into a more sophisticated account of America-European differences, that of Robert Kagan of the Carnegie Endowment for Peace titled 'Power and Weakness'.'Americans are from Mars', wrote Kagan approvingly, 'and Europeans are from Venus'-echoing that famous book about relationships between men and women, Men are from Mars, Women are from Venus.(Garton Ash 2005: 123) In the US, feminist critics such as Susan Faludi (2007) have argued persuasively that the terrorist attacks of September 11 have been used to denigrate women, in particular, as helpless victims who need rescuing by manly male heroes (14).There is not sufficient space here to dwell deeper on the contemporary expression of these stereotypes, except to say that they have by no means diminished and, finally, to ask whether de Beauvoir's writing interrupts or supports this transatlantic view.Once again, it is the notion of freedom that underpins this question.Simone de Beauvoir recalls in the first pages of L'Amérique au jour le jour her sentiments regarding the prospect of encountering the country of her imagination and the desire for nothing less than the freedom to begin something new by virtue of being born again: Il me semble que je vais sortir de ma vie ; je ne sais si ce sera à travers la colère ou l'espoir, mais quelque chose va se dévoiler, un monde si plein, si riche et si imprévu que je connaîtrai l'extraordinaire aventure de devenir moi-même une autre.(De Beauvoir 1954: 11-12) [I feel I'm leaving my life behind.I don't know if it will be through anger or hope, but something is going to be revealed-a world so full, so rich, and so unexpected that I'll have the extraordinary adventure of becoming a different me.(De Beauvoir 1999: 3)] Rather than dismantling cultural stereotypes, de Beauvoir reinforced them by portraying America as the 'New World' of possibility and abundance to which she could abandon her old self.America simultaneously promised freedom of self and the freedom from self.Simone de Beauvoir would rather have been born again in America than have witnessed the pillars of her life crumble under the weight of too much history.
Scriassine emptied his glass and banged it down on the table. "Now that's what I call an optimistic prediction!" In a serious voice, he continued rapidly, "I don't like America and I don't believe in the Atlantic community. But I sincerely hope America predominates, because the important question in this day and age is one of abundance. And only America can give it to us." "Abundance?" Dubreuilh said. "For whom? And at what price? That would be a pretty picture, to be colonized by America!" he added indignantly. "Would you rather Russia annexed us?" Scriassine asked. He stopped Dubreuilh with a sharp gesture. "I know. You're dreaming of a united, autonomous, socialist Europe. But if Europe refuses the protection of the United States, she'll inevitably fall into the hands of Stalin." (De Beauvoir 1993: 144)] | 2019-05-03T13:06:13.512Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "415070bde624340eae4c94031a81e8435e3c7522",
"oa_license": "CCBY",
"oa_url": "http://newreadings.cardiffuniversitypress.org/articles/10.18573/newreadings.69/galley/67/download/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b75fdd898dadf67b9b7e82a64a7febc1193de613",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Sociology"
]
} |
245868085 | pes2o/s2orc | v3-fos-license | Foodborne Toxigenic Agents Investigated in Central Italy: An Overview of a Three-Year Experience (2018–2020)
Foodborne diseases (FBDs) represent a worldwide public health issue, given their spreadability and the difficulty of tracing the sources of contamination. This report summarises the incidence of foodborne pathogens and toxins found in food, environmental and clinical samples collected in relation to diagnosed or suspected FBD cases and submitted between 2018 and 2020 to the Food Microbiology Unit of the Istituto Zooprofilattico Sperimentale del Lazio e della Toscana (IZSLT). Data collected from 70 FBD investigations were analysed: 24.3% of them started with an FBD diagnosis, whereas a further 41.4% involved clinical diagnoses based on general symptomatology. In total, 5.6% of the 340 food samples analysed were positive for the presence of a bacterial pathogen, its toxins or both. Among the positive samples, more than half involved meat-derived products. Our data reveal the probable impact of the COVID-19 pandemic on the number of FBD investigations conducted. In spite of the serious impact of FBDs on human health and the economy, the investigation of many foodborne outbreaks fails to identify the source of infection. This indicates a need for the competent authorities to continue to develop and implement a more fully integrated health network.
Introduction
Foodborne diseases (FBDs) represent a worldwide public health issue and are caused by consuming food contaminated with pathogens or their toxins. Sources of contamination are often difficult to trace and these diseases are considered easily spreadable.
Foodborne pathogens consist of bacteria, viruses, fungi and parasites that contaminate food at the different phases of the production chain, during transport, preparation and handling, up to the final consumer. Some of these bacteria and fungi can produce toxins that outlive the producing organism; thus, the absence of the pathogen itself cannot exclude contamination or guarantee the wholesomeness of the food. Many of these bacteria and their toxins are thermostable and cannot be destroyed by typical food preparation methods. As a result, the assessment of food safety becomes even more complex [1,2]. Furthermore, factors such as the growth of susceptible populations (e.g., elderly and immunocompromised patients) and changing consumption patterns (e.g., the growing demand for ready-to-eat food) have increased the risk of foodborne illnesses [3][4][5].
Only five EU member states reported data related to HAV or other unspecified hepatitis viruses, with a total of 135 cases in 2019. Compared with 2018, the number of notified Hepatitis A cases (including other, unspecified hepatitis viruses) decreased in the EU, mainly due to reduced reporting. Hepatitis A outbreaks were characterised by a high percentage of cases requiring hospitalisation (73.3%).
The growing public health focus on microbial toxins and bacterial agents is due to a set of determinants, including better overall surveillance and an increase in the number of notified foodborne outbreaks, among them those involving bacterial toxins [7].
The numbers of zoonosis outbreaks reported to EFSA and ECDC from Italy in 2018 and 2019 were almost identical (134 and 135, respectively) [10,11]. In particular, in 2018, 25% of outbreaks were caused by Salmonella spp., 13.3% by bacterial toxins, 11.2% by Campylobacter spp. and 0.7% by Shiga toxin-producing E. coli (STEC). In 2019, Salmonella spp. outbreaks decreased to 13.3% of overall outbreaks, bacterial toxins maintained the same level, while E. coli (STEC) rose to 1.5%. S. aureus and C. perfringens caused the same number of outbreaks in 2018 and 2019, C. botulinum outbreaks decreased from 4.5% to 1.5% and B. cereus grew slightly (from 0.7% to 3%).
The number of outbreaks caused by HAV was stable over the two years considered (3%). In 2019, outbreaks caused by other hepatitis viruses were also reported (1.5%). In contrast to the European trend, the percentage of Italian outbreaks caused by Norovirus was lower, decreasing from 6.7% in 2018 to 4.4% in 2019. In 2019, only one outbreak related to L. monocytogenes was reported, involving 12 cases and causing two deaths. No yersiniosis-related outbreaks were notified from Italy in 2018 and 2019.
Multi-level monitoring, including control of contamination along the food chain from distribution to consumers, human disease surveillance and epidemiological investigation of epidemics and sporadic cases, is still an important source of information for authorities to assess the success of current food safety management systems and to identify new hazards [12].
Outbreak surveillance primarily aims to stop the outbreak by identifying the offending products and withdrawing them from the market. Investigations can aim to identify all the involved cases and find out the unsafe practices that led to the outbreak [12].
Furthermore, most pathogens that can be transmitted by food may also be transmitted through other pathways, such as water or direct human and animal contact. Therefore, source attribution is needed to quantify the proportion of cases that are actually foodborne and to identify the food vehicles most frequently associated with illness [12,13].
Italy entrusts scientific and institutional research on FBDs to the Istituto Superiore di Sanità (ISS), which coordinates surveillance of FBDs at the national level through the Department of Food Safety, Nutrition and Veterinary Public Health and the Department of Infectious Diseases. These departments host numerous National Contact Points designated by the Ministry of Health for the collection of surveillance data, and include different laboratories and reference centres for many microbiological agents of disease. Other activities on specific pathogens or pathologies are distributed between National Reference Centres (NRCs) and National Reference Laboratories (NRLs). The NRCs represent an operational tool of high and proven competence at the service of the state in the sectors of animal health, food hygiene and zootechnical hygiene; among the reference laboratories hosted at the ISS is the NRL for Foodborne Viruses. These organisations act as a reference in the conduct of epidemiological investigations, especially as they provide periodic reports and manage platforms onto which positive samples are uploaded together with the relevant metadata and the results of the typing tests. For pathogens with no assigned reference laboratory, such as toxin-producing B. cereus and C. perfringens, obtaining surveillance data and reference samples for comparison becomes more complicated.
In Italy, notification of infectious diseases is regulated by law DM 15 December 1990, which categorises notifiable diseases into five classes according to their importance and impact on public health. FBD pathogens and bacterial toxins belong to classes I, II and IV:
- botulism is included among class I diseases, for which an immediate report is required (within 12 h);
- listeriosis, hepatitis and non-typhoid salmonellosis are class II diseases because of their high frequency and require a report within 48 h of observation of the case (including suspected cases);
- other FBDs are included in class IV; in this case, the individual medical report must be followed by reporting from the local health unit only if an epidemic outbreak occurs.
Data are reported to the Ministry of Health Directorate General Health Prevention Transmissible Diseases and International Prophylaxis Unit. The notification flow is distinct for sporadic cases (class II) and outbreaks (class IV). Human cases of botulism and trichinellosis are subject to mandatory notification within 12 h.
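As a purely illustrative aid, the following Python sketch encodes the class-to-deadline mapping described above for a handful of the diseases mentioned; the disease keys, dictionary names and helper function are our own illustrative choices, not an official registry or API.

```python
from datetime import timedelta
from typing import Optional

# Illustrative mapping of some notifiable FBDs to their class under DM 15 December 1990
# (only diseases mentioned in the text; not an exhaustive or official list).
NOTIFICATION_CLASS = {
    "botulism": "I",                     # immediate report, within 12 h
    "trichinellosis": "I",               # immediate report, within 12 h
    "listeriosis": "II",                 # report within 48 h (also for suspected cases)
    "hepatitis": "II",
    "non_typhoid_salmonellosis": "II",
    "other_fbd": "IV",                   # further reporting only if an outbreak occurs
}

# Assumed reporting deadlines per class, as described in the text.
REPORT_DEADLINE = {
    "I": timedelta(hours=12),
    "II": timedelta(hours=48),
    "IV": None,  # individual medical report; local health unit reports only for outbreaks
}

def reporting_deadline(disease: str) -> Optional[timedelta]:
    """Return the maximum reporting delay for a disease, or None when
    notification beyond the individual report depends on an outbreak (class IV)."""
    return REPORT_DEADLINE[NOTIFICATION_CLASS[disease]]

if __name__ == "__main__":
    for disease in ("botulism", "listeriosis", "other_fbd"):
        print(disease, NOTIFICATION_CLASS[disease], reporting_deadline(disease))
```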
With the exception of the above-mentioned notification system, no national intervention protocol for the management of foodborne outbreaks is available; its management is therefore currently entrusted to the individual regional authorities.
As a consequence, the system suffers from extreme fragmentation and a lack of standardisation: each region has established a variable number (0, 1 or 2) of Regional Reference Laboratories for the management of FBDs (only nine of which are formally recognised), with related operational guidelines that are not always available. Furthermore, other public institutions occasionally collect the same kind of data in the same area, and the lack of a shared database makes the picture of the toxic infections currently occurring in each region even more partial.
Our laboratory, which has historically operated as an official control laboratory for food microbiological analyses, was also recognised as Regional Centre for Enteropathogenic Bacteria in 1996 and was later appointed as an actor in the Regional Plan for the Surveillance and Management of Infectious Emergencies during the Extraordinary Jubilee 2015-2016, concerned with the surveillance and notification of foodborne disease, with particular regard to Salmonella from human, animal and environmental sources. It collects bacterial isolates obtained through the microbiological control of food matrices sampled by the competent authorities of the Lazio and Tuscany regions and by the producers themselves for self-control analyses. Furthermore, it receives human isolates for serological typing from public and private hospitals and laboratories across the entire area of competence. In order to overcome the above-mentioned critical points, the Lazio region has recently appointed IZSLT as the Regional Reference Laboratory for FBDs and foodborne human diseases (Deliberation of Lazio Region, n. G06447 of 28 May 2021). The deliberation established a regional group for the management of FBDs, which will lay down guidance for FBD surveillance. It should also end the overlapping of activities and promote the centralisation of epidemiological and analytical data, which must then be communicated to the Regional Service for the Monitoring of Infectious Diseases at INMI L. Spallanzani (SERESMI).
In this study, we collected the experience of the Istituto Zooprofilattico Sperimentale del Lazio e della Toscana "M. Aleandri" (IZSLT) in identifying pathogens and bacterial toxins in food and the environment, prior to the official appointment of our laboratory. The investigated samples were collected by the competent authorities, mainly in the Lazio and Tuscany regions, in relation to all the FBD reports notified to our laboratory for the three-year period 2018-2020.
Results
We collected data related to 70 FBD investigations: 28 cases in 2018, 29 in 2019 and 13 in 2020. Only 17 of the 70 investigations (24.3%) took place following an official FBD notification by the regional or national health system (hospital, emergency room (ER) or primary care physician), with the patient having a clinical diagnosis of FBD. Among the remaining cases, 29 investigations (41.4%) started with a direct consumer report after a symptomatic event (e.g., gastrointestinal symptoms, fever, headache), with or without a clinical diagnosis, and 24 (34.3%) followed a notification from the competent authorities, with no information on the patient's condition provided to our laboratory during the investigation (Table S1).
The overall number of investigations undertaken was similar for 2018 and 2019, but dropped drastically in 2020 (Figure 1). Overall, 70% of the investigations were conducted in small and large food retailers and in catering services (school, company and hospital canteens, and restaurants), and 28.6% in private homes (Tables 1 and S1). A total of 340 samples were analysed. For each investigated case, a variable number of samples was collected (ranging between 1 and 67 samples, mean 4.84, median 2). In total, 49.1% of the food matrices belonged to the "ready-to-eat" (RTE) category and 47.6% were non-RTE, with the remaining 3.2% obtained from the immediate environment (sponge-wiped surfaces). Nineteen (5.6%) of the 340 samples analysed, relating to 17 investigations, were positive for the presence of food-related pathogens. Of these, twelve (63.1%) were non-RTE foods and five (26.3%) were RTE foods. Among the positive samples, more than half (11) were meat-derived products (five pork, four poultry and two bovine), of which only three were RTE foods.
In total, 36.5% of the food samples analysed were leftovers of meals that the available data showed had been consumed by the patient (6.3% positive); 14.1% were leftovers of meals probably consumed by the patients but without documented evidence (4.2% positive); 20.3% were sampled from the same package or batch as the suspected meal (no positives found); 22.9% were samples collected from the retailers and restaurants indicated by the patients (9% positive); 2.9% were unrelated samples collected still sealed in patients' private homes (no positives found); and 3.2% were environmental samples (18.2% positive) (Table 2).
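As a minimal sketch of how category-level positivity percentages such as those above can be tabulated, the following Python function groups sample records by category and computes positivity rates; the small example dataset is hypothetical, and only the final sanity check uses figures actually reported here (19 positives out of 340 samples, about 5.6%).

```python
from collections import defaultdict

def positivity_by_category(samples):
    """Summarise sample count, positives and positivity rate (%) per category.

    `samples` is an iterable of (category, is_positive) pairs.
    """
    counts = defaultdict(lambda: [0, 0])  # category -> [n_samples, n_positives]
    for category, is_positive in samples:
        counts[category][0] += 1
        counts[category][1] += int(is_positive)
    return {
        category: {"n": n, "positives": pos, "rate_pct": round(100 * pos / n, 1)}
        for category, (n, pos) in counts.items()
    }

# Hypothetical toy data (not the study records): three documented meal leftovers,
# one of which tested positive, and two retailer samples, both negative.
example = [
    ("meal leftover (documented)", True),
    ("meal leftover (documented)", False),
    ("meal leftover (documented)", False),
    ("retailer/restaurant sample", False),
    ("retailer/restaurant sample", False),
]
print(positivity_by_category(example))

# Sanity check against the overall figures reported in the text:
print(round(100 * 19 / 340, 1))  # -> 5.6
```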
The time elapsed between the onset of symptoms and sample collection by the competent authority was measured for 28 investigations, with a mean value of 8.3 days (min 1, max 26 days).
The commonest pathogen detected was L. monocytogenes (six of the 19 positives), of which two were environmental samples corresponding to investigation numbers (IN) 58 and 70 of Table S1, two were from non-RTE meat products (IN53, IN63), one was an RTE meat product (IN58), and one was a non-RTE vegetable product (IN59) (Table 2); of these six positives, five were reported in 2020. One RTE poultry sample was positive for Salmonella Infantis (IN44), while a non-RTE bovine meat sample was positive for E. coli (STEC) (IN34). One case of Y. enterocolitica was identified in an RTE pre-cooked pasta (IN39). Two cooked homemade preparations were positive for Clostridium perfringens (IN28, IN51); in one of these, the food was also contaminated with Bacillus cereus. Coagulase-positive Staphylococcus was detected twice, once in RTE fresh ovine cheese (IN14) and once in non-RTE chicken (IN66). Staphylococcal enterotoxins were detected in two cooked dishes prepared by catering services at two separate localities (IN23, IN26). With regard to viral pathogens, Norovirus and HAV were both detected in the same sample of fresh mussels (IN7), while HEV was detected twice in non-RTE pork sausages from one producer and linked back to three affected patients (IN42). In one case, RTE roasted pork was infested with larvae of the fly genus Lucilia (Calliphoridae) (IN65). In total, 36.8% of the positive samples contained foodborne toxigenic agents. Three diagnosed cases of botulism were notified. The first (IN25) involved a hospitalised patient who, in the days prior to his illness, had consumed locally peddled food. Several foods, including cheese, bovine hamburgers, pork salamis and vegetables in oil, were sampled, but none were positive for Clostridium botulinum. Another case (IN68) concerned the recall of a particular batch of canned tuna in sunflower oil, originating in an outbreak of botulism amongst 37 people on the island of Sicily who had all visited the same canteen. During this investigation, none of the 67 cans of tuna analysed were positive for C. botulinum toxins, and the recall was accordingly revoked. The third case involved one person being hospitalised (IN69). All of the food samples analysed, including homemade tuna kept in oil, tested negative for C. botulinum toxins.
Of the two HAV investigations (IN4, IN7), only one was linked to contaminated farmed mussels (IN7), and of the two HEV cases (IN3, IN42), only one was probably caused by contamination of both fresh and seasoned pork products sampled in a butcher's shop (IN42). Regarding the cases of diagnosed salmonellosis (IN19, IN32, IN56), the investigations yielded no positive results in any of the three.
Of the seven cases of listeriosis investigated, only three yielded positive samples. In these cases, all occurring in 2020, the pathogen was isolated both from the patient and from the suspected food, and was additionally typed by NGS (whole genome sequencing, WGS) to compare the strains and perform a deeper epidemiological investigation. In the first case (IN63), this molecular approach allowed us to exclude any link between the strain isolated from the patient and the strain isolated from fresh minced beef sampled in the patient's home. The origin of the patient's contamination remains unknown to date. The second case (IN58) involved a pregnant woman and, after investigation at the market she frequented, an RTE pork product was found positive for the same serotype and sequence type (ST). This case is still under study to evaluate relatedness by comparing the genomic data. In a third case (IN70), a nosocomial outbreak of listeriosis was detected in the Lazio region. The source of contamination was attributed to the meat slicer of the hospital kitchen using a WGS approach, as reported by Russini et al. [14].
When a specific clinical diagnosis was not available (53 out of 70 investigations), the search for the responsible pathogens focused on the most common causes of FBD, according to the characteristics of the food matrix involved (Table S1). The principal pathogenic agents investigated were the enterotoxin-producing bacteria coagulase-positive Staphylococcus and B. cereus, staphylococcal enterotoxins, Salmonella spp. and L. monocytogenes. In 12 cases, a positive result was found in the collected food samples, and the most frequently identified targets were staphylococcal enterotoxins (2), coagulase-positive Staphylococcus (2), C. perfringens (2) and L. monocytogenes (2) (Table S1). In eight cases, the positive matrix was a meat sample (belonging to both the RTE and non-RTE categories).
In one important investigation, which occurred in a police school canteen (IN38), 130 persons were involved, and no specific diagnosis was formulated for any of the symptomatic people. Samples of the served meals and environmental samples were collected. The analyses covered a wide range of pathogens, but no positive samples were found (Table S1).
Discussion
For several years, the need for studies concerning FBDs has been globally recognised. Estimating the burden of FBDs is necessary in order to reach a global risk ranking for policy making. For this purpose, the harmonisation of laboratory methodologies, epidemiological and biological data collection and sharing are useful processes for comparing the estimates between diseases, countries and regions [15].
This report describes the experience of a bi-regional (Lazio and Tuscany) centre for the analysis of FBD investigations, which occasionally received cases that occurred in other Italian regions, over the three-year period from 2018 to 2020.
Considering this time frame, we noticed a general decline in cases during 2020, with 13 investigations, compared with 28-29 in the previous years. In particular, 2020 was characterised by a collapse of investigations carried out in restaurants, cafeterias and holiday dinners, both in terms of the number of food sampling sites and the number of food preparation sites. In parallel, food samples prepared or consumed at home increased. Given the peculiar situation due to the COVID-19 pandemic during the whole of 2020, consumers' habits inevitably changed [16]. The harsh lockdown restrictions caused the closure of restaurants, schools and company canteens, probably leading to a decrease in cases involving this kind of service. As a matter of fact, a preliminary study performed in Spain showed a marked decrease in the number of reported foodborne infections during the first semester of 2020, compared with the same period of 2019, specifically for Campylobacter and Salmonella infections [17]. Accordingly, during the first Italian lockdown (February-May 2020) we did not receive any reports or samples linked to a suspected or overt case of FBD.
A probable collateral effect of the pandemic could be the avoidance of emergency rooms or medical care in the case of FBDs without acute symptoms. The reduction of tourist flows may also have played an important role in reducing infectious diseases, which collapsed during the first months of lockdown [17].
The impact of the COVID-19 epidemic on public health is not limited to this aspect. Many countries reported that in 2020, and specifically during the first half of the year, fewer non-COVID-19 patients were hospitalised. In a large university hospital in Parma, Italy, admissions for non-communicable diseases (NCDs) in 2020 vs. 2019 dropped by approximately one third [18]. Denmark reported a significant decrease in hospital admissions during national lockdowns compared with the pre-pandemic baseline period [19]. According to a survey of healthcare staff carried out in the U.S., several routine services, including investigations related to other communicable diseases, foodborne outbreaks, public health surveillance and evaluation, and non-communicable disease responses, were no longer available or were heavily curtailed, owing to the burden of the COVID-19 response. Foodborne outbreaks were specifically mentioned, highlighting a reduced ability to conduct surveillance, outbreak investigations or inspections [20]. To our knowledge, similar reports concerning the effects of COVID-19 on FBDs in Italy have not been disseminated, but we can hypothesise that the drop in notifications may have had the same cause.
Even before the COVID-19 pandemic, a general underestimation of the population burden of FBDs was suspected by official bodies; in most countries, an illness is recorded only when a patient consults a doctor or a nurse and a sample is requested for laboratory testing [21]. Only in severe cases requiring hospitalisation does the clinical investigation of the causes proceed, with detection of the pathogens and toxins involved. In addition, where symptoms are ignored or misattributed, consumer complaints are infrequent.
In the U.S., only about half of the foodborne illness outbreaks have a recorded contributing agent, and data describing the outbreaks are limited and not sufficient for further analysis [22]. A retrospective telephone survey conducted in Italy between 2008 and 2009 reports that only 39.5% of persons with a self-reported episode of gastrointestinal illness contacted a physician; only 0.3% of the total submitted a specimen for laboratory investigation [23].
Another consequence of the limited sensitivity of surveillance systems is the difficulty of detecting a decrease in the occurrence of a specific foodborne illness resulting from the success of control programmes against the responsible pathogen. This issue becomes more relevant when the numbers of observed illnesses are small or underestimated [24]. Retrospective telephone surveys have been proposed as a cost-effective tool to detect changing disease incidence, but they cannot estimate the contribution of specific pathogens [25]. Expert elicitation, on the other hand, can be used to determine exposure routes for key hazards such as foodborne bacteria, but it can suffer from substantial uncertainty [26].
Out of a total of 340 analysed samples, only 19 were positive for the presence of a foodborne pathogen, with 26.3% belonging to the RTE category. The reasons for the failure to detect pathogens may vary. From a clinical point of view, the impossibility of carrying out a correct source attribution could be due to the lack of clinical indications that could direct the selection of the pathogens to be searched for. In our experience, only 24.3% of investigations started with a diagnosis of FBD and, with the exception of botulism, no clinical diagnosis related to toxigenic agents was found. In general, these kinds of diseases are not deeply investigated because the symptoms are often generic or are managed with a purely symptomatic clinical approach, by both the patients and the medical system (e.g., emergency room). As a consequence, an inaccurate anamnesis and diagnosis may fail to correctly direct the search for pathogens, which may thus escape detection.
Robust detection of pathogens is also favoured when the time elapsed between the moment of possible intoxication and sampling by the official authorities is short. In this case, it is easier for the patient to remember the meals consumed and to correctly direct the epidemiological investigation towards the most probable sources of contamination. In our case series, the average period was 8.3 days, with a maximum of 26 days. This value comprises the incubation period (the time from infection with the pathogen to the onset of symptoms), the time from the onset of symptoms to the consultation of a physician or a competent authority, and the time necessary to organise the sampling procedures after notification of the event. It is reasonable to think that in many cases it was impossible to sample a residue of the suspected meal, since it is not admissible to sample food after its expiration date (in many cases, packaged food is considered expired a few days after opening), preventing investigation of the most probable source of the pathogen. Other complications arise when the contaminated food is unavailable because it has been completely consumed or sold. These limits are often overcome by sampling food from another package of the same batch (20.3% of our case series, with no positives found), or from samples collected from the market retailers and restaurants indicated by the patients (22.9% of our case series, with a 9% positivity rate), making any assumption about the origin of the involved pathogen uncertain.
A technical issue to consider relates to some intrinsic limits of sampling protocols and detection procedures. Given the uneven distribution of biological contaminants in food, the chance of detection may depend on the type of sampling (e.g., whether the food sample is composed of more than one increment or the test portion is unique) [27]. During the evaluated years, in two positive cases, the parameters analysed did not exceed the legal limits. Therefore, it is possible that the involved patients had been exposed to portions of the meal that were more contaminated than the portion sampled by the authorities during the investigation.
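The effect of composite sampling on detection can be illustrated with a simple probability calculation; the sketch below is a deliberately simplified model (independent, equally sized increments drawn from a food in which a given fraction of portions is contaminated) and is not taken from the sampling standards cited above.

```python
# Simplified illustration of why composite sampling with several increments
# improves the chance of detecting an unevenly distributed contaminant.
def detection_probability(p: float, n_increments: int) -> float:
    """Probability that at least one of n increments contains the contaminant."""
    return 1.0 - (1.0 - p) ** n_increments

for n in (1, 3, 5, 10):
    print(f"increments={n:2d}  P(detect | 10% contaminated portions) = "
          f"{detection_probability(0.10, n):.2f}")
# With a single test portion the detection probability equals the contaminated
# fraction (0.10); with 10 increments it rises to roughly 0.65.
```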
When clinical indications and anamnesis are absent, food and residual samples are subjected to a vast panel of tests, and it may happen that the available quantities are lower than those indicated by the reference standards for certain microbiological targets. If the food sample is unavailable, or available only in limited quantities, it would be advisable, depending on the investigated pathogen, to carry out environmental sampling; however, this type of procedure is almost never performed (we could examine only 11 environmental samples, related to nine of the 70 investigations).
Commission Regulation (EC) No 2073/2005 defines "ready-to-eat" (RTE) food as food intended for direct human consumption, and numerous factors may enhance the hazard level of RTE foods. Surfaces may be reservoirs for bacterial contamination, which could increase the risk of bacterial aggregation and dissemination during RTE preparation and manipulation (e.g., slicing or packaging), exposing the final consumer to foodborne pathogens. This hygienic issue is worsened by the ability of some bacteria to form biofilms, which facilitate their survival on surfaces and protect them from drying and cleaning procedures [14,28]. RTE foods prepared by hand are often implicated in foodborne illness outbreaks, as this direct contact may lead to an increased incidence of contamination with potential foodborne pathogens [28,29].
In our report, 63.1% of positive samples belonged to the non-RTE food category and 26.3% were RTE foods. Although the majority of samples were not considered RTE, 52.6% of the positive samples were residues or probable residues of consumed food (two RTE and eight non-RTE samples). Of the eight non-RTE foods, seven were prepared with fresh meat (beef, pork and chicken) and one with a soft cheese, and all were manipulated in private houses or canteens. In addition to the already mentioned factors that may increase the hazard in the production of RTE food, the risk of FBDs appears to increase in fresh food that is not properly cooked or handled after cooking. Therefore, the incorrect treatment, handling and consumption of food of any category (not necessarily RTE) seems to be a crucial aspect that should not be underestimated. Collecting environmental samples, particularly for some pathogens, could be crucial for detecting the role of secondary contamination in FBDs [30,31].
The NGS approach is currently an irreplaceable and increasingly used tool in the study of outbreaks, in epidemiological investigations and in source attribution studies [9,32,33]. This approach can also help in the genomic characterisation of toxigenic pathogen strains, evaluating the presence of genes associated with toxin production, assessing genetic variants and monitoring the spread of antimicrobial resistance [34-36]. We applied this methodology only when isolates of both food and human origin were available (three cases of listeriosis). In one case, the analyses gave discordant results, since the strain of L. monocytogenes isolated from the food found in the patient's house was not compatible with the one isolated from the patient. In another case, the NGS approach was decisive in tracing the source of contamination [14].
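As an illustration of how WGS-based typing supports this kind of comparison, the following sketch contrasts two isolates by counting allele differences over shared cgMLST-style loci; the locus names, allele numbers and the 7-allele clustering threshold are hypothetical placeholders rather than the pipeline actually used in these investigations.

```python
# Simplified illustration (not the authors' actual pipeline) of comparing WGS-derived
# typing data to judge whether a food and a clinical isolate are likely related.
# Profiles are dictionaries of locus -> allele number, as produced by cgMLST-style
# allele calling; the 7-allele threshold is used here only as an assumption.
def allele_distance(profile_a: dict, profile_b: dict) -> int:
    """Number of shared loci at which the two profiles carry different alleles."""
    shared = set(profile_a) & set(profile_b)
    return sum(1 for locus in shared if profile_a[locus] != profile_b[locus])

clinical_isolate = {"lmo0001": 3, "lmo0002": 17, "lmo0003": 5, "lmo0004": 9}   # toy data
food_isolate     = {"lmo0001": 3, "lmo0002": 17, "lmo0003": 5, "lmo0004": 22}  # toy data

d = allele_distance(clinical_isolate, food_isolate)
print(f"allele distance = {d}; {'possible cluster' if d <= 7 else 'unrelated'}")
```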
This aspect underlines the usefulness of simultaneously managing strains of animal, food and environmental origin (derived from the official control activities of our laboratory) and of human origin (collected by virtue of the regional appointments obtained over time), according to a One-Health perspective. Furthermore, this work highlights the central importance of collaboration among the different institutions (IZS, ISS, and the regional and local competent authorities responsible for the surveillance of FBDs), even if further interventions for the standardisation of procedures and the intensification of data exchange networks are required. In this context, our report could provide a further dissemination channel, representing a source of analytical information useful for more geolocalised studies on FBDs, in addition to the official transmission routes that present data in a general and aggregated manner.
This study brings to attention, even if secondary to the main objective, the critical issues related to the study and research of FBDs in the era of COVID-19, suggesting how the pandemic could also have affected the frequency and identification of these diseases.
In conclusion, our report could contribute to estimates of the burden of FBDs, to harmonise methodologies and to share data. Our experience, even if conducted with few exceptions at a regional level, can highlight some critical and operational aspects (such as the regulatory gap, absence of standard guidelines for the management and organisation of interventions during epidemiological investigations) that can also be generalised to wider levels of FBD management.
Conclusions
The official data regarding notifications of cases and outbreaks of FBDs in Italy may suffer from a general underestimation compared with European trends. The present work reports the results of three years of investigations related to episodes of FBDs reported to the Food Microbiology Unit of the IZSLT, before its official appointment as a Regional Reference Laboratory. The criticalities highlighted were linked to the fragmented management of FBDs, which still characterises various regions of our territory, to the lack of integrated coordination at the national level, as well as to a disparity of treatment and knowledge with respect to the different foodborne pathogens. Therefore, the same issues could be relevant in other Italian regions.
From a technical point of view, it is necessary to implement the use of certain methods with a high discriminating power (i.e., NGS-based) and to develop integrated management platforms for sharing metadata and analytical data. In conclusion, a better assessment of FBD outbreaks can give better information to the risk managers, leaving more space for more accurate monitoring of the impact of the implementation of their decisions.
Epidemiological and Clinical Data Collection
Data concerning the 70 FBD investigations were collected for the three-year period 2018-2020 during the routine food control laboratory activities of our institute, which covers the Lazio and Tuscany regions and occasionally received cases that occurred in other Italian regions. The main sources of information were the official sampling reports issued by the local competent authority and, when available, the food questionnaires administered to patients during hospitalisation, which were a further source of data and the starting point for investigations. The records considered for this study include both investigations started by the competent authority after the notification of an FBD (listeriosis, salmonellosis, etc.) by the hospital that treated the patient and isolated the pathogens, and investigations started after personal complaints from consumers who declared they had experienced symptoms after consuming food, in hospitals, emergency rooms or at home, without a clinical diagnosis.
Microbiological and Molecular Analyses
The local competent authorities carried out environmental and food sampling, the latter consisting of remains of meals in private homes and restaurants, or foodstuff from the same production batch sold in the supermarkets frequented by the patients. If the suspected batch of food was no longer available (fully consumed, expired or withdrawn), the authority proceeded with sampling a different batch of the same product or a similar product produced by the same company.
Detection and identification of pathogens and toxins from food and environmental samples were performed by the Food Microbiology Unit of IZSLT through internal procedures, proprietary protocols and the standard tests defined by the international and European standards described in Table 3. When required, both molecular detection and culture-based microbiological methods were performed. The bacterial isolates of human origin were obtained by the Microbiology Units of the hospitals involved in the investigations and transferred to the Regional Reference Centre for Pathogenic Enterobacteria (CREP) at the Food Microbiology Unit of IZSLT for serological and molecular typing.
Data Analyses
The samples were first divided by collection point into three groups: "private house", concerning food consumed at home or collected inside patients' or consumers' homes; "restaurant", including restaurants, catering services, hospital kitchens, workplaces and school canteens; and "retail store", concerning all types of retail, e.g., local outdoor markets, food shops, butchers and supermarkets. Subsequently, the foodstuffs were divided into two subgroups depending on the place of production: prepared meals manipulated at home (cooked or homemade) or in the restaurant, and those purchased in retail stores and not modified or further processed. Food directly sampled from retail stores and public services was not divided into subgroups.
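A minimal sketch of this grouping logic is given below; the record field names and the mapping table are hypothetical and serve only to illustrate the classification rules described above.

```python
# Hypothetical sketch of the sample-grouping logic; field names ("collection_point",
# "production_place") are illustrative only and not taken from the paper's dataset.
COLLECTION_GROUPS = {
    "home": "private house",
    "restaurant": "restaurant", "catering": "restaurant", "hospital kitchen": "restaurant",
    "workplace canteen": "restaurant", "school canteen": "restaurant",
    "outdoor market": "retail store", "food shop": "retail store",
    "butchery": "retail store", "supermarket": "retail store",
}

def classify_sample(record: dict) -> tuple:
    """Return (collection group, production subgroup) for one sample record."""
    group = COLLECTION_GROUPS.get(record["collection_point"], "retail store")
    if group == "retail store":
        subgroup = None  # retail/public-service samples were not split into subgroups
    elif record.get("production_place") in ("home", "restaurant"):
        subgroup = "prepared meal"
    else:
        subgroup = "purchased, not further processed"
    return group, subgroup

print(classify_sample({"collection_point": "home", "production_place": "home"}))
```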
For the classification of food as ready-to-eat (RTE) or not (non-RTE), we followed Commission Regulation (EC) No 2073/2005, which defines RTE food as "food intended by the producer or the manufacturer for direct human consumption without the need for cooking or other processing effective to eliminate or reduce to an acceptable level microorganisms of concern". We used as a standard the list of RTE food categories identified by the European Food Safety Authority (EFSA) [10]. | 2022-01-12T16:06:17.579Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "0e82568348653a4f031147c4d08bad88dc1a1efe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/14/1/40/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d60f877c3a868ca5ea522f3209dbdb13f69664c9",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51805052 | pes2o/s2orc | v3-fos-license | CLASSIFICATION OF POSITIVE SOLUTIONS TO A LANE-EMDEN TYPE INTEGRAL SYSTEM WITH NEGATIVE EXPONENTS
In this paper, we classify the positive solutions to the following Lane-Emden type integral system with negative exponents: u(x) = ∫_{R^n} |x − y|^τ u^{−p}(y) v^{−q}(y) dy, x ∈ R^n,
1. Introduction. In this article, we examine the regularity, classification and nonexistence of positive solutions to the following system of integral equations having coupled nonlinearities with negative exponents: where n ≥ 1 is an integer and τ, p, q, r, s > 0. Our motivation for studying this integral system stems from the fact that it arises naturally in the study of reversed variants of the Hardy-Littlewood-Sobolev (HLS) inequalities and in curvature problems from conformal geometry. For instance, a special case of system (1) is the integral system which is closely related to the Euler-Lagrange equation for the extremals to a reversed HLS inequality introduced by the first author and Zhu [8] (see also [22]). In particular, for τ = α − n > 0 and p = q = −(n + α)/(n − α), the authors employed the method of moving spheres to show that every positive measurable solution of system (2) has the form where x_0 ∈ R^n is some point and a_1, a_2, d > 0 are constants. This classification result is a crucial step in finding the best constant in the reversed HLS inequality.
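For reference, based on the form quoted in the abstract, system (1) presumably reads as follows; the second equation, with exponents r and s, is inferred from the symmetry of the system and should be treated as an assumption rather than a verbatim reproduction.

```latex
\begin{cases}
u(x) = \displaystyle\int_{\mathbb{R}^n} |x-y|^{\tau}\, u^{-p}(y)\, v^{-q}(y)\, dy, & x \in \mathbb{R}^n,\\[2mm]
v(x) = \displaystyle\int_{\mathbb{R}^n} |x-y|^{\tau}\, u^{-r}(y)\, v^{-s}(y)\, dy, & x \in \mathbb{R}^n.
\end{cases}
```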
For more on HLS inequalities and their reversed versions on, say, compact Riemannian manifolds, and their applications to curvature problems, we refer the reader to [7,10] and the references therein. Soon after, the author in [12] considered system (2) and obtained necessary conditions for the existence of positive solutions as well as necessary and sufficient conditions for the scale invariance of the system with respect to certain energy functionals. When u ≡ v and p = q, system (2) becomes the single integral equation which was introduced by Li in [18]. Interestingly, when n = 3, τ = 1 and p = 7, this equation is closely related to a fourth order conformal covariant operator on compact 3-manifolds. Xu [24] later proved that equation (4) has a positive solution of class C^1 if and only if p = 1 + 2n/τ. Therefore the earlier result of Li [18] indicates that p = 1 + 2n/τ and the positive solution u has the form where x_0 ∈ R^n is some point and a, d > 0 are constants. Although there are some similarities with the classical HLS integral equations, the results for equation (4) are somewhat surprising. More precisely, if α ∈ (0, n) and p > 0, Chen, Li and Ou in [2] and [4] proved the following for the classical HLS integral equation: (a) Every positive regular solution of (6) in the critical case has the form (5) but with τ = −(n − α) < 0. (b) The only non-negative regular solution of (6) is u ≡ 0 whenever the subcritical condition holds, i.e., p < (n + α)/(n − α). Unlike with equation (4), however, there do exist positive solutions in the supercritical case p > (n + α)/(n − α), at least when α is an even integer (see [14,15,20]). Furthermore, analogous results, albeit mostly partial ones, are known for the HLS system, i.e., when p, q < 0 and τ = α − n < 0 in (2). Namely, the questions on the classification, existence and non-existence of positive solutions remain open for the most part. We refer the reader to the papers [1,3,5,11,13,19] and the references therein for more details.
If p > 1 and α ∈ (0, n), it is worth mentioning the equivalence between equation (6) and the partial differential equation Here we mean that the two equations are equivalent if, assuming solutions belong to the appropriate function space, a positive solution of one equation, multiplied by a suitable positive constant if necessary, is also a positive solution of the other, and vice versa (cf. [2,23]). Therefore, the results for the integral equation also hold for the equivalent differential equation, and this illustrates one advantage of studying the integral equations. In view of this, one can obviously consider the corresponding differential equations to system (1). Indeed, several papers have addressed the regularity, existence and non-existence of positive solutions to such differential systems with negative exponents on bounded smooth domains (see [9,25]). We should also mention several past works that examine system (1) but with τ = α − n < 0, p = s ≤ −1, q = r ≤ −1, and its corresponding differential system (sometimes called the Schrödinger type elliptic system). For example, Li and Ma [21] studied the symmetry and uniqueness of its positive ground state solutions. Inspired by this, the first author of this paper examined the same integral system and further obtained classification results when p + q = −(n + α)/(n − α) and non-existence results when p + q > −(n + α)/(n − α) (see [6]). Using a topological approach, Li and the third author [14] recently obtained existence results for a family of elliptic systems which included the Schrödinger type system.
Motivated by the previous results for the above integral equations and systems, our aim in this paper is to generate similar classification and non-existence results for system (1). We achieve this by utilizing similar tools developed in the earlier works described above. In particular, we exploit an integral version of the method of moving spheres (see [16,17,18]). In the process, however, we must address and overcome several issues contributed by the coupled components and negative exponents in the problem.
We now state our main results, which are reminiscent of the ones for equation (4). We begin with a theorem on the regularity of measurable solutions. In this paper, measurable solutions refer to solutions which are Lebesgue measurable and non-infinity.
Theorem 1.1. Let n ≥ 1, τ, p, q, r, s > 0 and let (u, v) be a pair of positive measurable solutions to system (1). Then u, v are smooth, i.e., u, v belong to C^∞(R^n). This theorem indicates that we can always assume hereafter that solutions of system (1) are smooth. Then the following classification result holds for positive solutions.
Theorem 1.2. Suppose n ≥ 1 and τ, p, q, r, s > 0 satisfy the critical condition. If (u, v) is a pair of positive smooth solutions to system (1), then u, v have the form where x_0 ∈ R^n is some point and c_1, c_2, d > 0 are constants.
We now address the non-existence of solutions for the integral system. Although the next lemma plays an important role in our proofs of Theorems 1.1 and 1.2, we state it here because it also yields a non-existence result. The proof of that result is straightforward, and so we state and prove it right after. This is not so much the case for the lemma itself, so we delay its proof until the next section.
Lemma 1.3. For n ≥ 1 and τ, p, q, r, s > 0, if (u, v) is a pair of positive measurable solutions to (1), then As noted above, we can easily deduce a non-existence result from Lemma 1.3. To see this, let (u, v) be a pair of positive measurable solutions to system (1). Without loss of generality, we can assume that p + q = max{p + q, r + s}.
Then p + q > 1 + n/τ is clearly a necessary condition for the existence of positive solutions. On the contrary, i.e., if n + τ − τ(p + q) ≥ 0, Lemma 1.3 (iii) would then imply that for a.e. x ∈ R^n. Essentially, we have proved that Theorem 1.4. System (1) admits no positive smooth solution whenever Of course, one may ask if this non-existence result is optimal. It turns out that this is not the case, and we adapt the method from our proof of Theorem 1.2 to get an improved version.
Then system (1) admits no positive smooth solution.
The remaining parts of this paper are arranged in the following manner. In Section 2, we provide the proof of Lemma 1.3, followed by the proof of Theorem 1.1. Section 3 contains the proof of Theorem 1.2 and Section 4 contains the proof of Theorem 1.5.
Regularity.
In this section, we establish the regularity of positive solutions to (1), but first we give the proof of Lemma 1.3. Throughout the paper, Proof of Lemma 1.3. The proof is similar to that of Lemma 5.1 in Li [18], but we include it here for completeness. Since u and v are non-infinity measurable functions, we have meas{x ∈ R^n : u(x) < ∞} > 0, and meas{x ∈ R^n : v(x) < ∞} > 0.
Moreover, there exist R > 1 and some measurable set E such that Similarly, for any x ∈ R^n, we have This proves the left-hand side of the inequalities in (iii).
On the other hand, for some Combining the left-hand side inequalities in (iii) and the above, we get (i).
For |x| ≥ 1, and Taking these with (i) and using the Lebesgue dominated convergence theorem, we get We obtain (ii). Combining (i) and (ii) with (1), we get the right-hand side of the inequality in (iii).
Proof of Theorem 1.1. For an arbitrary choice of R > 0, we can split u into two parts: Applying Lemma 1.3 (i), J_2(x) can be differentiated under the integral for |x| < R, so J_2 ∈ C^∞(B_R). On the other hand, by Lemma 1.3 (iii), we have u^{−p}v^{−q} ∈ L^∞(B_{2R}) and so J_1 is at least Hölder continuous in B_R. Since R > 0 is arbitrary, u is at least Hölder continuous in R^n, and along a similar process, we can deduce that v is at least Hölder continuous in R^n. So in view of Lemma 1.3 (iii), we have that u^{−p}v^{−q} is Hölder continuous in B_{2R} and the regularity of J_1 is further improved. By standard bootstrap arguments, we conclude that u ∈ C^∞(R^n). Likewise, a similar argument shows that v ∈ C^∞(R^n). This completes the proof of the theorem.
3. Classification of positive solutions in the critical case. In this section, we complete the proof of Theorem 1.2. To this end, we employ the Kelvin transform and the method of moving spheres of Li and Zhu [16], which was later improved by Li [18] (see also Dou and Zhu [8]). For x ∈ R^n and λ > 0, we define where is the Kelvin transform of ξ with respect to B_λ(x). Set Σ_{x,λ} = R^n \ B_λ(x).
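For completeness, the inversion of a point ξ with respect to the ball B_λ(x) is the standard map recalled below; the accompanying weight on u is written with an unspecified exponent μ, since the precise τ-dependent normalisation used in the paper is not reproduced here and should be read as an assumption.

```latex
\xi^{x,\lambda} \;=\; x + \frac{\lambda^{2}\,(\xi - x)}{|\xi - x|^{2}}, \qquad \xi \neq x,
\qquad\text{and}\qquad
u_{x,\lambda}(\xi) \;=\; \Big(\frac{\lambda}{|\xi - x|}\Big)^{\mu}\, u\big(\xi^{x,\lambda}\big),
```

with μ a τ-dependent normalising exponent.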
Lemma 3.1. Let τ > 0 and p, q, r, s > 0. If (u, v) is a pair of positive solutions to system (1), then, for any x ∈ R^n, where where Proof. The lemma can be verified via direct calculations, but we sketch the proof for the reader's convenience. Write |η − x|^2 with x, η ∈ R^n and λ > 0. The n-space forms in the y and η variables are related by For simplicity, write |ξ^{x,λ} − y|^τ u^{−p}(y) v^{−q}(y) dy.
Proof. Without loss of generality, we may assume x = 0 and write u_λ = u_{0,λ}. Since τ > 0 and u ∈ C^1(R^n) is a positive function, there exists r_0 ∈ (0, 1) such that From Lemma 1.3 (iii), we get For small λ_0 ∈ (0, r_0) and any 0 < λ < λ_0, using (iii) of Lemma 1.3 and (12), Combining the above with (12), we arrive at with x = 0 and λ_0(x) = λ_0. Likewise, we can use similar arguments to arrive at This completes the proof of the lemma.
The next lemma shows that solutions must have the conformal invariance property provided that the sphere stops.
Step 2. There is an ε* < ε_1, such that for any ε By Step 1, there exists δ_1 > 0 such that and the function is smooth in the relevant region; then, based on the positivity of the kernel, we have where δ_2 > 0 is some constant independent of ε*. It is easy to see that for some constant C > 0 (independent of ε*), and λ Using the mean value theorem, we have, for λ ≤ |ξ| ≤ λ + 1, that Step 2 is established, and this completes the proof of Lemma 3.3.
The following two key calculus lemmas are needed to carry out the final steps of the proof of Theorem 1.2. Proof of Theorem 1.2. First, we show that there exists some x_0 ∈ R^n such that λ(x_0) < ∞. Then we show that this implies λ(x) is finite for all x ∈ R^n. We prove the former statement by contradiction. That is, assume otherwise, i.e., if λ(x) = ∞ for all x ∈ R^n, then for ξ ∈ R^n, u_{x,λ}(ξ) ≥ u(ξ), and v_{x,λ}(ξ) ≥ v(ξ), for all |ξ − x| > λ.
By Lemma 3.4, we conclude that u = v = constant, which cannot satisfy (1). Now, for a fixed x ∈ R^n, it follows from the definition of λ(x) that, for some c_1, c_2 > 0, d > 0 and ξ_0 ∈ R^n.
"year": 2016,
"sha1": "3bfbdac1247a2a7295a1829663a734f5912d7f30",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/dcds.2016094",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "adb2fff5e2f20ba5245bf8d99b839f51dc01e0b0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221377625 | pes2o/s2orc | v3-fos-license | Resveratrol inhibits hypertrophic scars formation by activating autophagy via the miR-4654/Rheb axis
Hypertrophic scars (HSs) are a type of pathological scar induced by surgery, burn injuries or trauma during the healing process. Due to their high recurrence rates and strong invasive properties, HSs have become a major clinical issue. Resveratrol has been identified as a potential agent to suppress scar formation; however, the underlying mechanism of action remains unclear. Therefore, the present study aimed to investigate the effect of resveratrol on HS-derived fibroblasts (HSFBs) in vitro. An MTT assay was performed to evaluate cell viability following resveratrol treatment. Western blot and RT-qPCR analyses were used to identify the expression levels of, and the relationships among, autophagic markers, miR-4654 and resveratrol treatment. Finally, stable GFP-LC3-expressing HSFBs were generated to further assess the effect of resveratrol. The results revealed that resveratrol significantly induced cell death in a dose-dependent manner and induced autophagy by downregulating the expression levels of Rheb in HSFBs. Notably, microRNA-4654 (miR-4654) was significantly decreased in the HSFBs and was re-upregulated by resveratrol treatment in a dose-dependent manner. Through bioinformatic analysis and a luciferase assay, miR-4654 was identified to directly target Rheb. Transfection studies showed that miR-4654 negatively correlated with Rheb expression, suggesting that the autophagic process may be altered by the miR-4654/Rheb axis under the control of resveratrol. In conclusion, the results of the present study suggested that resveratrol may promote autophagy by upregulating miR-4654, which in turn may suppress Rheb expression via directly binding to the 3′-untranslated region of Rheb. These findings provide novel insight into the development of potential therapeutic targets for HSs.
Introduction
Hypertrophic scars (HSs) comprise a type of pathological scar induced by surgery, burn injuries or trauma during the healing process (1). HSs most commonly occur in the outer layers of the skin and arthroses, resulting in damage to the individual's appearance and severe dysfunction, including itchiness, susceptibility to infection, pain and disfigurement (2,3). It is well established that HSs are a type of tissue fibrosis caused by the accumulation of the extracellular matrix, exhibiting a robust inflammatory response and fibroblast proliferation (2,4). The therapeutic strategies for HSs include surgery, radiotherapy and combination therapy (1); however, the therapeutic efficacy of these treatments remains unsatisfactory. For example, previous studies investigated the efficacy of laser therapy combined with silicone gel sheeting and steroid injection, but found no significant effect on HS treatment (5,6). Furthermore, HSs have been reported to be regulated by a number of complicated regulatory mechanisms, including inflammation (7) and the immune response (8). Nonetheless, to the best of our knowledge, the mechanisms behind the pathophysiological processes of HSs remain unknown (9). Therefore, it remains an urgent requirement to investigate the potential molecular events of HSs to identify novel therapeutic targets.
Autophagy is an evolutionarily highly conserved catabolic pathway that maintains the cellular energy balance through recycling cytoplasmic proteins and controlling the quality of organelles (10,11); it also provides efficient protection for cells under various stress conditions (12,13). Previous studies have reported that autophagy is involved in numerous types of disease, including cancer, lung disease and neurodegenerative diseases (14-17). In addition, the involvement of autophagy in the formation of HSs has been demonstrated under starvation stress (18). Thus, these findings provide a rationale for researchers to further investigate the relationship between HSs and autophagy.
MicroRNAs (miRNA/miR) are endogenous, conserved non-coding RNA molecules of 19-22 nucleotides in length (19). miRNAs serve as critical regulators of target genes through multiple mechanisms, including inhibiting translation, promoting mRNA degradation and repressing protein synthesis (20-22). In addition, an abundance of evidence has identified that miRNAs are involved in numerous metabolic reactions, including cell proliferation, differentiation, autophagy and apoptosis, by directly binding to the 3'-untranslated region (3'-UTR) of their target mRNAs (23-26). To date, a small number of studies have suggested a potential link between miRNAs and HSs; for example, the expression levels of miR-21 were reported to be upregulated in HS-derived fibroblasts (HSFBs) and inhibiting miR-21 expression significantly slowed the formation of HSs in vivo (27). Conversely, the expression levels of another miRNA, miR-137, were markedly downregulated in HSs, which induced the proliferation and metastasis of fibroblasts (28). Thus, as multiple miRNAs have been reported to be aberrantly expressed during HS formation, it is worthwhile to investigate potential miRNA candidates for HS therapy.
Resveratrol was discovered to be highly effective in the treatment of numerous types of tumor, including colon cancer, liver cancer and neuroendocrine tumors, as well as inflammatory reactions (29,30). Several previous studies have reported that resveratrol is involved in miRNA-induced autophagy during the treatment of multiple types of disease, such as chronic diabetic nephropathy (31), Alzheimer's disease (32) and cancer (33). For HSs, resveratrol has been identified as a potential agent to suppress scar formation (34). Interestingly, Zeng et al (35) identified that resveratrol significantly inhibited cell growth by inducing fibroblast apoptosis, whereas Bai et al (36) discovered that sirtuin 1 was upregulated by resveratrol, leading to autophagy during HS treatment. Therefore, the present study hypothesized that resveratrol may inhibit the viability of hypertrophic scars by activating autophagy via miRNAs. The results revealed that resveratrol induced autophagy by inhibiting the expression levels of Rheb. Notably, miR-4654 served as the 'bridge' between resveratrol and the GTP-binding protein Rheb. Taken together, the findings of the present study confirmed that Rheb is a target gene of miR-4654 and partially determined the novel mechanism of miR-4654-induced autophagy, thereby providing further insights into putative targets for HS therapy.
Materials and methods
Chemicals and cell culture. Resveratrol was purchased from Target Molecule Corp. Normal skin-derived fibroblasts (NSFBs) and HSFBs were kindly provided by Dr Li Min at the Department of Dermatology, Gulou Hospital (Nanjing, China). 293T cells were obtained from the American Type Culture Collection. All cell lines were cultured in high-glucose DMEM (Gibco; Thermo Fisher Scientific, Inc.), supplemented with 10% FBS (Gibco; Thermo Fisher Scientific, Inc.) and 1% antibiotic-antimycotic (Thermo Fisher Scientific, Inc.), and maintained at 37˚C in an atmosphere of 5% CO2. HSFBs were treated with resveratrol or transfected with miR-4654 mimic and inhibitor or Rheb vectors. HSFBs treated with mixed vehicle controls were used as the control.
MTT assay. An MTT assay was used to determine cell viability. Briefly, HSFBs were seeded in 96-well plates at a density of 10^6 cells per well. Resveratrol was then diluted to various concentrations (0, 1, 10 or 100 µmol/l) using PBS and incubated with the cells (1x10^6) for 0, 24, 48 or 72 h at 37˚C. Following the incubation, 200 µl MTT medium was added per well and incubated with the cells at 37˚C for a further 4 h. Following treatment with dimethyl sulfoxide (100 µl), the optical density was measured at a wavelength of 570 nm for each experimental group using a microplate reader (Thermo Fisher Scientific, Inc.).
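For illustration, percentage viability is typically obtained by normalising the background-corrected OD570 of treated wells to that of untreated controls; the sketch below shows this calculation with hypothetical OD values and is not the authors' analysis script.

```python
# Minimal sketch of deriving percentage viability from MTT OD570 readings;
# all OD values below are hypothetical placeholders.
import statistics

def viability_percent(od_treated, od_control, od_blank=0.05):
    """Mean viability of treated wells relative to untreated control wells."""
    treated = statistics.mean(od_treated) - od_blank
    control = statistics.mean(od_control) - od_blank
    return 100.0 * treated / control

control_wells = [1.21, 1.18, 1.25]              # 0 umol/l resveratrol
treated_wells = {1: [1.10, 1.05, 1.08],          # umol/l -> OD570 replicates
                 10: [0.85, 0.88, 0.83],
                 100: [0.52, 0.49, 0.55]}

for dose, ods in treated_wells.items():
    print(f"{dose:>3} umol/l resveratrol: {viability_percent(ods, control_wells):.1f}% viability")
```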
According to the manufacturer's protocols, Rheb overexpression (OE) was accomplished using a Rheb OE vector (pcDNA3.1; Synthgene Biotech). Rheb knockdown (KD) was accomplished using a short hairpin RNA targeting Rheb contained within a pcDNA3.1 vector. An empty pcDNA3.1 vector was used as the negative control (NC) for the KD and OE vectors. HSFBs were plated in 6-well plates (10^6 cells per well) and transfected with these vectors at 5 nM using Lipofectamine® 3000 reagent (Invitrogen; Thermo Fisher Scientific, Inc.). The cells were used for further study 72 h following transfection.
Detection of the target site of miR-4654 on the 3'-UTR of Rheb using a dual-luciferase reporter assay. The potential target sequence for miR-4654 on the 3'-UTR of Rheb was predicted using TargetScan (www.targetscan.org). Subsequently, 1x10^6 293T cells were plated into six-well plates and cultured for 12 h at 37˚C and 5% CO2. The pGL3 luciferase reporter vector was obtained from Promega Corporation, and the Rheb wild-type (WT) or mutant (MUT) 3'-UTR was cloned into the pGL3 plasmid to synthesize pGL3-Rheb-WT or pGL3-Rheb-MUT. To establish the 3'-UTR of mutant Rheb (Rheb-MUT), the binding sites were mutated using a site-directed mutagenesis kit (NEB E0554; New England Biolabs, Inc.). 293T cells were co-transfected with 5 nM of miR-4654 mimics or mimic-NC and pGL3-Rheb-WT or pGL3-Rheb-MUT using Lipofectamine® 3000 (Invitrogen; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. Following incubation for 48 h at 37˚C, the transfected cells were harvested by centrifugation (350 x g; 3 min; 20˚C) and firefly luciferase activity was detected using a dual-luciferase reporter assay system (Promega Corporation). The data were normalized to Renilla luciferase activity.
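A minimal sketch of the usual dual-luciferase analysis is given below: each well's firefly signal is divided by its Renilla signal and then expressed relative to the mimic-NC group; the readings are hypothetical placeholders rather than data from this study.

```python
# Hypothetical dual-luciferase normalization: firefly/Renilla per well, then
# expressed relative to the mean ratio of the mimic-NC group.
def relative_activity(firefly, renilla, reference_ratio):
    ratios = [f / r for f, r in zip(firefly, renilla)]
    return [x / reference_ratio for x in ratios]

nc_firefly, nc_renilla = [5200, 5050, 5300], [2600, 2550, 2580]
mimic_firefly, mimic_renilla = [2400, 2500, 2350], [2580, 2610, 2550]

nc_mean_ratio = sum(f / r for f, r in zip(nc_firefly, nc_renilla)) / len(nc_firefly)
print("Rheb-WT + mimic-NC :", [round(x, 2) for x in relative_activity(nc_firefly, nc_renilla, nc_mean_ratio)])
print("Rheb-WT + miR-4654 :", [round(x, 2) for x in relative_activity(mimic_firefly, mimic_renilla, nc_mean_ratio)])
# A clear drop in the miR-4654 group, but not in the Rheb-MUT construct, would
# support direct targeting of the 3'-UTR.
```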
Fluorescence assay. HSFBs were infected with adenoviruses expressing the GFP-LC3B fusion protein [Umibio (Shanghai) Co., Ltd.] using Lipofectamine® 3000 reagent (Invitrogen; Thermo Fisher Scientific, Inc.) for 72 h to obtain the stable GFP-LC3 cell line. Briefly, 1x10^6 cells were seeded onto glass confocal dishes and allowed to settle for 12 h at 37˚C. miR-4654 mimics, miR-4654 inhibitors and the respective NCs were subsequently transfected into the stable cell line with or without Rheb OE plasmid or Rheb KD plasmid transfection. Subsequently, each group was treated with 100 µmol/l resveratrol or an equal volume of PBS. Following incubation for 72 h at 37˚C, cells in each group were fixed with 4% paraformaldehyde for 10 min at room temperature. The cell nuclei were stained with DAPI (1:2,000; Abcam) for 5 min in the dark at room temperature. The images were observed at x200 magnification using a confocal microscope.
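The legend for Fig. 6 states that GFP-LC3 fluorescence intensity was normalized against the control group using ImageJ; the short sketch below shows one way such exported intensity measurements could be normalized, using hypothetical values.

```python
# Sketch (hypothetical values, not the authors' measurements) of normalizing
# per-group GFP-LC3 fluorescence intensity exported from ImageJ to the control.
import statistics

mean_intensity = {                      # arbitrary units per field of view
    "control": [102, 98, 105],
    "resveratrol": [210, 195, 220],
    "resveratrol + miR-4654 mimic": [300, 315, 290],
}

control_mean = statistics.mean(mean_intensity["control"])
for group, values in mean_intensity.items():
    fold = statistics.mean(values) / control_mean
    print(f"{group:32s} fold change vs. control = {fold:.2f}")
```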
Statistical analysis. Statistical analysis was performed using GraphPad Prism v5 software (GraphPad Software, Inc.) and data are presented as the mean or percentage change ± SD from three independent experiments. Statistical differences between two treatment groups were compared using a paired Student's t-test, whereas comparisons among >2 groups were performed using a one-way ANOVA followed by Tukey's multiple comparisons test. P<0.05 was considered to indicate a statistically significant difference.
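The same comparisons can be reproduced outside GraphPad Prism; the sketch below uses SciPy and statsmodels for a paired t-test, a one-way ANOVA and Tukey's post hoc test on hypothetical replicate values, as an illustration rather than the authors' actual analysis.

```python
# Illustration of the statistical comparisons described above with open-source
# tools; the replicate values are hypothetical placeholders.
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Paired t-test for two matched treatment groups (n = 3 independent experiments).
control = [1.00, 1.05, 0.98]
treated = [0.62, 0.58, 0.66]
t_stat, p_value = stats.ttest_rel(control, treated)
print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.3f}")

# One-way ANOVA followed by Tukey's test for more than two groups.
groups = {"control": control, "res_10": [0.85, 0.88, 0.83], "res_100": treated}
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

values = [v for g in groups.values() for v in g]
labels = [name for name, g in groups.items() for _ in g]
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```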
Results
Autophagy is triggered by resveratrol in a dose-dependent manner in HSFBs. To identify the pharmacological effect of resveratrol, HSFBs were treated with resveratrol at different concentrations, and cell viability was subsequently determined using an MTT assay, which revealed a significant dose-dependent decrease in cell viability following resveratrol treatment compared with the untreated cells (Fig. 1A). Notably, the most significant level of inhibition occurred following 100 µmol/l resveratrol treatment. Subsequently, changes in the expression levels of the autophagy-related protein marker, LC3, were investigated. The results revealed that the LC3-II/LC3-I ratio was significantly increased in a dose-dependent manner (Fig. 1B). Conversely, the expression levels of the upstream gene, Rheb, were significantly decreased dose-dependently. These findings suggested that resveratrol may induce autophagy in HSFBs.
To further confirm whether resveratrol inhibited the viability of HSFBs through activating autophagy, HSFBs were co-treated with the autophagy inhibitor 3-MA (5 mM) and 100 µmol/l resveratrol. Interestingly, 3-MA treatment partially reversed the inhibitory effect of resveratrol on cell viability (Fig. 1C). The western blotting results further revealed that 3-MA treatment markedly attenuated the effect of resveratrol-induced autophagy; the LC3-II/LC3-I ratio and Beclin 1 expression levels were partially but notably downregulated in HSFBs treated with resveratrol and 3-MA compared with resveratrol alone (Fig. 1D). These results indicated that autophagy may be triggered by resveratrol in a dose-dependent manner to inhibit HSFB cell viability.
miR-4654 downregulates the protein expression levels of Rheb. The subsequent experiments investigated the effect of separate controls and a mixed control on the expression levels of Rheb, cell viability and autophagy. The results revealed that the mimic-NC or inhibitor-NC transfections did not alter the protein or mRNA expression levels of Rheb compared with the untreated group (Fig. S1A and B). Similarly, following the transfection of the cells with the mixed control (mimic-NC + inhibitor-NC + PBS), the expression levels of Rheb were slightly downregulated; however, no statistical differences were recorded among the different groups (Fig. S1A and B). A similar trend in the cell viability response was observed in each group (Fig. S2A). Similarly, no significant differences were identified between the separate controls and the mixed control transfections in the LC3-II/I ratio or Beclin 1 expression levels (Fig. S2B and C). In addition, the effect of the control transfections on the expression levels of LC3 was investigated using GFP-LC3 stable cells. Compared with the untreated group, none of the separate control groups displayed significant changes in the LC3 signal intensity (Fig. S3). In fact, even in the mimic-NC + inhibitor-NC + vector + PBS group, the LC3 signal was similar to that of the untreated group. Taken together, the results identified that the mixed control had a similar effect to the single controls, and had no effect on Rheb expression levels, HSFB viability or autophagy. Thus, the mixed controls were chosen for use in subsequent experiments.
The expression levels of miR-4654 were subsequently investigated in NSFBs and HSFBs from 14 independent repeated experiments. Compared with the NSFBs, the expression levels of miR-4654 in the HSFBs were significantly downregulated (Fig. 2A). Following resveratrol treatment, the expression levels of miR-4654 were significantly upregulated in a dose-dependent manner compared with the untreated HSFBs (Fig. 2B). Changes in miR-4654 expression in HSFBs transfected with the miR-4654 mimic and inhibitor were assessed by RT-qPCR. The results revealed that the cells transfected with the miR-4654 mimic had significantly upregulated miR-4654 expression levels compared with the mimic-NC group. In addition, a 50% decrease was observed in the expression levels of miR-4654 following the transfection with the miR-4654 inhibitor compared with the inhibitor-NC group (Fig. 2C). Subsequently, the mRNA and protein expression levels of Rheb were analyzed in the presence of the miR-4654 mimic, the miR-4654 inhibitor or resveratrol treatment; however, no significant differences were observed in the expression levels of Rheb mRNA between these groups (Fig. 2D). Interestingly, neither miR-4654 nor resveratrol was able to influence Rheb mRNA expression levels; however, resveratrol treatment significantly downregulated the protein expression levels of Rheb, while the inhibition of miR-4654 expression in HSFBs led to the significant upregulation of the protein expression levels of Rheb compared with the control group (Fig. 2E). These findings indicated that Rheb may be influenced by miR-4654 or resveratrol at the level of translation.
miR-4654 regulates the translation of Rheb through targeting its 3'-UTR region. To confirm the relationship between miR-4654 and Rheb, the potential binding sites of miR-4654 on the 3'-UTR of Rheb were investigated using bioinformatics analysis (Fig. 3A). To further confirm the hypothesis of the present study, a dual-luciferase reporter assay was performed by co-transfecting miR-4654 mimics or mimic-NC with the luciferase reporter vector containing the WT or MUT 3'-UTR of Rheb. The transfection of the miR-4654 mimic led to a significant reduction in the relative luciferase activity of Rheb-WT cells compared with the mimic-NC group; however, no significant differences were observed in the Rheb-MUT cells with miR-4654 treatment compared with the mimic-NC group (Fig. 3B). These findings suggested that miR-4654 may target the 3'-UTR of WT Rheb and negatively regulate the expression of Rheb.
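For readers unfamiliar with how tools such as TargetScan flag candidate sites, the sketch below checks a 3'-UTR fragment for a 7mer-m8 seed match; both sequences are hypothetical placeholders, and the real miR-4654 seed and Rheb 3'-UTR site are those shown in Fig. 3A.

```python
# Illustrative seed-match check of the kind performed by target-prediction tools.
# Both sequences are hypothetical placeholders, not the actual miR-4654 or Rheb 3'-UTR.
def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("AUCG", "UAGC"))[::-1]

def has_7mer_m8_site(mirna: str, utr: str) -> bool:
    """True if the UTR contains the reverse complement of miRNA nucleotides 2-8."""
    seed = mirna[1:8]                 # positions 2-8 of the mature miRNA
    return revcomp(seed) in utr

mirna_seq = "UAGGCUAACCUGAAGAUCCAA"        # hypothetical mature miRNA sequence
utr_seq   = "ACCGUAUUAGCCUGCAGGUAGCCAU"    # hypothetical 3'-UTR fragment

print("7mer-m8 seed match found:", has_7mer_m8_site(mirna_seq, utr_seq))
```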
miR-4654 enhances Rheb-induced autophagy in HSFBs.
A previous study demonstrated the critical role of Rheb in the regulation of cell autophagy and proliferation processes (38). Therefore, the present study aimed to investigate whether the miRNA-dependent downregulation of Rheb affected these processes in HSFBs. A 5-fold upregulation in mRNA expression levels and a 2.4-fold upregulation in protein expression levels were identified in HSFBs transfected with the Rheb OE vector compared with the control vector. Conversely, the mRNA expression levels of Rheb were significantly downregulated in cells transfected with the Rheb KD vector compared with the control vector, and the western blotting results revealed a 60% reduction in Rheb expression levels following Rheb KD in the cells (Fig. 4A).
Based on these results, the current study aimed to determine whether miR-4654 influenced HSFB cell viability through a Rheb-dependent mechanism. The MTT assay results revealed that the inhibition of miR-4654 expression increased the cell viability of HSFBs compared with the untreated group or the control group. However, the cell viability was reduced following the transfection with the miR-4654 mimic. In addition, following the co-transfection with the Rheb KD vector prior to the miR-4654 inhibitor, the cell viability was re-increased compared with the miR-4654 inhibitor group (Fig. 4B). The mRNA expression levels of Rheb were subsequently investigated following the different transfections. The regulation of miR-4654 expression levels using the mimic or inhibitor did not significantly alter Rheb expression levels in the HSFBs compared with the control groups; however, a 2.3-fold increase and a 50% reduction were observed in Rheb expression levels in the Rheb OE + miR-4654 mimic and miR-4654 inhibitor + Rheb KD groups, respectively (Fig. 4C). Finally, the expression levels of autophagy-related protein markers, including LC3 and Beclin 1, were analyzed. The data revealed that the overexpression of miR-4654 significantly upregulated the expression levels of these proteins compared with the control group. Moreover, following the transfection with the miR-4654 mimic, the expression levels of Rheb were significantly downregulated compared with the control group. In contrast, following the transfection of cells with the miR-4654 inhibitor, the expression levels of Rheb were significantly upregulated. Next, Rheb was overexpressed prior to the miRNA mimic/inhibitor transfection; the results demonstrated that the transfection with Rheb OE partially reversed the effect of miR-4654 on autophagy. Similarly, the co-transfection of Rheb KD and the miR-4654 inhibitor into HSFBs also significantly reversed the effect of the miR-4654 inhibitor alone (Fig. 4D). These data suggested that Rheb may be one of the targets of miR-4654 responsible for autophagy and the viability of HSFBs.
Resveratrol induces HSFB autophagy through the miR-4654/Rheb axis.
In light of the aforementioned findings, it was hypothesized that the effect of resveratrol on cell autophagy may be mediated by miRNA. As previously demonstrated, resveratrol treatment promoted the upregulation of miR-4654 expression levels in HSFBs. The subsequent results revealed that the cell viability was markedly reduced following resveratrol treatment compared with the control group, and this effect was exacerbated by the addition of the miR-4654 mimic (resveratrol + miR-4654 mimic group). Conversely, the inhibition of miR-4654 expression alongside resveratrol treatment effectively alleviated the effect of resveratrol treatment alone on cell viability. When Rheb was overexpressed before resveratrol and miR-4654 mimic treatment, the cell viability was re-increased significantly compared with the resveratrol + miR-4654 mimic group. The silencing of Rheb reversed the effect of resveratrol + miR-4654 inhibitor treatment (Fig. 5A). Moreover, RT-qPCR analysis revealed that neither resveratrol treatment alone nor treatment combined with miR-4654 mimics downregulated the expression levels of Rheb compared with the control group (Fig. 5B). However, these effects were significantly reversed following the co-transfection with the Rheb OE plasmid (resveratrol + miR-4654 mimic + Rheb OE group; Fig. 5B). Similarly, the results obtained from the western blotting experiments also suggested that the downregulated expression levels of Rheb following resveratrol treatment in HSFBs were miR-4654 dependent. Rheb was markedly reduced following resveratrol treatment alone or combined with the miR-4654 mimic compared with the control group. By contrast, the inhibition of miR-4654 expression alongside resveratrol treatment effectively counteracted the effect of resveratrol treatment alone. However, the expression level of Beclin 1 and the LC3-II/LC3-I ratio were both increased in the resveratrol group, and further increased in the resveratrol + miR-4654 mimic group, whereas the levels of both molecules were downregulated when the cells were treated with the miR-4654 inhibitor, exhibiting no difference compared with the resveratrol-alone group. When the cells were co-transfected with the miR-4654 mimic and Rheb OE prior to resveratrol treatment, no change was observed in the expression level of Beclin 1 or the LC3-II/LC3-I ratio compared with the resveratrol + miR-4654 mimic group, while the protein level of Rheb was re-upregulated significantly. Moreover, following the treatment of cells with the miR-4654 inhibitor and downregulation of Rheb expression levels with the Rheb KD plasmid, resveratrol treatment significantly inhibited cell autophagy (Fig. 5C). These findings suggested that the increase in autophagy by resveratrol in HSFBs may be dependent on the miR-4654/Rheb axis.
miR-4654 promotes the formation of autophagosomes in HSFBs following resveratrol treatment.
To further confirm the role of miR-4654 in the activation of autophagy, the fluorescence intensity in the stable GFP-LC3 HSFB cell line was analyzed in each group; miR-4654 mimics or inhibitors were transfected into the stable GFP-LC3 cell lines. Following 72 h of transfection, DAPI was used to stain the cell nuclei and the cells were observed under confocal microscopy to determine the GFP-LC3 fluorescence intensities (Fig. 6). The results revealed that excessive autophagy was induced following resveratrol treatment, while the transfection with the miR-4654 inhibitor reversed the effect of resveratrol treatment. In addition, upon the co-transfection of the cells with the miR-4654 mimic and Rheb OE prior to resveratrol treatment, the increased GFP-LC3 fluorescence intensity observed in the resveratrol + miR-4654 mimic group was reduced. Moreover, the co-transfection of cells with the miR-4654 inhibitor and the Rheb KD plasmid prior to resveratrol treatment re-increased the effect of the resveratrol + miR-4654 inhibitor treatment.
Figure 6. miR-4654 promotes the formation of autophagosomes in HSFBs following resveratrol treatment. HSFBs were infected with adenoviruses expressing the GFP-LC3 fusion protein to obtain a stable GFP-LC3 cell line. Different treatments/transfections were applied in the stable cell line, including resveratrol treatment alone, resveratrol + miR-4654 mimic transfection, resveratrol + miR-4654 mimic + Rheb OE transfection, resveratrol + miR-4654 inhibitor transfection + Rheb KD, resveratrol + miR-4654 inhibitor transfection and the control group. After 72 h, DAPI was used to stain the cells and the cells were observed at a magnification of x200 using confocal microscopy. Scale bar, 20 µm. The GFP-LC3 fluorescence intensity was normalized against the control group and determined using ImageJ software. Data are presented as the mean ± SD, n=3, **P<0.01; ##P<0.01 vs. the resveratrol group. LC3, microtubule-associated protein 1A/1B-light chain 3; OE, overexpression; KD, knockdown; control, mimic-NC + inhibitor-NC + PBS + vector group; HSFBs, hypertrophic scar-derived fibroblasts; miR, microRNA.
Discussion
Autophagy serves a vital role in sustaining cellular metabolism; however, in certain cellular settings, autophagy is also known to induce cell death (39,40). Autophagy has also been identified to regulate the apoptotic response (41), and it is established that autophagy is an essential process for the maintenance of cellular homeostasis (42). Furthermore, the association between autophagy and HSs has been widely reported by an increasing number of studies; for example, Shi et al (43) compared the autophagic capacity of HSs and normal skin and discovered that the generation of LC3 is prevented in HSs, which was suggested to benefit HS formation. In addition, Shi et al (44) demonstrated that WT p53-modulated autophagy and autophagy-induced fibroblast apoptosis suppress the formation of HSs in a rabbit ear model. Moreover, Shi et al (45) reported that interleukin-10 inhibits autophagy in HSFBs under starvation stress to reduce HS formation. These findings suggested that autophagy may be involved in the proliferation and survival of HSFBs (46). The present study demonstrated that resveratrol could efficiently trigger HSFB autophagy, as evidenced by the increased LC3-II/LC3-I ratio and the upregulated expression level of Beclin 1, which are markers of autophagy. The most marked effects were observed in cells treated with 100 µmol/l resveratrol for 72 h, in which cell viability was severely impaired. In addition, using 3-MA, a common inhibitor of the autophagic pathway, it was found that inhibition of autophagy could significantly reverse the effect of resveratrol treatment. It was therefore hypothesized that a high level of autophagy in HSFBs might be essential to inhibit cell viability.
The present study also demonstrated that resveratrol treatment markedly reduced the expression levels of Rheb in a dose-dependent manner. Rheb has been identified as a critical regulator of autophagy in the disease state; a previous study has reported its ability to activate autophagy, thereby leading to its own inactivation, via the mTOR complex 1 pathway (37). Another study suggested that reduction of Rheb can initiate autophagy in macrophages, with increased Beclin 1 and autophagy-related protein 7 (47). These findings suggest that inhibition of Rheb may be important to suppress HSFB proliferation and induce autophagy, which is consistent with the present study.
An increasing number of miRNAs have been reported to serve as essential components of multiple pathophysiological and biomechanical processes (38,48,49). HSs have been discovered to be associated with abnormal changes in the expression levels of multiple miRNAs, including the upregulation of miRNAs such as miR-30a-5p and miR-152, and the downregulation of miRNAs such as miR-143-5p and miR-4328 (50,51). In addition, miRNAs have been identified to affect the proliferation and apoptosis of fibroblasts, as well as extracellular matrix deposition (52,53). The roles of numerous miRNAs have been reported in both biological and clinical processes, although the roles of the majority of miRNAs have yet to be elucidated. The present study revealed that the expression levels of miR-4654 were significantly downregulated in HSFBs compared with NSFBs, while its level was significantly increased in the cells following treatment with different concentrations of resveratrol. In addition, the expression levels of miR-4654 were closely associated with the degree of autophagy, which is consistent with the findings of a previous study (54). In the present study, overexpression of miR-4654 notably inhibited Rheb expression at the protein level, and transfection of the miR-4654 inhibitor markedly increased the level of Rheb when compared with the control group. Subsequently, the current study further investigated the effect of miR-4654 and Rheb on the autophagic process. Results from bioinformatics analysis and a dual-luciferase reporter assay suggested that miR-4654 may directly inhibit the translation of Rheb by targeting its 3'-UTR region. Restoring Rheb expression reversed the inhibition of cell viability and the initiation of autophagy in HSFBs induced by miR-4654 overexpression. Similarly, suppressing Rheb expression in Rheb-depleted cells inhibited cell viability and re-enhanced cell autophagy compared with the miR-4654 inhibitor group. Accordingly, these data indicated that resveratrol-induced HSFB autophagy might be regulated by the upregulation of miR-4654, which in turn suppressed Rheb.
Finally, co-treatment with the miR-4654 mimic further suppressed cell viability and enhanced cell autophagy in resveratrol-treated HSFBs. When the cells were treated with the miR-4654 inhibitor prior to resveratrol administration, higher cell viability and a lower degree of autophagy were observed compared with the resveratrol-treated group. By contrast, dysregulation of Rheb reversed the function of the resveratrol and miR-4654 inhibitor treatment, and had no effect on the role of the miR-4654 mimic treatment upon resveratrol administration. The hypothesis of the present study was further verified by the fluorescence assay, which revealed that a higher number of autophagosomes were present following transfection with the miR-4654 mimic and resveratrol treatment, whereas knocking down miR-4654 expression reduced the fluorescence intensity of LC3 in HSFBs.
In conclusion, the findings of the present study suggested that resveratrol treatment may promote autophagy by upregulating miR-4654 expression levels and thereby downregulating the expression of its downstream gene, Rheb. The findings of the current study are limited by the fact that the effect of resveratrol treatment on NSFBs was not investigated, which may limit the clinical impact of these results. Despite this limitation, the findings still shed light on the molecular mechanism underlying the miR-4654-mediated activation of autophagy, and the results suggested that miR-4654 may be a potential biomarker and therapeutic target for the treatment and diagnosis of HSs. | 2020-08-06T09:03:41.872Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "ea15edaf54e78bb4f1c9a5a875bbdebd6e61f952",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2020.11407/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b40050754ee41cc832012337cdba0e0e626ae8dc",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
258502373 | pes2o/s2orc | v3-fos-license | EXTERNAL AND INTERNAL QUALITIES OF CHICKEN EGGS EARLY PRODUCTION AT VARIOUS STORAGE TIMES AT ROOM TEMPERATURE
Eggs are a widely consumed food containing essential amino acids such as lysine, tryptophan, and methionine. Egg quality can be assessed from the storability of eggs after they are produced. This study aimed to evaluate the effect of various storage durations at room temperature on the external and internal quality of table eggs of the Isa Brown strain at the beginning of production. The research method was a laboratory experiment using a Completely Randomized Design. The treatments consisted of egg storage durations of 0 days (P0), 7 days (P1), 14 days (P2), and 21 days (P3), with 4 replicates and 4 eggs per experimental unit. The research variables included external qualities (egg weight, egg index, shell weight, and eggshell thickness) and internal qualities (egg white index, yolk index, yolk color, and egg pH). The data were analyzed with ANOVA; where differences between treatments were found, the analysis was followed by the Least Significant Difference test. The results showed that storage duration had a highly significant influence (P<0.01) on the external quality of the egg, including egg weight and the weight and thickness of the eggshell, and no influence (P>0.05) on the egg index. Storage duration also had a highly significant influence on the internal quality of the eggs (P<0.01). Based on these results, it can be concluded that early-production chicken eggs stored for up to 21 days at room temperature experience a decrease in external and internal quality but still meet Indonesian National Standard (SNI) 3926:2008.
INTRODUCTION
Eggs are one of the most perfect food sources of protein, a provider of nutrients with great biological value for the growth and development of body tissues (Feddern et al., 2017) at affordable prices (Wulandari et al., 2022) in all circles of society compared to large and small ruminant meat foodstuffs. One of the most important consumer criteria of egg quality is its freshness (Sati et al., 2020), but it is also important for its feasibility as a food ingredient to be consumed and needed especially by the egg processing industry. An important parameter of egg freshness is related to the quality of eggs, and their decrease depends mainly on the time and temperature of storage (Yimenu et al., 2017). However, over time storage after the eggs are produced by the hen in the laying hen farming business, the quality of eggs begins to decline (Oliveira et al., 2020).
External and internal qualities such as egg weight, albumen height, and egg pH are the main indicators for evaluating freshness, and are strongly influenced by storage time and temperature (Lee et al., 2016;Dong et al., 2017;Feddern et al., 2017) and are comprehensive indicators to reflect egg quality (Oliveira et al., 2020).
The results of the study by Padhi et al. (2013) showed that chickens from the Vanaraja male line breed at various ages of 28, 40, 52, 64, and 72 weeks also had a different influence on the quality of chicken eggs. The weight of the egg increases linearly until the age of 52 weeks and then remains stable, while the weight of the eggshell increases with age and the thickness of the shell is lower at the age of 28 and 40 weeks.
The novelty of this study was to evaluate the storability of chicken eggs from the Isa Brown strain at the initial age of production, with storage times of 0, 7, 14, and 21 days at room temperature, and to evaluate the changes in egg quality caused by storage. During storage, chicken eggs continue to carry out life activities, accompanied by various complex physical, chemical, and physiological changes (Al-Obaidi et al., 2015) that affect their quality. Egg weight, shell weight, albumen height, Haugh unit (HU), and albumen viscosity decrease markedly with increasing storage time and temperature, whereas egg pH increases significantly with increasing storage period and temperature. Various factors such as chicken age, storage temperature, CO2 presence, and storage time affect egg quality (Lee et al., 2016), resulting in albumen thinning, increased pH, weakening and stretching of the vitelline membrane, and increased moisture content in the yolk.
The quality of albumen is also an indicator of egg freshness which is influenced by genetics and environmental factors such as storage time, storage conditions, transportation, as well as the sales process can also affect the decline in egg quality (Altunatmaz et al., 2020.).
The time of distribution of eggs from chicken farmers to consumers is very varied, so the quality of eggs consumed by the community also varies, as well as consumer habits of storing eggs in open space before consumption. Therefore, the purpose of this study is to evaluate the effect of various storage durations at room temperature on the internal and external quality of early egg production originating from the chicken farming business.
Research materials
The research material consisted of 64 eggs of the Isa Brown strain from hens at the beginning of production, obtained from the Diva Farm Wonokoyo livestock business, Kedung Kandang District.
Research Methods
The research method was a laboratory experiment using a Completely Randomized Design (CRD). The treatments comprised egg storage durations of 0 days (P0), 7 days (P1), 14 days (P2), and 21 days (P3), with 4 replicates and 4 eggs per replicate unit. The average temperature during storage was 20-25°C.
Research Variables
The research variables observed covered the external and internal qualities of the eggs:
1) Egg weight (grams): weighed using digital scales.
2) Egg index (%): measured using calipers as the ratio between the width and the length of the egg.
3) Shell weight (grams): weighed using digital scales.
4) Shell thickness (mm): measured using calipers.
5) Egg white index: the egg was broken onto a glass plate and the height and diameter of the egg white were measured with calipers; IPT = egg white height / average egg white diameter.
6) Yolk color: obtained by comparison with a Roche yolk color fan on a scale of 1-15.
7) Yolk index: the egg was broken onto a glass plate, the yolk height was measured using a toothpick and calipers, and the yolk diameter with calipers; IKT = yolk height / average yolk diameter.
8) Egg pH: measured with a pH meter to determine the degree of acidity of the egg.
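For clarity, the index calculations listed above can be written out explicitly; the sketch below assumes the measurements are already in the stated units, and the sample values are illustrative rather than taken from this study.

```python
def egg_index(width_cm: float, length_cm: float) -> float:
    """Egg (shape) index: egg width divided by egg length, expressed as a percentage."""
    return width_cm / length_cm * 100

def albumen_index(height_mm: float, mean_diameter_mm: float) -> float:
    """Egg white (albumen) index: albumen height divided by its average diameter."""
    return height_mm / mean_diameter_mm

def yolk_index(height_mm: float, mean_diameter_mm: float) -> float:
    """Yolk index: yolk height divided by its average diameter."""
    return height_mm / mean_diameter_mm

# Illustrative measurements for a single egg (not data from this study)
print(round(egg_index(4.4, 5.7), 2))       # roughly 77.19 %
print(round(albumen_index(5.0, 60.0), 3))  # roughly 0.083
print(round(yolk_index(16.0, 40.0), 3))    # roughly 0.4
```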
Data Analysis
The data obtained in this study were analyzed with ANOVA; where differences between treatments were found, the analysis was followed by the Least Significant Difference test.
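As an illustration of this analysis pipeline, the sketch below runs a one-way ANOVA followed by a Least Significant Difference comparison; it assumes SciPy and NumPy are available, and the egg-weight values are invented for demonstration only.

```python
import numpy as np
from scipy import stats

# Hypothetical egg-weight data (g) for the four storage treatments, 4 replicates each
groups = {
    "P0 (0 d)": [65.8, 66.1, 65.2, 65.9],
    "P1 (7 d)": [63.0, 62.5, 63.4, 62.8],
    "P2 (14 d)": [60.1, 59.8, 60.5, 59.6],
    "P3 (21 d)": [58.9, 58.4, 59.2, 58.7],
}

data = list(groups.values())
f_stat, p_value = stats.f_oneway(*data)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Least Significant Difference, using the within-group mean square from the ANOVA
n_per_group = 4
df_error = sum(len(g) - 1 for g in data)
mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in data) / df_error
t_crit = stats.t.ppf(0.995, df_error)  # two-sided alpha = 0.01
lsd = t_crit * np.sqrt(2 * mse / n_per_group)
print(f"LSD (alpha = 0.01) = {lsd:.2f} g")

# Treatment means that differ by more than the LSD are declared different
print({k: round(np.mean(v), 2) for k, v in groups.items()})
```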
Quality of Early Production Chicken Eggs
The quality of early chicken eggs in production can be judged from various storage lengths at room temperature including external qualities: Egg weight, eggshell weight and thickness, and egg index. The internal qualities of eggs: white index, yolk index, yolk color, and pH.
External qualities of eggs

Egg Weight
As shown in Table 1, the weight of the chicken eggs in this study, based on the National Standardization Agency classification (SNI 2008), falls into the medium (50-60 g) and large (>60 g) categories, with averages ranging from 58.88 g to 65.75 g.
The analysis of variance showed that storage duration had a highly significant influence (P < 0.01) on the weight of early-production chicken eggs. The average egg weights are given in Table 1.
The highest percentage of egg weight loss, 10.45%, occurred at 21 days of storage and was not statistically different from the weight loss at 14 days of storage. Longer storage results in a greater decrease in egg weight owing to the accumulated degradation of the egg's organic matter in the form of H2O evaporation and the release of CO2, NH3, N2, and H2S gases. This agrees with the results of Quan et al. (2021), who found that egg weight loss during storage occurs naturally due to the evaporation of water. Greater evaporation results in the loss of solvent from the egg through the cracks of the eggshell (Yeasmin et al., 2014; Lee et al., 2016).
In addition, egg weight loss is associated with the porosity and thickness of the egg shell, egg white (albumen), and water conductivity (Brodacki et al., 2019). An increase in the number of eggshell pores during storage at room temperature results in an increase in the release of moisture and carbon dioxide gases, ammonia, nitrogen, and hydrogen sulfide gas resulting from the respiratory activity of eggs (Dada et al., 2018). Sufficiently high storage temperature conditions lead to drying of the cuticle and the shell membrane in the egg, resulting in an increase in the pore area and permeability of the egg (Kopacz and Drażbo, 2018), the breakdown of carbonic acid from the egg white produces carbon dioxide (CO2) and water (H2O), subsequently exiting through the pores of the shell resulting in a decrease in the thickness of the egg white and becoming watery, thus affecting the weight loss of the egg (Eke, Olaitan, and Ochefu, 2013).
Egg Index
As shown in Table 1, storage duration did not affect (P > 0.05) the egg index of early-production chicken eggs. The egg index in this study ranged from 76.87% to 80.49%, showing no difference among the storage durations. The egg index is classified into oblong eggs (SI < 72), normal (standard) eggs (SI = 72-76), and round eggs (SI > 76). The egg index relates to egg shape: the higher the egg index value, the rounder the egg, while a lower egg index value indicates an increasingly oblong shape (Duman et al., 2016). The diameter of the uterus is the more decisive factor controlling differences in the egg index: the eggs produced tend to be round when the uterus diameter is wide and oblong when the uterus diameter is narrow (Rahman, 2013; Liu et al., 2017).
Altunatmaz et al. (2020) stated that egg quality is significantly influenced by environmental conditions such as temperature, humidity, and length of storage, although temperature and storage duration did not significantly affect (P>0.05) external quality characteristics such as egg length, egg width, egg index, and shell weight. Similarly, Okonkwo et al. (2021) found that storage method and storage duration had no differing effect on the egg index.
In the study by Khatun et al. (2016), storage durations of 0 days (control), 3 days, and 7 days at room temperature (20-25°C) did not give a significant difference in egg length and egg width (P>0.05), with averages of 5.73±0.01 cm and 4.41±0.01 cm. This is supported by Sati et al. (2020), who showed that egg storage durations of 0 days (control), 5 days, and 10 days did not give a difference (P>0.05) in the egg index percentages of 78.77, 78.39, and 78.52, in agreement with the results of Şekeroğlu, Gok, and Duman (2016).
Eggshell Weight
As shown in Table 1, storage duration exerted a highly significant influence (P<0.01) on the shell weight of early-production chicken eggs.
The eggshell is the outermost layer of protective egg contents containing about 95.1% inorganic matter and 3.3% protein with components in the form of calcium carbonate and the rest such as magnesium, phosphorus, sodium, potassium, zinc, iron, manganese, and copper with a structure including (a) the cuticle, has no pores, but is gas-passable; (b) the spongy/calcareous layer consists of protein fibers and a lime layer (c) mammary layer; and (d) membrane layers ( Ketta and Tůmová, 2016;Ajala et al.,2018) The percentage of eggshell weight from the study of chicken egg weight at various storage durations ranged from 11.92% -13.42%. The decrease in eggshell weight is thought to be due to the high level of moisture loss through the eggshell from the albumen during storage at room temperature. The eggshell is directly related to the surrounding atmospheric conditions, drying becomes very fast and the shell becomes drier as the storage time increases, thus making the eggshell lighter. The results of the research of Hagan, Adjei, and Baah, 2013 showed that storage time had a significant influence on reducing eggshell weight and egg weight (P<0.05). The weight of the shell is part of the total weight of the egg, the decrease in the weight of the egg will reflect in the decrease in the weight of the shell because moisture is lost from the shell before the effect is transferred to the contents of the egg.
In the study by Khatun et al. (2016), storage length did not have a significant effect on eggshell weight, with averages of 6.18±0.10 g at 0 days, 5.35±0.10 g at 3 days, and 6.04±0.10 g at 7 days. In contrast, Ibrahim et al. (2020) reported a significant influence (P<0.001) of storage duration on eggshell weight: the longer the storage, the greater the eggshell weight, with the lowest average on day 0 (5.53±0.18 g) and the highest on day 28 (6.21±0.10 g). Variation in eggshell weight can be caused by factors such as chicken strain, age, and storage environmental conditions.
Shell thickness
As per Table 1. Various storage lengths exert a very noticeable influence (P<0.01) on the thickness of the shell of early-production chicken eggs. The thickness or strength of the shell is the most commonly used parameter for measuring the external quality of eggs.
The percentage decrease in eggshell thickness at the various storage durations ranged from 37% to 76%, with the greatest decrease at the 21-day storage period. Increasing storage time produced a decrease in eggshell thickness, in line with the decrease in eggshell weight. It is suspected that evaporation of the egg contents can degrade eggshell thickness: the longer the storage, the more gas evaporates, resulting in degradation of the eggshell. Hagan, Adjei, and Baah (2013) and Grashorn (2016) showed a significant effect of storage time (P<0.05) on eggshell thickness and shell weight, with longer storage leading to thinner shells. Khatun et al. (2016) likewise reported a decrease in eggshell thickness over storage periods of 0 days (0.37a±0.008), 3 days (0.33b±0.01), and 7 days (0.34ab±0.01). Similarly, Okonkwo et al. (2021) reported a significant decrease in shell thickness from 0.80 at a 3-day storage period to 0.70 at 15 days of storage as a result of CO2 loss from the eggs.
Internal qualities of eggs

Egg White Index
As per Table 2. The results showed that various storage lengths had a very noticeable influence (P<0.01) on the egg white index of early-production chickens.
The value of the egg white index is related to the indicator of egg quality, the higher the value of the egg white index, the better the quality of the eggs, this is related to the level of freshness of the eggs. At storage lengths of 14 days and 21 days, the egg white index showed a very significant decrease compared to the storage lengths of 0 days and 7 days. This can be due to the increase in storage time affecting the viscosity of egg whites due to the evaporation of CO2 and H2O. Santos et al. (2019) reported egg white index values were associated with an increase in albumen width and length along with an increase in storage length (p<0.05). In contrast, albumen height decreased due to the storage temperature increasing from 4 to 24°C (p<0.05). The results of the study by Okonkwo et al., 2021 showed that the length of storage influenced the quality properties of internal eggs (P<0.05).
Albumen height, Albumen index, Albumen ratio, Albumen weight, yolk height, yolk index, and HU decrease with an increase in storage time while increasing storage length results in an increase in Albumen length, Albumen width, and yolk diameter.
The increase in the length and width of the albumen is due to the high CO2 loss that occurs at room temperature (Dada et al., 2018). In the study of Khatun et al. (2016), storage lengths of 3 days and 7 days resulted in a difference in albumen height (P<0.05) of 5.06±0.56 and 3.95±0.56. Similarly, Sati et al. (2020) found that storage length produced a significant difference in the albumen index (%): 0 days (11.37a), 5 days (8.53b), and 10 days (7.21c).
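The Haugh unit (HU) referred to in this discussion is conventionally derived from albumen height and egg weight; a minimal sketch of that calculation is given below, with purely illustrative input values (HU was not among the variables measured in this study).

```python
import math

def haugh_unit(albumen_height_mm: float, egg_weight_g: float) -> float:
    """Haugh unit: 100 * log10(h - 1.7 * w**0.37 + 7.6), with h in mm and w in g."""
    return 100 * math.log10(albumen_height_mm - 1.7 * egg_weight_g ** 0.37 + 7.6)

# A fresh egg with a tall albumen scores higher than a stored egg with a flattened albumen
print(round(haugh_unit(7.0, 60.0), 1))  # fresh egg (illustrative)
print(round(haugh_unit(3.5, 58.0), 1))  # stored egg (illustrative)
```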
Yolk Index
As shown in Table 2, storage duration had a highly significant influence (P<0.01) on the yolk index of early-production chicken eggs. The yolk index is an indication of egg freshness: the higher the index, the more desirable the egg quality. Each storage duration contributed to a decrease in the yolk index, with the 14-day and 21-day storage periods giving the largest drop of 0.12. This can be caused by longer egg storage at room temperature, during which the yolk becomes larger and softer and the vitelline membrane is damaged, so that the yolk breaks and the yolk index decreases. Okonkwo et al. (2021) reported that the yolk index decreases with increasing storage time due to the movement of CO2 and moisture through the eggshell, causing changes in albumen, yolk, and egg weight (Dada et al., 2018). This is supported by Ebegbulem and Asukwo (2018), who found that 7 days of storage at room temperature lowered the yolk index from 0.47 to 0.38, a decrease attributed to the decomposition of the ovomucin glycoprotein in the egg. Feddern et al. (2017) likewise reported a decrease in the yolk index to 0.39, 0.32, 0.24, and 0.17 at storage lengths of 1, 2, 3, and 4 weeks. A fresh yolk is round and firm; as storage time increases, the yolk degrades in quality by absorbing water and increasing in size. The integrity of the yolk depends on the strength of the vitelline membrane, which declines with the length of storage, making the yolk more prone to rupture. The increase in yolk weight resulting from the absorption of water from the albumen layer leads to a decrease in the yolk index with prolonged storage.
Yolk Color
As per Table 2. The results showed that various storage lengths had a very noticeable influence (P<0.01) on the color of the yolk of early-production chickens.
The color density of the yolk at 14 days and 21 days of storage resulted in the largest decrease in the color density of the yolk by 1.25 compared to other storage hours. The color of the yolk is one of the factors in determining the internal quality of the egg. The color range of egg yolk in the color fan (Roche yolk color fan) is 1--15 from pale to dark orange (intense).
The highest yolk color score was found in P0, eggs that had just been laid; because these eggs did not pass through a storage period, their yolk color value was the highest compared with the other treatments. Yolk color is influenced by the age of the chicken and by substances contained in the feed such as xanthophylls, beta-carotene, chlorophyll, and chitosan. During egg storage, H2O migrates from the egg white to the yolk, so the color density of the yolk becomes lower the longer the egg is stored. Yolk color is directly related to carotenoid pigments, such as the xanthophylls in corn (Souza et al., 2021). The reduction of yolk color with storage time is possibly due to dilution of the yolk pigment caused by damage to the vitelline membrane. This agrees with the results of Hagan et al. (2013), Feddern et al. (2017), and Kruenti et al. (2022), which showed a decrease in yolk color density with increasing length of egg storage.
Egg pH
As shown in Table 2, storage duration had a highly significant influence (P<0.01) on the pH of early-production chicken eggs. The largest increase in egg pH, 0.41, occurred between the 14-day and 21-day storage periods. This can be caused by the diffusion of several components, including CO2 from the egg white through the eggshell and H2O from the egg white into the yolk. Egg white largely contains the inorganic salts sodium and potassium bicarbonate; when CO2 escapes during storage, the egg white becomes alkaline, which raises the pH of the egg white and thus of the egg. Souza et al. (2021) reported an increase in pH associated with damage to the thick albumen (decreased HU), causing the albumen to become increasingly liquid and dilute due to changes in the ovomucin-lysozyme complex as pH increases with storage time (Feddern et al., 2017).
CONCLUSION
Based on the results of the study, it can be concluded that early-production chicken eggs stored for up to 21 days at room temperature experience a decrease in external and internal quality but still meet Indonesian National Standard (SNI) 3926:2008. | 2023-05-05T15:13:12.050Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "20e0ee765b4a9053c76861b0c7e804cdd6aaa78b",
"oa_license": "CCBY",
"oa_url": "https://jitek.ub.ac.id/index.php/jitek/article/download/681/381",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e1ade0d641bdec9c8ce990ca8969f758ce96b39c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
233922388 | pes2o/s2orc | v3-fos-license | hess-2021-36
This manuscript presents a method to automatically measure water level in streams using NIR-cameras and image processing techniques. The paper is generally well written and the results are promising. However, I do think that the paper should be improved before it can be considered for publication in HESS. First, the structure of the paper is somewhat unbalanced. The introduction is relatively extensive, making the results and discussion section seem rather marginal. I think the latter section would benefit from a stronger and more elaborated evaluation of the presented method, and include a more detailed outlook to future work and potential applications. Second, the data availability statement is not in line with HESS policy. This should be updated before the manuscript can be considered. Finally, the paper would benefit from additional support for and clarification about the setup and choices made, detailed in the general and specific comments below.
Introduction
Another reason why ephemeral streams are so relevant is perhaps that the onset of flow may result in the mobilization of (anthropogenic) debris and sediment as well?
The link with citizen science makes sense, as this offers an unprecedented opportunity for upscaling data collection. However, how would this work for the locations of interest in this manuscript, i.e. ungauged headwater catchments? These may not be locations where many citizens are available to contribute to data collection. The introduction in general is well-written, but I do think it is a bit long and goes off on a tangent here and there. Perhaps the authors can reduce the length a bit and focus more on the potential of their approach, and why it is a promising addition to the existing suite of monitoring techniques.
Methods
Perhaps a sketch of the monitoring setup can be included in addition to Fig. 1. What is the motivation for taking images every 30 minutes? What is the relevant timescale for ephemeral streams? I'd argue that one to a couple of images per day would suffice, drastically reducing the required storage. With the current setup someone needs to read out the data every two weeks, which I would personally find quite demanding for ungauged headwater catchments. Fig. 1: I find this figure a bit unclear; perhaps some additional headings to complete the workflow would make it clearer. Please include some more details about the setup. How long is the pole? How is the pole robustly placed in what looks to be a rather "wild" environment? What is the distance between the pole and the camera? How is the camera fixed? What is the estimated pixel length (mm, cm)? Maybe an overview map can be included to show the outdoor testing locations. For the data validation, was the water level identification done by the same person or by a group of people? If the latter, was there any bias between the observers? Also, I was wondering if there was a reason not to measure the water level with an accurate water level logger.
Results and discussion
Why was Test A done with the same water level for each image? As this method is most valuable to detect changes in water level, would it not have been valuable to test the method for the full ranges of values? The method seems to work quite well for Test C, which includes quite some dynamic behaviour. For Test D and E, the dynamics seem not to be captured completely. Can the authors elaborate on this, including the implications for what that would mean for long-term monitoring? The discussion is rather limited. I would encourage the authors to include a critical synthesis and more elaborated outlook on future work. What are the next steps for this method? How do the authors envision application in the field? Only for measurements of a couple of days, or also for seasonal or even multi-year monitoring efforts? When reading the paper I partially get very enthusiastic about this method, because it offers a nice new method for automatic monitoring. On the other hand, I keep on wondering what the added value of this method is over a traditional water level logger with millimetre accuracy, at more or less the same price. Such sensors are very robust, don't need frames, and additional constructions, have a very long battery life (weeks, months), and don't need any further processing. What I also wonder is whether this approach may be expanded with detection and monitoring of (anthropogenic) debris, such as woody debris, plastic pollution, or otherwise (van Lieshout et al., 2020). Then there's a clear added value over more traditional sensing equipment.
Conclusions
In the conclusions the authors state that their method allows for "supervising the stream area and banks". This is not elaborated on in the paper, so I suggest either omitting this statement or providing additional analyses to support it in the paper.
Data and code availability
The data availability is not in line with HESS policy: https://www.hydrology-and-earthsystem-sciences.net/policies/data_policy.html. I would strongly suggest making the data openly available through a repository, and otherwise following HESS policy by including a statement on why they are not available ("if the data are not publicly accessible, a detailed explanation of why this is the case is required").
Specific comments:
Line 18-21: Maybe omit some references, seems a bit much. Line 26-48: Useful summary of other techniques and drawbacks, but can maybe be written more concisely. Line 85: Although not "purely hydrological", van Lieshout et al. (2020) recently demonstrated the potential of using cameras and deep learning for automatic plastic monitoring in rivers. Quite some lessons learned and practical considerations may be relevant for this manuscript as well. Line 122: How is the ROI automatically trimmed around it? Line 137: What moving average is used? E.g. how many datapoints? How does the length of the window influence the accuracy? Line 138: Is the 90% quantile based on the entire dataseries? Or a subset (e.g. without outliers)? | 2021-05-08T00:02:35.425Z | 2021-02-27T00:00:00.000 | {
"year": 2021,
"sha1": "71f0338b0879e94bdddbb4bdc17ee0a09fead34b",
"oa_license": "CCBY",
"oa_url": "https://hess.copernicus.org/preprints/hess-2021-36/hess-2021-36.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c18c83f22da332d36185330de84ab52f639c133a",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
245720909 | pes2o/s2orc | v3-fos-license | Literacy Challenges in Rural China
This article discusses the obstacles encountered in the process of literacy in rural China. Although China's overall literacy rate is high, there is a huge literacy rate gap between urban and rural areas. On this basis, this article explores the factors affecting the literacy rate in rural China, and concludes that the main factors are financial status, gender differences, health challenges, and policy interventions.
Introduction
China is located in eastern Asia, and it is the most populous developing country in the world. China has a large rural population, accounting for 40% of the total population (UNESCO 2020). According to data released by the National Bureau of Statistics (2020), China's GDP is 15.52 trillion US dollars, of which rural areas contribute 0.7 trillion US dollars. China's gross national income is 16057 U.S. dollars per capita in purchasing power parity. The country's HDI is 0.762, which is a high number (numbers between 0.7 and 0.8 are classified as high).
The Chinese literacy standard is to master 4000-6000 Chinese characters and use them proficiently (Ministry of Education, 1993). The adult literacy rate is 96.8% (95.2% for women and 98.5% for men), which shows that China's literacy rate is very high. The gap in literacy levels between urban and rural areas is obvious, with only 87.13% of people in rural areas literate (National Bureau of Statistics, 2020). There are more than 80 million illiterate people in China, 90% of which are from rural areas (National Bureau of Statistics, 2020). Therefore, this article will focus on the challenges facing Chinese rural literacy.
Financial situation
The income gap between urban and rural residents is large, and the income of urban residents is about 2.5 times that of rural residents (National Bureau of Statistics, 2020). Inequality in income has also indirectly caused the rural population to pay a higher price for literacy. According to a study by Park et al. (2002), rural households spend 14% of their annual income on education, and this large expenditure leads rural people to believe that they have no right to literacy. 11% of Chinese children living in rural areas drop out of school (National Bureau of Statistics, 2020). In addition, the impact of poverty on literacy levels continues into the next generation; studies have shown that this leads to the persistence of illiteracy and poverty (Xing et al., 2019). The lack of funds in rural areas also brings many other challenges, for example a lack of material resources such as textbooks, inaccessible dormitories, inconvenient transportation, and the inability of local authorities to offer wages high enough to attract educated teachers to rural areas (Fan and Xie, 2014).

Gender differences

Connelly and Zheng (2003) believe that gender literacy gaps are mainly concentrated in rural areas. The traditional view in rural China is that boys come first and women are at the bottom of the family hierarchy (Hu and Scott, 2016); literacy is therefore seen as useless for girls (Lin and Pei, 2016). Because of this perception, women account for 71% of the illiterate population in rural areas (National Bureau of Statistics, 2020). The ratio of males to females in rural areas is 122:85, which is a serious imbalance (National Bureau of Statistics, 2020).
Health challenges
Malnutrition is a serious obstacle to literacy in rural China (Hannum and Liu, 2014). Malnutrition can cause serious damage to children's development (such as inattention and memory loss), which causes some rural children to lose literacy or die of disease (Lee and Frongillo, 2001).
Policy intervention
National policy intervention has played a key role in promoting rural literacy. China implemented the Compulsory Education Law as an education reform in 1986 and achieved this goal in 2011 (Wu, 2012). Since implementation, the rural enrollment rate has increased significantly. In 1987, only 49% of citizens aged 15-24 in rural areas received a junior high school education, compared with 94% in 2015 (Yang and Guo, 2020). In addition, the one-child policy reduces discrimination against girls (Lee, 2014). However, due to weak rural supervision, this policy has not been fully implemented in rural areas (Greenhalgh, 2008).
Implications
Education and improved literacy rates are powerful solutions to rural poverty (Liu, 2020). Sen (1999) attributed poverty to a deprivation of capabilities. In addition, literacy is closely related to people's future development in employment, quality of life, and empowerment (Cree et al., 2012).
Conclusion
There are some literacy challenges in rural areas, such as financial difficulties, gender inequality, and health problems. These challenges need to be addressed through national policies and the rural population must cooperate in order to solve them. The improvement of rural literacy rates will lead to rural employment, economic growth, and the development of the country as a whole. | 2022-01-06T16:26:47.897Z | 2021-12-31T00:00:00.000 | {
"year": 2021,
"sha1": "1126d6c4e0eda2d75a1816f54342454f71e51bce",
"oa_license": "CCBYNC",
"oa_url": "https://en.front-sci.com/index.php/jher/article/view/577/683",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e7993bed10c60153dd70eec51092846bf2661828",
"s2fieldsofstudy": [
"Education",
"Sociology",
"Economics"
],
"extfieldsofstudy": []
} |
214395857 | pes2o/s2orc | v3-fos-license | Representing Organic Compound Oxidation in Chemical Mechanisms for Policy-Relevant Air Quality Models under Background Troposphere Conditions
: This intercomparison has taken thirteen chemical mechanisms and compared how they treat VOC oxidation and degradation and its relationship to the photochemical formation of ozone and hydroxyl radicals. Here, we have looked in some detail at the incremental responses of hydroxyl radicals to incremental additions of a range of organic compounds under conditions appropriate to the background atmosphere. Most of the time, with most organic compounds and most chemical mechanisms, incremental additions of an organic compound led to depletion of hydroxyl radical concentrations. The chemical mechanisms studied demonstrated increasingly negative incremental hydroxyl radical reactivities with increasing carbon numbers for the alkanes ethane, propane and n-butane. Hydroxyl radical incremental reactivities for the simple alkenes, ethylene and propylene, were reasonably consistent across the chemical mechanisms studied. However, this consistent representation did not extend to trans but-2-ene, where reactivity estimates spanned a range of a factor of five. Incremental reactivities were reasonably well-defined for isoprene which was encouraging in view of its importance to background tropospheric chemistry. The most serious discrepancies emerging from this study were found with the aromatics toluene and o-xylene, and with the Master Chemical Mechanism and these are discussed in some detail.
Introduction
Air quality models are employed by policy-makers to formulate emission control strategies with a view to combatting photochemical air pollution, particulate matter and acid rain in Europe, Asia and North America. Chemical mechanisms are essential components of air quality models because ozone (O 3 ) is a secondary pollutant that is not emitted into the atmosphere. Ozone is formed by photochemical reactions in the sunlit atmosphere from emitted precursors: oxides of nitrogen (NO x ) and volatile organic compounds (VOCs). Hence air quality policies and strategies, and the models that address them, have to focus on the NO x and VOC precursors. Tackling elevated ozone levels is a major policy activity because ozone is an important air pollutant that at elevated levels damages human health and vegetation and also contributes to climate change [1].
The chemical mechanisms in air quality models incorporate information on the chemical kinetics and pathways that transform the primary emitted precursor pollutants into secondary pollutants, particularly ozone and suspended particulate matter. It is essential that these chemical mechanisms faithfully represent the actual chemistry of the real world if the derived policies are to deliver the required improvements in air quality. If the chemical mechanisms contain inadequately characterised representations of important atmospheric chemistry processes, then the policy predictions may underestimate the emissions reductions required or may overstate them, causing unnecessary implementation costs. How are chemical mechanisms to be evaluated? Perhaps more importantly, how are chemical mechanisms to be chosen for particular air quality policy modelling applications from the plethora of available chemical mechanisms? Chemical mechanism selection has been a difficult issue for regulatory policy development since the early days [2]. These concerns still exist [3,4]. These difficulties are compounded by the often limited choice of chemical mechanisms available within the widely used air quality modelling systems because chemical mechanisms are not offered on a plug-and-play basis as with meteorological parameterisations, for example. Where differences in policy-relevant predictions have been found by substituting different chemical mechanisms as in, for example [5], there is then the difficulty of proving which predictions are correct.
In previous chemical mechanism intercomparison studies, the impact of different chemical mechanisms for air quality policy formulation have been investigated under North American [6] and European [7] conditions. Here the focus is more global in context, rather than regional, as policy attention has shifted to the background troposphere. This change in emphasis reflects the shift in the attention of policy-makers to address the intercontinental transport of ozone [8][9][10][11], the issue of the policy-relevant ozone background across the United States of America [12][13][14][15][16][17][18][19][20][21] the global burden of disease resulting from air pollution [22,23] and the linkage between air quality and climate change [11,[24][25][26][27][28][29].
There are several aspects of chemical mechanism development that could be compared and assessed in an intercomparison. In this study, attention is focussed on the representation of the oxidation and degradation of the organic compounds and its relationship to the photochemical formation of ozone and hydroxyl (OH) radicals. The representation of these processes presents a formidable challenge to mechanism developers because of their complexity, because of the myriad of organic compounds that need to be considered and because of the limited nature of current understanding. Mechanism developers inevitably have to rely on approximations, assumptions and in many cases, important processes have to be neglected altogether. To enable a fair intercomparison of the representation of the oxidation and degradation chemistry of organic compounds in each chemical mechanism, a common basis has to be established so that the main differences between the chemical mechanisms are highlighted and brought to the fore. Attention is given to the photochemical ozone production rates and the OH number densities in the base case and their responses to incremental additions of organic compounds.
Methodology
A zero-dimensional box model was set up to provide a framework for the intercomparison and evaluation of the thirteen chemical mechanisms. The focus of the intercomparison was on O 3 and OH radicals, together with other reactive free radical species. In the paragraphs below, the background environmental conditions underpinning the intercomparison are described. Attention is then directed to the implementation of the chemical mechanisms themselves. Finally, a description is given of the chemical development in the box model and how the mixing ratios of the box model species were constrained to the background environmental conditions.
The formulation of the box model was based on the Photochemical Trajectory Model (PTM), the details of which are given elsewhere [30]. In this application, wet and dry deposition, exchange with the free troposphere and emission processes have been switched off, leaving the complete focus on the chemical development of photochemical ozone and that of the hydroxyl (OH) radical species that drives it. Table 1 summarises the details of the thirteen chemical mechanisms studied, together with their literature references. They varied markedly in complexity, from the highly detailed and explicit Master Chemical Mechanism (MCM) v3.3.1 to the highly condensed Carbon Bond and SAPRC mechanisms. Our focus was principally on the background troposphere and the regional and global chemistry-transport models used in policy formulation and assessment on the regional and hemispheric scales. Each of the major chemical mechanisms studied here has undergone frequent updating and no attempt has been made to keep track of these. There is no implication underpinning our choice of mechanism and version that we consider them to be of more policy importance than any other. For most chemical mechanisms, there are a range of versions spanning many years of development and, again, our choice of version is completely arbitrary.
Notes to Table 1: c. CAMx mechanism 2: CBr6 gas-phase chemistry, Appendix A [32]. d. http://mcm.york.ac.uk/MCM [33]. e. Default chemistry EmChem09 [34]. f. GEOS-Chem v9-02f (accepted 07 Feb 2013): downloaded from http://wiki.seas.harvard.edu/geos-chem/index.php/updating_standard_chemistry_with_JPL_10-6#Species [35]. g. http://mcm.york.ac.uk/MCM [36]. h. MELCHIOR2 is a condensed version of MELCHIOR1 [37]. i. [38]. j. [39]. k. RACM2.5M4c [40]. l. CAMx mechanism 5: SAPRC99 gas-phase chemistry, Appendix D [32]. m. CS07A is the most highly condensed version of SAPRC-07 [41].
The mechanisms were, however, not implemented as published but were harmonised to minimise the influence of publication date which spanned four decades from RADM to CRIv2.2. The first harmonisation addressed the so-called 'inorganic chemistry'. This set of chemical reactions establishes the fast photochemical balance and involves the hydroxyl (OH), hydroperoxy (HO 2 ) and oxygen (O 1 D and O 3 P) atoms and their reactions with nitric oxide (NO), nitrogen dioxide (NO 2 ), ozone (O 3 ), water vapour, carbon monoxide (CO), hydrogen (H 2 ) and sulphur dioxide (SO 2 ). The 'inorganic chemistry' provided with each mechanism was removed and replaced with a set of 49 chemical reaction pathways and rate coefficients, together with their temperature and pressure dependences, taken from IUPAC (http://iupac.pole-ether.fr/) [42].
The second harmonisation involved the photolysis rate coefficients. The photolysis rate coefficients provided with each mechanism were replaced with a standard set taken from the MCM website (http://mcm.york.ac.uk/MCM) [33]. The final harmonisation involved the rate coefficients for the formation and decomposition of the peroxyacyl nitrates and for the reactions of the methyl peroxy (CH 3 O 2 ) radical. Again, these were removed and replaced with a standard set of rate coefficients, together with their pressure and temperature dependences taken from the MCM website (http: //mcm.york.ac.uk/MCM) [33].
It is understood that these harmonisation steps may well move the mechanisms away from the conditions and chemical regimes under which they were developed by their originators. This was considered inevitable. As a result, the performance of the mechanisms may be different from that if no changes had been made. To draw attention to the harmonisation and standardisation steps, the names of the mechanisms have been printed in italics to indicate that they have not been implemented as originally developed.
Any intercomparison of chemical mechanisms needs input data on background environmental conditions to set up an appropriate chemical regime to frame the evaluation. In this study, output has been taken from a global Lagrangian chemistry-transport model (STOCHEM-CRI) [43] and was used to provide realistic mixing ratios for a number of trace gases in the box model. These trace gases, of which there are thirty in all (see Table 2), included closed-shell ozone precursors and reaction products with atmospheric lifetimes of the order of minutes and longer. Free radical species have much shorter lifetimes and were not set up in the same way but were allowed to establish their own levels based on the time-dependent photochemical activity. The background environmental data were taken from the STOCHEM-CRI model for a grid of 108 points at 20° intervals, covering a region between 60° N and 60° S, near-ground, for 1st July, and are summarised in Table 2. Further details of the STOCHEM-CRI model are given in [43]. Not all of these species were represented in each chemical mechanism and so care had to be taken not to introduce bias into the model results with any chemical mechanism through the choice of background environmental conditions. Since some mechanisms relied heavily on lumped VOC emission surrogates, VOC coverage was limited to common VOCs and to those explicitly or near-explicitly represented in all chemical mechanisms. The background data therefore necessarily represented the minimum or lowest common denominator approach so as not to introduce bias between the highly detailed and highly reduced chemical mechanisms. In a box model, a differential equation of the form

dc_i/dt = P_i - l_i c_i    (1)

was set up for each model species, i, where c_i is the concentration of species i in the box, P_i is its production rate from emissions, chemistry and boundary conditions, and l_i is the first order loss coefficient arising from chemistry, deposition and boundary conditions. In a constrained box model, the above differential equation is modified by the addition of a flux, F_i, to the right hand side of the equation so that the rate of change of the constrained species i, as listed in Table 2, remains zero and its concentration remains constant at the constrained value, c_i*:

dc_i/dt = P_i - l_i c_i* + F_i = 0    (2)

The Gear's method automatic numerical integrator FACSIMILE [44] returns the flux, F_i, required at the end of each time step to maintain the concentration of each species at its constraint. This flux, which is diurnally varying through the time-dependent photolysis rates, is then integrated over a time period of 5 days to give the time-integrated production or loss flux (depending on its sign) for that species. The output of the constrained box model is therefore the time-integrated production or loss fluxes for each of the thirty constrained species in Table 2. Particular focus was given in this study to the time-integrated ozone production flux, P_O3.
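A toy numerical illustration of this constraint procedure for a single species is sketched below; the rate parameters and the constrained concentration are arbitrary illustrative values, a fixed Euler time step stands in for the FACSIMILE Gear integrator, and the diurnal variation of the chemical tendency is omitted for brevity.

```python
# Toy version of the constrained box model for a single species:
# dc/dt = P - l*c + F  (Eq. 2), with F chosen at every step so that c stays at c_star.
P = 2.0e6        # production rate, molecule cm^-3 s^-1 (illustrative)
l = 1.0e-4       # first-order loss coefficient, s^-1 (illustrative)
c_star = 1.0e10  # constrained concentration, molecule cm^-3 (illustrative)

dt = 60.0                      # time step, s
n_steps = 5 * 24 * 3600 // 60  # five days of 60 s steps
integrated_flux = 0.0

for _ in range(n_steps):
    tendency = P - l * c_star  # chemistry-only tendency at the constrained value
    F = -tendency              # constraint flux that forces dc/dt = 0
    integrated_flux += F * dt  # accumulate over the 5-day integration

# Sign convention: if chemistry produces the species, the constraint flux is negative,
# so -integrated_flux is the time-integrated chemical production of that species.
print(f"Time-integrated chemical production: {-integrated_flux:.3e} molecule cm^-3")
```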
For the many species without constrained values in Table 1, these species would reach some form of local instantaneous photochemical steady state and their time varying concentrations would be set by equations of the form of (1) above. Their concentrations were averaged over the 5-day time period and provided another component of the output from the constrained box model. Particular focus was given to the 5-day average hydroxyl radical number density, [OH], in this study.
Testing and Evaluating the Constrained Box Model
The constrained box model was set up with each of the thirteen reduced chemical mechanisms and each of the 108 sets of background conditions and was integrated for five days. For each set of conditions and each chemical mechanism, the P_O3 and [OH] values were noted, and these comprised the base case results for each chemical mechanism. P_O3 for MCMv3.3.1 varied from −0.5 ppt per hour to 11 ppb per hour over the 108 sets of conditions, with an average of 1.1 ppb per hour. The P_O3 values were closely similar between the different chemical mechanisms and closely tracked each other across the different sets of conditions. Scatter plots were constructed between the base case P_O3 results for each reduced chemical mechanism versus the MCMv3.3.1 results. Linear regression equations were fitted through each scatter plot, revealing excellent correlations with R2 values consistently greater than 0.99, slopes that were close to unity and intercepts that were statistically indistinguishable from zero. P_O3 values closely overlapped and average intermechanism differences from MCMv3.3.1 were within the range −0.6% to +0.4% across all 108 sets of background conditions; they are plotted in Figure 1. [OH] for MCMv3.3.1 varied from 0.3 to 3.6 × 10^6 molecule cm^-3, with an average of 1.6 × 10^6 molecule cm^-3. Again, [OH] values were closely similar between the different chemical mechanisms, although there was much more variability between the chemical mechanisms and sets of conditions than for P_O3. Scatter plots were constructed between [OH] for each chemical mechanism versus MCMv3.3.1 and linear regression equations were fitted through the scatter plots. Correlations were good, with R2 values generally greater than 0.98, and the slopes and intercepts were generally indistinguishable from unity and zero, respectively. Average intermechanism differences were generally in the range −4% to +8% (RADM results excluded), see Figure 2. This close agreement in P_O3 and [OH] between the chemical mechanisms confirmed the view that the standardisation and harmonisation steps had not introduced significant bias between the chemical mechanisms.
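The regression statistics used in this comparison can be reproduced with a few lines of code; the sketch below assumes SciPy is available and uses invented P_O3 values for the two mechanisms rather than the actual model output.

```python
import numpy as np
from scipy import stats

# Hypothetical base-case P_O3 values (ppb per hour) at a few of the 108 background
# locations, for MCMv3.3.1 and for one reduced mechanism (illustrative numbers only)
p_o3_mcm = np.array([0.10, 0.45, 1.20, 2.80, 5.60, 11.0])
p_o3_reduced = np.array([0.11, 0.44, 1.18, 2.83, 5.55, 10.9])

slope, intercept, r_value, p_value, std_err = stats.linregress(p_o3_mcm, p_o3_reduced)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, R^2 = {r_value**2:.4f}")

# Average inter-mechanism difference relative to MCMv3.3.1, in percent
mean_diff = np.mean((p_o3_reduced - p_o3_mcm) / p_o3_mcm) * 100
print(f"average difference vs. MCMv3.3.1 = {mean_diff:+.2f}%")
```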
Overview
For the purposes of this intercomparison, we consider that organic compounds exert two major impacts on tropospheric chemistry. In the first, the hydroxyl radical driven degradation of an organic compound leads to the formation of hydroperoxy (HO2) and organic peroxy (RO2) radicals, which oxidise nitric oxide (NO) to nitrogen dioxide (NO2) and hence produce ozone (O3). This impact is most important in urban areas and in the polluted atmospheric boundary layer where there are abundant sources of nitrogen oxides (NOx = NO + NO2). However, in the background troposphere, the subject of this intercomparison, it is of diminished policy importance because background tropospheric chemistry is generally NOx-limited.
In the second major impact of organic compounds on tropospheric chemistry, the hydroxyl radical driven degradation of organic compounds changes the sources and sinks of the major free radical species: hydroxyl (OH), hydroperoxy (HO2) and organic peroxy (RO2) radicals. Since background tropospheric chemistry is generally free-radical-limited, these changes may be of some significance.
Changes to the sources and sinks of the hydroxyl radical have an important impact on tropospheric chemistry because of the vast numbers of trace gases whose sinks are controlled by OH radical driven oxidation and degradation. This second major impact of organic compounds on tropospheric chemistry is therefore of immediate policy interest and is the focus of this intercomparison.
Methodology
The methodology chosen to address the impact of organic compounds on the major free radical species involved adding increments of each organic compound and following the changes in hydroxyl radical number densities ([OH]) and in the rates of oxidation of organic compounds (ROC). Experiments were performed with 100 ppt (1 ppt = 1 part in 10^12 parts of air) increments of each organic compound, from 100 ppt up to 500 ppt, at each of the 108 background locations. This methodology gave four differences (Δ): 100 ppt vs. 200 ppt, 200 ppt vs. 300 ppt and so on. For each difference, a hydroxyl radical incremental reactivity, IR_OC, was defined and estimated as the change in the 5-day mean hydroxyl radical number density divided by the corresponding change in the rate of oxidation of the added organic compound, IR_OC = Δ[OH]/ΔR_OC, in units of molecule cm^−3 per ppb hour^−1. Taking ethylene as an example, IR_C2H4 for the 100 ppt vs. 200 ppt difference was found to be −7.2 × 10^6, and this decreased to −6.9 × 10^6, −6.7 × 10^6, −6.5 × 10^6 and −6.5 × 10^6 molecule cm^−3 per ppb hour^−1 for the successive increments up to 400 ppt vs. 500 ppt. The decline from −7.2 to −6.5 × 10^6 molecule cm^−3 per ppb hour^−1 illustrated that the hydroxyl radical responses were not accurately linear in ethylene increments and that there was some non-linearity in the reacting system amounting to ±10%. Overall, it was concluded that, using the MCMv3.3.1 mechanism, hydroxyl radical number densities declined with increasing ethylene additions, with an 'initial' (that is to say, 100 ppt vs. 200 ppt) IR_C2H4 of −7.2 × 10^6 molecule cm^−3 per ppb hour^−1.
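The incremental-reactivity bookkeeping described above can be summarised in a short sketch: for each successive pair of increments, the change in the 5-day mean [OH] is divided by the change in the oxidation rate of the added compound. The numbers below are illustrative placeholders, not MCMv3.3.1 output.

```python
# Minimal sketch of the incremental-reactivity calculation assumed from the text:
# IR_OC = delta([OH]) / delta(rate of oxidation of the added organic compound),
# evaluated between successive 100 ppt increments. All values are hypothetical.
oh_by_increment = {100: 1.60e6, 200: 1.53e6, 300: 1.47e6, 400: 1.42e6, 500: 1.38e6}   # molecule cm^-3
rox_by_increment = {100: 0.010, 200: 0.020, 300: 0.030, 400: 0.040, 500: 0.050}       # ppb per hour

increments = sorted(oh_by_increment)
for lo, hi in zip(increments, increments[1:]):
    d_oh = oh_by_increment[hi] - oh_by_increment[lo]     # change in 5-day mean [OH]
    d_rox = rox_by_increment[hi] - rox_by_increment[lo]  # change in organic-compound oxidation rate
    ir = d_oh / d_rox                                    # molecule cm^-3 per (ppb per hour)
    print(f"{lo} ppt vs. {hi} ppt: IR = {ir:.2e} molecule cm^-3 per ppb hour^-1")
```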
This same methodology was then applied to all thirteen organic compounds and all thirteen chemical mechanisms. The results for all the alkanes in each chemical mechanism are presented in Table 3, man-made alkenes in Table 4, biogenic alkenes and dialkenes in Table 5 and aromatics in Table 6. A rapid overview of Tables 3-6 revealed that all the organic compounds studied gave negative initial hydroxyl radical incremental reactivities. Generally speaking, incremental reactivities were smallest for the alkane class in Table 3 and largest for the aromatics in Table 6. There were, however, significant divergences between the incremental reactivities found by the different chemical mechanisms. Understanding these divergences is the main aim of this chemical mechanism intercomparison.

Table 3. Initial^a hydroxyl radical incremental reactivities for the alkanes (ethane, propane, n-butane) for each mechanism, in units of 10^6 molecule cm^−3 per ppb hour^−1.

Table 4. Initial^a hydroxyl radical incremental reactivities for the man-made alkenes for each mechanism, in units of 10^6 molecule cm^−3 per ppb hour^−1.

Table 5. Initial^a hydroxyl radical incremental reactivities for the biogenic alkenes and alkadienes for each mechanism, in units of 10^6 molecule cm^−3 per ppb hour^−1.

Table 6. Initial^a hydroxyl radical incremental reactivities for acetylene and the aromatics (benzene, toluene, o-xylene) for each mechanism, in units of 10^6 molecule cm^−3 per ppb hour^−1.

Notes to Tables 3-6: a. 'Initial' refers to the 100 ppt vs. 200 ppt increment, see text. b. '-' indicates that the organic compound is not explicitly treated in that mechanism version.

Assessing the Impact of the Alkanes on Hydroxyl Radicals

Table 3 presents the initial hydroxyl radical incremental reactivities for the three simplest alkanes using the thirteen chemical mechanisms. Incremental reactivities were the most negative for n-butane and least negative for ethane in almost all of the mechanisms. Averages and standard deviations of the estimates are presented in Table 3. There was a spread of values for each organic compound and, generally speaking, the values for MCMv3.3.1 were close to the mechanism average. It was concluded that most of the mechanisms were able to account reasonably accurately for the impact of alkane increments on hydroxyl radical number densities. However, incremental reactivities spanned over a factor of two between CBM4 and RACM-2.
The mechanism for the hydroxyl radical-driven oxidation of ethane (C2H6) laid out in the MCMv3.3.1 involves 46 chemical species and 119 chemical and photochemical reactions, producing 21 stable product species including alkyl hydroperoxides, nitrates, alcohols, aldehydes and peroxyacyl nitrates. The complete mechanism represents a cascade system through the organic peroxy radicals, ethyl peroxy, acetyl peroxy, methyl peroxy and hydroperoxyl, after the initial attack of OH on ethane: OH + C2H6 (+ O2) → C2H5O2 + H2O. There is therefore a single reaction which is unique to ethane and a core of 118 reactions that are used in many other VOC oxidation mechanisms. Almost all of the 119 reactions have some impact on hydroxyl radical number densities because they may be sources and sinks of free radicals or convert one free radical into another. Overall, however, the oxidation of ethane is a net sink for the hydroxyl radical, as shown by the negative hydroxyl radical incremental reactivities in Table 3.
The twelve reduced or condensed mechanisms, see Table 3, appear to have condensed the detail of ethane oxidation markedly, into a handful of reactions, while retaining its essential chemistry, and all confirm that ethane oxidation is a net sink for hydroxyl radicals. In the CB05 mechanism, for example, the reaction of hydroxyl radicals with ethane (ETHA) is represented by: OH + ETHA = HO2 + 0.991 ALD2 + 0.991 XO2 + 0.009 XO2N, where the generic organic peroxy radicals XO2 and XO2N are shared by the degradation schemes of several hydrocarbons and the fractional coefficients 0.991 and 0.009 represent the net effect of the peroxy radical cascade system. In the SAPRC-99 mechanism, the oxidation of ethane is represented with a single generic peroxy radical product, RO2_R. At the other end of the reduced mechanism spectrum, the CRIv2.2 mechanism uses 29 species and 68 chemical reactions to describe ethane oxidation in a manner that is accurately consistent with that in the MCMv3.3.1.
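As an illustration of how such a lumped representation is applied in practice, the sketch below advances the CB05-style ethane reaction quoted above over one short time step, scaling each product coefficient by the amount of ETHA oxidised. The rate coefficient and concentrations are placeholders rather than evaluated CB05 values.

```python
# Minimal sketch: applying a lumped CB05-style reaction,
#   OH + ETHA -> HO2 + 0.991 ALD2 + 0.991 XO2 + 0.009 XO2N,
# as a stoichiometric update over one small time step. k_oh and the concentrations
# are illustrative placeholders; OH consumption is handled elsewhere in a full model.
k_oh = 2.4e-13                      # cm^3 molecule^-1 s^-1, placeholder rate coefficient
conc = {"OH": 1.6e6, "ETHA": 2.5e10, "HO2": 0.0, "ALD2": 0.0, "XO2": 0.0, "XO2N": 0.0}
products = {"HO2": 1.0, "ALD2": 0.991, "XO2": 0.991, "XO2N": 0.009}

dt = 60.0                           # s, one short chemistry step
flux = k_oh * conc["OH"] * conc["ETHA"] * dt   # molecules cm^-3 of ETHA oxidised this step

conc["ETHA"] -= flux                # reactant consumed
for species, coeff in products.items():
    conc[species] += coeff * flux   # fractional yields encode the net peroxy radical cascade

print({name: f"{value:.3e}" for name, value in conc.items()})
```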
The initial hydroxyl radical incremental reactivities for propane, see Table 3, were about three times more negative than those for ethane. Increments of propane therefore showed a significantly greater propensity to deplete 5-day mean hydroxyl radical number densities compared to those of ethane. This increased propensity results from the longer organic peroxy radical cascade between propyl peroxy and hydroperoxyl. The MCMv3.3.1 oxidation scheme for propane employs 90 chemical species and 256 chemical reactions, whilst that for the CRIv2.2 employs 49 and 129, respectively.
The reduced mechanisms gave a reasonably accurate account of the increased incremental hydroxyl radical reactivity for propane relative to ethane, showing that they were able to account for the significant increase in complexity in moving from ethane to propane. This increased complexity arises not only from the increased length of the peroxy radical cascade but also from the formation of relatively unreactive ketones such as acetone, compared with the highly reactive and photochemically-labile aldehydes such as acetaldehyde formed in ethane oxidation.
Initial hydroxyl radical incremental reactivities for n-butane are considerably more negative than those of propane and ethane, see Table 3. There is a noticeably increased range in the available values for n-butane compared with those for ethane, amounting to a factor of three. The MCMv3.3.1 mechanism represents the oxidation of n-butane using 184 species in 550 reactions, representing a dramatic increase in complexity over and above the oxidation of propane. This follows on from the increased number of isomeric peroxy radicals and stable products, such as ketones and hydroperoxides, and the increased length of the peroxy radical cascade system. Before moving on from the simple alkanes, we note the relative ordering of the initial hydroxyl radical incremental reactivities for n-butane, propane and ethane. It therefore appears that, in constructing mechanisms for alkane oxidation from smog chamber data or by compiling near-explicit mechanisms from individual elementary reactions, it has been possible to describe the impacts of alkanes on both ozone and hydroxyl radicals in ways that are mutually consistent. We have been able to show here that the descriptions of the impacts on photochemical ozone production rates and on 5-day mean hydroxyl radical number densities are reasonably consistent across the mechanisms. Examples will be found later where this degree of mutual consistency between photochemical ozone production rates and mean hydroxyl radical number densities is not so clearly demonstrated.
Assessing the Impact of the Alkenes on Hydroxyl Radicals
The hydroxyl radical incremental reactivities for the man-made alkenes are presented in Table 4. The incremental reactivities for ethylene and propylene (propene) are similar in almost all the mechanisms (except for CBM4 and RADM) and appear to be three times more negative compared with those for trans but-2-ene. The relative ratios of the incremental reactivities for ethylene, propylene and trans but-2-ene were found to be 1.0:1.0:0.3, compared to 1.0:3.2:7.8 for their OH rate coefficients. Since the alkenes are all highly reactive on the 5-day timescale of this study, it is not surprising that the OH rate coefficients give little indication of the relative hydroxyl radical incremental reactivities. In comparison, the relative numbers of reactions in the MCMv3.3.1 oxidation schemes were 1.0:2.1:1.5, respectively. Almost all the mechanisms indicated a strong depletion of hydroxyl radical number densities driven by additional increments of ethylene, see Table 4, except for the RADM mechanism, the oldest mechanism studied, which appears to be an outlier in this instance. The remaining mechanisms showed excellent agreement and a relatively small range about the MCMv3.3.1 value. The similarity between the incremental reactivities found for ethylene and propylene in Table 4 was somewhat surprising, as was the observation that this close similarity applied to almost all of the mechanisms studied.
There appeared to be a notably large range in the estimates of the hydroxyl radical incremental reactivities for trans but-2-ene, one of the most reactive of all organic compounds. Despite this exceptional OH-reactivity, its hydroxyl radical incremental reactivity was found to be significantly less negative than that for either ethylene or propylene, see Table 4. The range in the estimates was also significantly larger than for the other alkenes, with the MCMv3.3.1 value towards the lower end of the range of estimates.
All three alkenes are expected to generate similar first-generation reaction products: ethylene oxidation generates two formaldehyde molecules, propylene generates one molecule of formaldehyde and one of acetaldehyde, and trans but-2-ene generates two molecules of acetaldehyde. Hence the less negative incremental reactivity for trans but-2-ene, and the greater uncertainty range of its estimates, are at first sight somewhat surprising. We explain these discrepancies as symptoms of the difficulty mechanism developers have in reconciling both photochemical ozone production and free radical sources and sinks, in the face of incomplete and conflicting understanding of the elementary reactions involved in free radical production and loss in the OH-oxidation of trans but-2-ene.

Assessing the Impact of the Biogenic Alkenes and Alkadienes on Hydroxyl Radicals

Table 5 presents the hydroxyl radical incremental reactivities for the biogenic alkenes and alkadienes. The biogenic alkenes have been separated from the man-made alkenes because of the different environments in which their atmospheric reactions become of policy significance. The biogenic alkenes are emitted in largely rural environments where NOx levels are low, compared to the man-made alkenes that are emitted mainly in urban environments with associated high NOx levels. Smog chamber studies have been largely performed under high NOx conditions, and so the preparation of mechanisms for biogenic alkenes involves significantly greater extrapolation to conditions appropriate to the atmosphere. It is possible therefore that OH oxidation mechanisms for biogenic alkenes are necessarily somewhat more uncertain than those for man-made alkenes.
The hydroxyl radical incremental reactivities for isoprene in Table 5 found with the different mechanisms span a factor of two range from −4.9 for CBM-4 to −9.8 for MOZART-4, with an average of −7.8 ± 1.4. The MCMv3.3.1 value lies at the lower end of the 1-σ confidence range with the CRIv2.2 value close to the middle of the confidence range. Overall, almost all of the mechanisms gave incremental reactivities for isoprene that were more negative than the MCMv3.3.1 value, with only CBM-4 returning a less negative value.
In contrast, hydroxyl radical incremental reactivities for α-pinene were significantly less negative than those for isoprene and significantly more accurately defined as indicated by the 1-σ confidence range. For this organic compound, the difference between the MCMv3.3.1 and CRIv2.2 mechanisms was small and significant. The MOZART-4 mechanism gave the most negative reactivity whilst CB6 gave the least negative.
Hydroxyl radical incremental reactivities for β-pinene were the most negative of the three biogenic alkenes and alkadienes. However, coverage by the available mechanisms was poor and results were limited to those from the MCMv3.3.1 and CRIv2.2 mechanisms. Both mechanisms found the incremental reactivity for β-pinene to be between two and three times more negative than that for α-pinene.
Assessing the Impact of Acetylene and the Aromatics on Hydroxyl Radical Number Densities
Incremental reactivities for acetylene (ethyne) are presented in Table 6. Acetylene is generally considered unreactive because of its low reactivity with OH. Estimates of its hydroxyl radical incremental reactivity are available from six mechanisms and they demonstrate that acetylene increments have a relatively weak influence on hydroxyl radical number densities. Incremental reactivities from CB6 and RACM-2 lie within the ranges of those from the CRI and MCM mechanisms.
The incremental reactivities estimated for the aromatics in Table 6 present a complex picture which is difficult to resolve. Starting with benzene, we find that estimates span a wide range, from MELCHIOR-2, the least negative, to SAPRC-99, the most negative. The CRIv2.2 estimate was found to be significantly less negative than the mechanism average and the MCMv3.3.1 estimate significantly more negative. The RACM-2 and SAPRC-07 estimates were close to the average.
There was no agreement concerning the hydroxyl radical incremental reactivities for toluene (methylbenzene) and o-xylene (1,2-dimethylbenzene). There was a factor of three range between the CRIv2.2 and the MCMv3.3.1 estimates, which was unusual since the two mechanisms are generally close. All mechanisms gave estimates that were less negative for both toluene and o-xylene compared to the MCMv3.3.1, and many were less negative than the CRIv2.2 mechanism. The MCMv3.3.1 mechanism therefore appeared to be an outlier for both toluene and o-xylene.
Discussions and Conclusions
This intercomparison has taken twelve chemical mechanisms and compared how they treat VOC oxidation and degradation and its relationship to the photochemical formation of ozone and hydroxyl radicals. The representation of VOC chemistry poses a formidable challenge for mechanism developers because of the complexity involved and because of the sheer number of organic compounds and chemical reactions that require treatment. Mechanism developers have to rely on approximations and assumptions and, in many cases, important processes have to be neglected. Here, we have looked in some detail at the incremental responses of hydroxyl radicals to incremental additions of a range of organic compounds under conditions appropriate to the background atmosphere.
Most of the chemical mechanisms studied demonstrated increasingly negative incremental hydroxyl radical reactivities with increasing carbon numbers for the alkanes ethane, propane and n-butane. That is to say, increasing alkane oxidation rates led to increased hydroxyl radical depletion, with the depletion caused by n-butane > propane > ethane.
Hydroxyl radical incremental reactivities for the simple alkenes ethylene and propylene were reasonably consistent across the chemical mechanisms studied. However, this consistent representation did not extend to trans but-2-ene, where reactivity estimates spanned a range of a factor of five. Incremental reactivities were reasonably well-defined for isoprene, which was encouraging in view of its importance to background tropospheric chemistry.
The most serious discrepancies emerging from this study were found with the aromatics toluene and o-xylene and with the Master Chemical Mechanism. These discrepancies are in part explained by the Master Chemical Mechanism being an assembly of reported information on elementary reactions that was not designed to reproduce aromatic + NOx smog chamber data accurately. For the Master Chemical Mechanism to do this, an additional hydroxyl radical source had to be included [45]. Because the reduced mechanisms effectively included this additional hydroxyl radical source, they necessarily gave significantly less negative incremental hydroxyl radical reactivities for toluene and o-xylene in this intercomparison.
There is a general level of uncertainty in the understanding and representation of aromatic degradation chemistry in chemical mechanisms and their ability to represent photochemical ozone formation relative to hydroxyl radical depletion in smog chamber experiments. In view of the lack of agreement between the mechanisms, it is difficult to comment on the relative incremental reactivities for the aromatics in the real world. Further laboratory and smog chamber studies will be required to resolve the differences between the Master Chemical Mechanism and the condensed and reduced smog chamber mechanisms.
Funding: This intercomparison received no external funding.
"year": 2020,
"sha1": "3a5263804da37de2aef707b5267711f4eac6a504",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/atmosphere/atmosphere-11-00171/article_deploy/atmosphere-11-00171-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "444d4d7f86251abff4a7b27c0c69c968f1269bb4",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Circular RNA as Therapeutic Targets in Atherosclerosis: Are We Running in Circles?
Much attention has been paid lately to harnessing the diagnostic and therapeutic potential of non-coding circular ribonucleic acids (circRNAs) and micro-RNAs (miRNAs) for the prevention and treatment of cardiovascular diseases. The genetic environment that contributes to atherosclerosis pathophysiology is immensely complex. Any potential therapeutic application of circRNAs must be assessed for risks, benefits, and off-target effects in both the short and long term. A search of the online PubMed database for publications related to circRNA and atherosclerosis from 2016 to 2022 was conducted. These studies were reviewed for their design, including methods for developing atherosclerosis and the effects of the corresponding atherosclerotic environment on circRNA expression. Investigated mechanisms were recorded, including associated miRNAs, genes, and ultimate effects on cell mechanics and inflammatory markers. The most investigated circRNAs were then further analyzed for redundant, disparate, and/or contradictory findings. Many disparate, opposing, and contradictory effects were observed across experiments. These include levels of the expression of a particular circRNA in atherosclerotic environments, attempted ascertainment of the in toto effects of circRNA or miRNA silencing on atherosclerosis progression, and off-target, cell-specific, and disease-specific effects. The high potential for detrimental and unpredictable off-target effects downstream of circRNA manipulation will likely render the practice of therapeutic targeting of circRNA or miRNA molecules not only complicated but perilous.
Introduction
Much attention has been paid lately to harnessing the diagnostic and therapeutic potential of non-coding circular ribonucleic acids (circRNAs) and micro-RNAs (miRNAs) for the prevention and treatment of cardiovascular diseases. The advent of next-generation sequencing and the development of bioinformatics databases have facilitated the rapid expansion of circRNA research [1]. One recent review by Wang et al. [2] suggests that, although further clinical trials and basic scientific research are needed, targeting cardiovascular disease pathways via circRNA-mediated mechanisms may prove to be an efficacious strategy for preventing and diagnosing cardiovascular diseases [2]. A large body of ongoing research involves the roles of circRNA molecules in atherosclerosis, the underlying condition contributing to most cardiovascular diseases and the leading cause of death in the world.
The desire to attenuate the progression of atherosclerosis at the genetic level is appealing. Gene regulation likely plays a large role in the pathogenesis of atherosclerosis [3]. By understanding and manipulating the genetic pathways involved in atherosclerosis, we may be able to develop novel therapeutic targets, potentially including drugs for primary prevention. This would be a momentous development; given the recent demotion of
Results
This review identified 140 studies conducted between 2016 and 2022 (Table S1). The majority employed in vitro models of human vascular smooth muscle cells (VSMCs) or endothelial cells (ECs). Atherosclerosis was simulated by stimulating cells with known pathogenic triggers, with oxidized low-density lipoprotein (ox-LDL) being the most common. Other triggers include platelet-derived growth factor-BB (PDGF-BB), high glucose, or high-fat diets in in vivo models. The effects of these pathogenic states on circRNA, miRNA, and their associated gene expression were then analyzed, as were their effects on cell proliferation, migration, apoptosis, inflammation, and oxidative stress. The interactions of these genetic molecules and the effects of their expression on cell behavior and inflammation were elucidated via various assays, especially immunohistochemical staining techniques, after silencing the molecules of the presumed pathway. Only 19 (13.6%) studies performed an ancillary in vivo mouse/rat model. More commonly, the particular circRNA or miRNA level under investigation was measured in the serum of human subjects or mice with atherosclerosis to corroborate the experimental findings in vitro.
Of the 140 studies reviewed, 95 unique circRNA molecules were identified. The majority (76.8%) were up-regulated in in vitro atherosclerotic environments and/or the serum of patients with atherosclerosis. Of these, 79% correlated with mechanisms known to have pro-atherosclerotic effects in vivo, while 12.3% were associated with protective mechanisms. Of the 25.3% of circRNAs found to be downregulated, overexpression of these molecules was more commonly associated with mitigation (69.6%) than propagation (4.3%) of atherosclerosis-associated mechanisms. One circRNA molecule, circHIPK3, was shown to be both up-regulated and downregulated across different studies [37,61,82,143]. Of note, at least 9.6% of circRNAs that were demonstrated to have increased expression had equivocal outcomes, i.e., the studies were unable to discern whether the overall effects observed were pathogenic or protective. This percentage was even higher (26.1%) when analyzing only the molecules that were downregulated by atherosclerotic stimuli in vitro (Figure 1).
Figure 1.
The association between the overall effects of circRNA expression on the process of atherosclerosis. Up-regulation of circRNAs in atherosclerosis was found to be most associated with harmful (78%) effects, while downregulation was most associated with protective mechanisms (70%). Equivocal effects were demonstrated in 10% and 26% of studies in which circRNAs were observed to be up-regulated and downregulated, respectively.
The overwhelming majority of studies showed that circRNA molecules exerted their effects via sponging a cognate miRNA (Table S1). In turn, this led to increased expression of a particular gene and subsequent effects on cell mechanics and inflammatory pathways. Only 10 of the 140 studies demonstrated that circRNAs either executed their effects through a mechanism independent of miRNA sponging or failed to identify an associated miRNA [6,9,22,25,29,32,51,52,66,106]. Additionally, of the 133 miRNA molecules identified, 102 were unique, with the corollary being that 23.3% of miRNA molecules were found to interact downstream of multiple circRNAs. To highlight the redundancies and issues with replicability of these studies, the experimental findings of the most investigated molecules are discussed below, with attention paid to opposing or equivocal findings and mechanistic overlap.
CircANRIL
One of the first circRNAs discovered to play a role in atherosclerosis was circANRIL (Antisense non-coding RNA in the INK4 locus). It is located on chromosome 9p21, variants of which are known genetic risk factors for developing cardiovascular disease [146]. In 2010, Burd et al. [147] demonstrated that homozygous individuals for the atherosclerotic risk allele showed decreased expression of circANRIL and the coding INK4/ARF transcripts [147]. CircANRIL was later found to impair ribosome biogenesis, leading to activation of p53, which then resulted in decreased proliferation and increased apoptosis by directly binding to PES1 (pescadillo ribosomal biogenesis factor 1), an essential 60S-preribosomal assembly factor in VSMCs, ECs, and adventitial fibroblasts [6]. This represents one of the rare instances discovered in which circRNAs modulate atherosclerotic events via transcriptional regulation rather than indirectly through miRNA sponging. However, whether inhibition of cellular proliferation or apoptosis ultimately has positive or negative effects on atherosclerosis development depends on various factors that are difficult to determine with certainty [6].
Other studies attempted to corroborate the effects of circANRIL expression in an in vivo mouse model of atherosclerosis. Song et al. [9] showed that circANRIL overexpression was associated with the formation of atherosclerotic plaques and thrombi in rats that were fed a high-fat diet and injected with a large dose of vitamin D3 to promote arterial calcification [9]. The study further supported the findings of increased rates of apoptosis demonstrated by Holdt et al. [6], but also showed higher levels of total cholesterol, triglycerides, LDL, and several pro-atherosclerotic and inflammatory markers, including interleukin (IL)-1, IL-6, matrix metallopeptidase-9 (MMP-9), C-reactive protein (CRP), BCL2 associated X (Bax), and caspase-3 [9]. Another investigation demonstrated that inhibition of circANRIL expression in a similar in vivo rat model reduced markers of vascular endothelial injury, oxidative stress, and inflammation [148]. These early studies suggested that in vivo models analyzing plaque development and inflammatory markers may produce reliable results to establish a causal link between circRNA expression and atherosclerosis development.
Circ_USP36/Circ_0003204
This review identified 11 different studies evaluating the role of circ_USP36 (ubiquitin specific peptidase 36)/circ_0003204 in the pathogenesis of atherosclerosis, establishing it as the most investigated circRNA molecule. All experiments were conducted in in vitro models of human VSMCs and ECs [29,42,55,72,74,75,78,81,92,98,139]. Liu et al. [29] showed that hsa_circ_0003204 was aberrantly overexpressed in ox-LDL-induced human umbilical vein ECs (HUVECs), while knockdown of this molecule promoted proliferation, migration, and invasion but reduced apoptosis [29]. Reduced expression of circ_0003204 also significantly correlated with lower E-cadherin but increased activity of N-cadherin and vimentin in ox-LDL-induced HUVECs, findings that are associated with reduced cell mobility and plaque stability, respectively [149,150]. Thus, the knockdown of circ_0003204 was associated with both increased (harmful) and decreased (protective) cellular proliferation in this study. No associated miRNA was identified in this particular study.
Several other in vitro experiments demonstrated suppressed cell viability and promotion of apoptosis, inflammation, oxidative stress, and cell migration and invasion associated with increased expression of circ_USP36 and subsequent miRNA sponging in ox-LDL-stimulated ECs. Specifically, these effects were attributed to circ_USP36/circ_0003204 inhibition of miR-20a-5p, miR-98-5p, and miR-188-3p leading to increased ROCK2, vascular cell adhesion protein-1 (VCAM-1), and TRP6 gene expression, respectively [72,74,75,81,92]. On the other hand, Huang et al. [55] observed that, through increased expression of WNT4 from sponging of miR-637, circ_USP36 overexpression was associated with suppressed proliferation and migration of human aortic ECs treated with ox-LDL in vitro [55]. While largely agreeing that circ_USP36/circ_0003204 promotes inflammation and oxidative stress, a recent study also found decreased tube formation in HUVECs stimulated with ox-LDL through sponging of miR-491-5p and increased expression of intercellular adhesion molecule-1 (ICAM-1) [139]. Taken together, these studies present contradictory results regarding the effects of circ_USP36/circ_0003204 on VSMC and EC proliferation, migration, and invasion, suggesting that their regulation is complex and their clinical significance challenging to capture.
Another study showed that the expression of circ_USP36 was also increased in ox-LDL-treated human umbilical vein VSMCs, where it acted via sponging of miR-182-5p [42]. This led to increased activity of the KLF5 gene, which induced VSMC proliferation and metastasis. Circ_USP36 knockdown inhibited this proliferation and metastasis by up-regulating miR-182-5p [42]. However, lower levels of circMTO1 in the serum of humans with atherosclerosis coincided with augmentation of miR-182-5p, increased proliferation, and reduced apoptosis in an in vitro analysis of ox-LDL-stimulated VSMCs. Overexpression of circMTO1 led to less inhibition of miR-182-5p and subsequently greater activation of the RASA1 gene, reduced proliferation, and increased apoptosis of VSMCs [57]. Similarly, while lower levels of circ_0065149 were observed in a model of ox-LDL-treated human umbilical vein ECs in vitro, overexpression was associated with miR-330-5p sponging and associated effects of increased cell viability, proliferation, and migration, but reduced apoptosis and inflammation [62]. These outcomes are opposed to those of increased inflammation with miR-330-5p sponging observed by Su et al. [78]. These studies demonstrate that different circRNA molecules can exhibit both protective and detrimental effects on the development of atherosclerosis via sponging of the same miRNA. This suggests that targeting a specific circRNA for therapeutic purposes could possibly result in unintended pathogenic consequences in opposition to the objective of such manipulation (Figure 2).
Figure 2.
The role of circ_0003204/USP36 in the pathogenesis of atherosclerosis. Sponging of miRNAs by up-regulated circ_0003204/USP36 has been shown to lead to (1) promotion of growth, proliferation, migration, apoptosis, inflammation, and oxidative stress of ECs [72,74,75], but also (2) attenuation of proliferation and migration of ECs, known to be protective from further intimal hyperplasia [55]. Furthermore, some miRNAs are inhibited by multiple circRNAs, the effects of which have (3) and (4) contradictory outcomes on cell growth, proliferation, migration, and inflammation [29,42,78,81,92,98,139]. These opposing effects make it difficult to predict the overall effects of targeting a specific circRNA or miRNA for therapeutic purposes.
CircCHFR
Six in vitro experiments examined circCHFR [21,43,85,112,117,128]. All studies showed consistent results of upregulation of circCHFR in atherosclerotic environments simulated by treating cells with ox-LDL or PDGF-BB. Subsequent miRNA sponging and overexpression of various genes were further associated with pro-atherosclerotic mechanisms. In one model, sponging of miR-370 led to increased FOXO1/Cyclin D1 expression, facilitating VSMC proliferation and migration [21]. Another in vitro study linked these findings with increased markers of inflammation via miR-214-3p inhibition and increased Wnt4 expression [43]. Increased circCHFR activity was also associated with the augmentation of apoptosis and proinflammatory cytokine secretion. Reciprocally, silencing of circCHFR increased cell survival and reduced apoptosis in ECs [112]. When analyzed at the level of circRNA expression alone, these studies appear to show that up-regulation of circCHFR in models of atherosclerosis consistently and reliably leads to pathogenic progression.
However, miR-370 has been found to be regulated by other circRNA molecules with opposing downstream consequences. For example, sponging of miR-370 by circ-BANP has been associated with reduced proliferation and migration of HUVECs, the opposite of that found through the interaction of circCHFR and miR-370 [21,45]. Overexpression of circ_0124644 leading to inhibition of miR-370 similarly had equivocal outcomes on atherosclerosis progression by inhibiting cell viability, proliferation, and angiogenesis but promoting apoptosis and inflammation [121]. Silencing of miR-370 has also been associated with sinus node function recovery in patients with heart failure [151]. These studies suggest contradictory effects via similar mechanisms of miR-370 inhibition (Figure 3).
Figure 3. The role of circCHFR in the pathogenesis of atherosclerosis. Sponging of several miRNAs by circCHFR has been shown to lead to (1) increased cellular proliferation, migration, invasion, and inflammation and reduced cell cycle survival, all mechanisms known to contribute to atherosclerosis development [43,85,128]. However, circCHFR has also been demonstrated to (2) reduce the expression of miR-370 [21], the inhibition of which has also been shown to have opposing effects compared to circCHFR via sponging by (3) circ-BANP and (4) circ_0124644 [45,121]. (5) Disinhibition of miR-370 also likely adversely affects sinus node function in patients with heart failure [151].
Thus, therapeutic interventions aimed at reducing circCHFR expression may lead to conflicting results via downstream gene regulation and disparate effects on VSMC and EC proliferation and migration. Even if these effects ultimately reduce atherosclerosis progression, disinhibition of miR-370 may promote life-threatening arrhythmias in patients with heart failure, suggesting that cardiovascular pathologies other than atherosclerosis may also be negatively affected [151]. Furthermore, since these studies were not corroborated in in vivo models, it is hard to determine the ultimate effects of the mechanisms elucidated on the process of atherosclerosis. While most studies also observed that circCHFR was up-regulated in the serum of patients with atherosclerosis, this up-regulation may lead to the promotion of some genes that foster protective effects against atherosclerosis.
CircHIPK3
Four in vitro models studied the effects of circHIPK3 in atherosclerosis pathogenesis [37,61,82,143]. Two of these investigations showed higher levels of circHIPK3 in pro-atherosclerotic in vitro environments, while two demonstrated attenuated activity (Table S1). Wang et al. [82] showed that increased levels of circHIPK3 in mouse aortic EC-secreted exosomes in response to high glucose correlated with greater proliferation of VSMCs, inhibition of their apoptosis, increased VCAM-1 expression, and uptake of glucose-rich exosomes by VSMCs. This occurred via sponging of miR-106a-5p and amplified expression of FOXO1 and VCAM-1 [82]. Similar effects were seen in human aortic and umbilical artery VSMCs through a mechanism involving inhibition of miR-637, leading to increased expression of cyclin-dependent kinase 6 (CDK6) [61]. In opposition to these findings, sponging of miR-637 by circ_0002194 correlated with reduced angiogenesis and increased apoptosis rates of ox-LDL-treated vascular ECs [122].
Zhang W-B et al. [143] found lower levels of circHIPK3 in the serum and tissues of patients with atherosclerosis. This was associated with osteogenic and chondrogenic differentiation and increased cell mineralization and calcium content in VSMCs in vitro. In fact, overexpression of circHIPK3 led to sponging of miR-106a-5p and subsequent activation of the MFN2 gene, which inhibited osteogenic and chondrogenic differentiation, ultimately leading to less calcium accumulation in VSMCs [143]. In this case, miR-106a-5p sponging had beneficial effects, which contradicts the findings that miR-106a-5p inhibition facilitated pathogenic proliferation and migration of VSMCs [82]. Another study showed that downregulation of circHIPK3 led to disinhibition of miR-190b, decreased activity of the ATG7 signal pathway, and subsequently lower rates of autophagy and higher rates of lipid accumulation in both mice in vivo and ox-LDL-treated human umbilical vein ECs in vitro [37]. On the other hand, overexpression of circHIPK3 resulted in sponging of miR-190b and increased activity of the ATG7 pathway, which correlated with reduced lipid accumulation and promoted autophagy (Figure 4).
Analysis of the results of the circHIPK3 investigations illustrates three major points of contention: 1. In similar proxies for atherosclerotic environments, crcHIPK3 expression was found to be both increased and decreased. 2. Analysis downstream of circHIPK3 expression, i.e., of miR-637, showed opposing effects when it was inhibited by other circRNA molecules, i.e., circ_0002194. 3. The ultimate effects of overexpression of circHIPK3 were found to be both pathogenic (increased proliferation, apoptosis, and glucose uptake) and protective (reduced angiogenesis, apoptosis, osteogenic differentiation, and lipid accumulation) in regard to atherosclerosis development and progression. It is possible disparate effects were seen due to different tissue types and methods of atherosclerosis stimulation in vitro.
However, similar results would have been expected regardless of the method used to simulate atherosclerosis. Furthermore, the possibility that intervening in one tissue type to halt atherosclerosis progression could promote atherosclerosis in a different tissue type is alarming. Alternatively, these findings could point to issues with the general replicability of these studies in vitro. Regardless, they underscore the complexity of the genetic milieu of atherosclerosis and the likely unintended negative consequences of circRNA manipulation.
Discussion
Several issues have been raised with using circRNAs as potential therapeutic targets to modify disease processes. Some systemic problems include the toxicity of nanoparticles, mis-spliced byproducts, and synthetic circRNA immunogenicity [152]. Highlighted by this review, with respect to in vitro models evaluating atherosclerosis, are also questions of study design, interpretation of overall results, contradictory effects caused by off-target RNA silencing, and cell-specific and disease-specific effects. No two studies that used a supplemental in vivo model studied the same circRNA. Thus, we cannot comment on the redundancy or reproducibility of the effects of a particular circRNA within in vivo models.
Study Design and Issues with Interpretation
As previously discussed, the majority of the circRNA molecules that were studied were upregulated in the serum of subjects with atherosclerosis and in in vitro models of atherosclerosis induced via established pathogenic triggers. This is likely because circRNA molecules with increased levels are easier to identify than ones with decreased expression, which represents a kind of ascertainment bias in identifying potential circRNA targets for investigation. The majority of experiments, which found increased expression of circRNA molecules in states of atherosclerosis, also found an association with mechanisms known to promote atherosclerosis in vivo (most commonly proliferation and apoptosis of ECs or VSMCs), while silencing the circRNA under investigation and promoting its cognate miRNA resulted in opposite effects. However, whether angiogenesis and apoptosis in atherosclerosis are beneficial or harmful depends on their effects on intimal hyperplasia, plaque stability, plaque content, phenotypic switching, and the stage of atherosclerosis [153,154]. Thus, it is difficult to determine the clinical significance of atherosclerotic mechanisms in vitro.
Even when conceding the benefit of the doubt that a particular mechanism known to promote atherosclerosis in vivo, e.g., proliferation and migration of ECs and VSMCs, has similar effects in vitro, a significant percentage of studies yielded equivocal results. Often, seemingly opposing effects were observed in response to upregulation or downregulation of a particular circRNA in vitro. For example, Chen et al. [45] demonstrated that while circ-BANP was associated with apoptosis and inflammation and promoted cell viability, it also correlated with increased migration, invasion, and tube formation of ECs [45]. Antagonistic effects on proliferation, migration, and promotion of calcification of VSMCs by circHIPK3 sponging of miR-106a-5p were also observed (Figure 4). Whether these mechanisms, which have seemingly opposite effects on atherosclerosis development, lead to the progression or attenuation of atherosclerosis in toto is difficult to ascertain via in vitro analyses alone. EC dysfunction present in the early stages of atherosclerosis is associated with chronic inflammatory changes in the arteries [155]. Alternatively, the results could have been inaccurate, pointing to potential issues with the general replicability of the results of these study designs.
Ancillary in vivo studies often investigate different pathogenic processes or stages of atherosclerosis and therefore do not effectively corroborate the in vitro findings. For example, one study that evaluated the effects of circGSE1 expression on EC senescence also looked at the effects of angiogenesis on limb ischemia in mice via femoral artery ligation [125]. Few studies have analyzed the in vivo formation of atherosclerosis. Song et al. [9] showed that circANRIL overexpression was associated with the formation of atherosclerotic plaques and thrombi in rats that were fed a high-fat diet and were injected with a large dose of vitamin D3 (to promote arterial calcification) [9]. However, as extensively demonstrated in this review, circRNA molecule expression can correlate with either promotion or attenuation of atherosclerosis and therefore does not establish a causative relationship. Min et al. [123] showed increased expression of ciPVT1 in senescent umbilical vein and coronary artery ECs, while silencing ciPVT1 led to delayed senescence, promoted proliferation, and increased the angiogenic activity of ECs. A correlative in vivo mouse study using a plug assay found that plugs containing HUVECs in which ciPVT1 had been silenced showed less new vessel formation macroscopically [123]. This study shows the potential of in vivo findings to corroborate those of in vitro analyses of circRNA interactions and their effects on atherosclerosis [123]. However, such a model was rarely used in these investigations.
Off-Target RNA Silencing
This review identified a significant overlap of circRNA and miRNA interactions, resulting in disparate and opposing effects on mechanisms associated with the development of atherosclerosis. As seen in Figures 2-4, the most investigated circRNA molecules, circ_USP36/circ_0003204, circCHFR, and circHIPK3, were found to have disparate, opposing, and often contradictory results across studies. While the majority of miRNAs inhibited by circ_USP36/circ_0003204 led to the regulation of genes that promoted pathogenic mechanisms such as increased proliferation, migration of cells, and inflammation, the sponging of others was found to be correlated with the opposite effects (Figure 2). Similar findings of harmful, protective, and equivocal effects on atherosclerosis development were seen when analyzing the mechanisms of circCHFR (Figure 3) and circHIPK3 (Figure 4) across studies. Sponging of the same miRNA by different circRNAs also had contradictory effects on cell proliferation, apoptosis, and inflammation. For example, miR-182-5p was demonstrated to be affected downstream of four different circRNA molecules: circ_USP36, circMTO1, hsa_circ_0004831, and circ_0050486 [42,57,77,142]. While sponging of miR-182-5p by circ_USP36 led to increased activity of the KLF5 gene, which induced VSMC proliferation and metastasis, inhibition of miR-182-5p via overexpression of circMTO1 and subsequent RASA1 gene activation had the opposite effects of decreased VSMC proliferation and decreased apoptosis (Table S1). As another example, silencing of overexpressed circCHFR molecules led to disinhibition of miR-370, allowing it to prevent expression of FOXO1/cyclin D1 genes, resulting in decreased proliferation and migration of VSMCs [21]. Circ-BANP silencing, which similarly resulted in increased levels of miR-370, however, was ultimately associated with the opposite finding: increased EC migration, invasion, and tube formation [45]. Similar results were seen when looking at the different effects of circHIPK3 and circ_0002194 on the sponging of miR-637 (Figure 4). All these cases underline the ubiquitous off-target downstream and lateral effects of targeting a particular circRNA or miRNA for therapeutic purposes.
Differential Effects across Cell Types and Diseases
There were also significant cell-specific effects observed on the process of atherosclerosis development. In vitro, ox-LDL-treated HUVECs showed reduced expression of circHIPK3, while its overexpression correlated with reduced lipid accumulation and the promotion of autophagy [37]. In contrast, increased proliferation and reduced apoptosis of VSMCs, most likely a pathogenic mechanism, were observed in conjunction with increased circHIPK3 expression in aortic and umbilical artery VSMCs in vitro [61]. In response to a high-glucose environment, mouse aortic EC-secreted exosomes also promoted proliferation and inhibited apoptosis of VSMCs while promoting VCAM-1 expression and uptake of exosomes by VSMCs [82]. In humans, circHIPK3 was downregulated in tissues and blood samples of atherosclerosis patients and in VSMCs undergoing osteogenic and chondrogenic differentiation. Concordantly, overexpression of circHIPK3 was associated with the athero-protective effects of inhibited osteogenic and chondrogenic differentiation and reduced cell mineralization and calcium content [143].
Thus, increased expression of circHIPK3 was associated with both protective and detrimental mechanisms in the context of atherosclerosis development. The effects likely depend on the particular cell types tested, e.g., VSMCs or ECs, the atherosclerosis-inducing agents, and the overall milieu. Such opposing effects in different cells further complicate the selection of circRNA molecules such as circHIPK3. In this particular case, these studies suggest that silencing of circHIPK3 would lead to the negative effects of increased lipid accumulation in ECs and calcification of VSMCs but the positive effects of reduced proliferation and increased apoptosis of VSMCs, as well as decreased VCAM-1 expression and VSMC adhesion, indicating contrasting effects across different cell types.
Furthermore, it is likely that targeting a specific disease process, such as atherosclerosis in this case, may have unintended effects on other cardiovascular pathologies. While sponging of miR-370 by circCHFR led to increased FOXO1/Cyclin D activity which enhanced VSMC proliferation and migration, inhibition of miR-370 was also associated with beneficial effects on sinus node function in an in vitro mouse model of heart failure [21,151]. Thus, therapy aimed at silencing circCHFR to mitigate atherosclerosis development would likely lead to increased miR-370 expression, which may have pathogenic effects on sinus rhythm function in patients with heart failure. In addition, there are numerous extra-cardiac disease processes that may be affected by such genetic manipulation, the effects of which are hard to account for. For example, miR-370 has also been shown to play a regulatory role in various cancers, including cervical, ovarian, lung, gastric, and hepatocellular, among many others [156][157][158][159][160].
In summary, silencing of a particular circRNA, leading to disinhibition of its related miRNA, could produce the intended effect of halting atherosclerosis. However, several other pathways would need to be accounted for to mitigate the unintended consequences of amplifying atherosclerosis or the progression of other diseases. These include the disparate, opposing, and contradictory downstream and lateral effects of silencing a particular circRNA in different tissue types and in varying disease processes. Any risk-benefit analysis aimed at evaluating the adoption of such a therapeutic approach would ultimately be limited by the sheer scope of genetic interactions and their effects, as well as by how incompletely those mechanisms and effects are currently known.
Conclusions
This review represents the largest and most systematic review of studies evaluating the role of circRNA in the pathogenesis of atherosclerosis. With a focus on the most studied molecules, many disparate, opposing, and contradictory effects were observed across experiments. These include the levels of expression of a particular circRNA in atherosclerotic environments, the attempted ascertainment of the in toto effects of circRNA or miRNA silencing on atherosclerosis progression, and off-target, cell-specific, and disease-specific effects. Nevertheless, many of these studies conclude that a specific circular RNA regulates atherosclerosis. This review shows that this regulation is a complex orchestration more akin to directing traffic with multiple moving vehicles and intersections than to a linear assembly line. Given the high potential for detrimental and unpredictable off-target effects downstream of circRNA manipulation, therapeutic targeting of circRNA or miRNA molecules appears too complex at the current level of knowledge. Future studies need to pay attention to the mechanisms being examined and manipulated in the context of the stage of atherosclerosis, cell type, and downstream and lateral effects of circRNA manipulation. In this regard, we need more correlative in vivo studies designed to investigate the role of circRNAs in atherosclerosis development and progression.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12134446/s1, Table S1: A summary of the mechanisms of the 140 studies investigating the role of circular RNA in the pathogenesis of atherosclerosis from the years 2016 to 2022, as identified in PubMed. | 2023-07-12T06:03:31.083Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "03e0d94ba27e088fc75942242b2202922e65f91c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/13/4446/pdf?version=1688365692",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1d95bffd8a8ad329beb1ec91c6260b745551542",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5342021 | pes2o/s2orc | v3-fos-license | The Clinical Efficacy of Prostate Cancer Screening in Worldwide and Iran: Narrative Review
Prostate cancer (CaP) imposes a great health burden on men, and its incidence has increased significantly in recent years. Screening programs for CaP remain controversial, and recent large trials have failed to demonstrate a significant reduction in prostate cancer-specific mortality and all-cause mortality. An entire body of data obtained from worldwide studies of CaP screening is required to better inform health policy decisions and patient decision-making. In the current review, the clinical efficacy of CaP screening programs is discussed for numerous parts of the world, such as the US, Europe, and Asia, to provide an updated screening recommendation. Finally, we discuss the status of CaP screening in Iran and update the screening recommendation for that country.
Introduction
As one of the most prevalent forms of cancer in men, prostate cancer (CaP) is a serious health threat worldwide. According to the latest estimates of global cancer incidence, CaP is the third most common cancer in men and the sixth most common in the world (by number of new cases). Nearly 10% of all cancers in men occur in North America, Europe, and some parts of Africa, with about 500,000 new cases annually [1,2]. Some reports indicate that its malignant form is the second leading cause of death in the world. The risk of its occurrence is increased by factors such as black race, a positive family history, and prostatic intraepithelial neoplasia found on previous biopsies [3].
A gradual increase in CaP incidence and mortality is being witnessed in the world's population. CaP incidence shows high geographical variability. North America and northern and western European countries have shown higher incidence rates of the disease than Asian countries, while South America and other parts of Europe have shown intermediate incidence rates [4]. CaP occurs less frequently in Japanese and Chinese men [5], whereas it is reckoned to be the third most common cancer among men in Iran. Such differences seem to be related to ethnic traits [6].
Although it has a high incidence and prevalence, CaP progresses from early to advanced disease over a longer time than other malignant types of cancer, showing a rather slow growth rate [7]. For this reason, Lamb et al [8] have sought a reliable way to detect the disease at an early stage, so that potentially life-saving treatment can be given promptly, with the two goals of alleviating the significant morbidity associated with advanced prostate disease and reducing its mortality [9].
Cancer is diagnosed with higher probability in developed than in developing countries. Owing to early detection through screening programs, the associated mortality has been reported to be lower in developed countries. The higher rates of morbidity and mortality in developing countries are due to cancer being detected at late stages and older ages, showing that screening programs are important because they reduce disease and mortality and improve quality of life. Studies indicate that early detection via CaP screening can probably save lives. Thus, it is highly important to use screening methods that detect cancer at curable stages.
The screening options for CaP include digital rectal examination (DRE) and the blood test for prostate-specific antigen (PSA). For men aged 50 - 69 years, combined PSA and DRE screening for CaP has an overall cost of $3,574 - 4,627 per year of life saved. For PSA alone, the corresponding figures for men aged 50 - 70 years are $3,822 - 4,956 [10].
National health organizations have not offered uniform recommendations for current CaP screening [11]. Recent guidelines and recommendations reflect controversial attitudes towards using DRE and PSA tests for the early detection of CaP [12]. Yet, an entire body of data obtained from studies of CaP screening worldwide is required to provide individual patients and their clinicians with the information needed for informed decision-making.
Screening CaP in the US and Canada
Although there is no evidence from large randomized trials of a net benefit in the US, most men over 50 years old were found to have undergone a PSA test [13]. Moreover, 95% of male urologists and 78% of primary care physicians aged 50 years or more were found to have had a PSA test themselves [14]. Indeed, within 5 years of the introduction of PSA testing in 1992, death rates due to CaP were declining by nearly 4% per year in the US [15].
Since the largest randomized controlled screening trials of CaP in the US yielded conflicting results [16], no consensus was reached on the net benefit of its early detection. Thus, no final evidence for or against screening as a method of reducing CaP mortality has been achieved in the US. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial did not show the expected success; in fact, no unequivocal benefit of PSA screening was found in this large study [17]. During a median 11-year follow-up, combined screening with PSA testing and DRE in the PLCO trial produced no mortality benefit, although the follow-up may have been too short to provide reliable mortality data. Moreover, the rather low number of CaP mortality end-points yielded a wide confidence interval, regardless of the non-significant effect of the PLCO trial assessed so far. Possible explanations for the negative results are high rates of pre-trial screening in the PLCO population and contamination of the control group [18].
According to the current guidelines for CaP screening in the US, no consensus has been reached on the best trade-off between harms and benefits. In its updated 2012 recommendation against PSA screening for CaP (grade D recommendation), the US Preventive Services Task Force (USPSTF) concluded with moderate certainty that the benefits of such screening do not outweigh the harms [19]. In fact, the USPSTF found insufficient evidence to evaluate the risks and benefits of screening in men younger than 75 years. All groups emphasize informed patient decision-making, weighing the benefit of the increasing number of men diagnosed with early non-metastatic disease during screening against the clinical over-diagnosis and overtreatment of insignificant cancers. The USPSTF recommends providing an informed choice for patients when physicians offer PSA screening, since some uncertainties may be associated with it [20]. The updated CaP screening guideline released by the American Urological Association (AUA) in 2013 recommends no screening for men younger than 40 years, no routine screening for those aged between 40 and 54 years at average risk, and no screening for those older than 70 years or with a life expectancy of less than 10 - 15 years. As noted by the AUA, individual decision-making should be considered for higher-risk men aged between 40 and 54 years, while screening may benefit men over 70 years who are in excellent health. Furthermore, shared decision-making for PSA screening in men aged between 55 and 69 years is strongly recommended by the AUA [21].
In another study of 1,067 US counties by Howrey et al, the PSA testing rate was significantly related to both the CaP treatment rate and CaP mortality in men (P < 0.001 for both), but not to mortality from other causes. At the county level, PSA testing was associated with a reduction in CaP-related mortality, while the number of over-diagnosed and over-treated men increased significantly [22].
The Canadian Urological Association (CUA) has recently published guidelines for CaP screening with PSA, recommending that screening begin at age 50 for all men with at least 10 years of life expectancy and be repeated every 1 - 2 years. Starting screening at age 40 is recommended for "high-risk" men. Towards Optimized Practice (TOP) has likewise suggested that PSA testing for men commence at age 50. In another randomized study carried out in the area of Quebec City, the relative risk of mortality from CaP in a screened population was reported to be reduced by 67% [23]. Nevertheless, the study was methodologically criticized [23,24], and a re-analysis is currently being pursued.
Notably, the harms of screening ranged from minor to major and varied with the duration of screening. Short-term anxiety, together with bruising and bleeding, was a common minor harm of screening, while blood loss, infection, pneumonia, erectile dysfunction, and incontinence were common major harms caused by over-diagnosis and overtreatment. PSA screening can lead to false-positive results and subsequent over-diagnosis. Biopsies guided by transrectal ultrasound (TRUS) can result in adverse events such as bleeding, pain, and infection. The studies provided no detailed or comprehensive assessment of the effects of screening on quality of life or resource utilization [25].
Nonetheless, in light of all the above considerations, CaP screening should not be dismissed as entirely without benefit. Rather, decisions about screening can be made after the patient and his clinician are informed and his risk factors have been weighed.
Screening CaP in Europe
The ultimate evidence for or against CaP screening as an approach to reduce its mortality was expected to come from the European Randomized Study of Screening for Prostate Cancer (ERSPC), a multicenter trial in the Netherlands, Switzerland, Sweden, Belgium, Finland, Italy, and Spain. In this randomized controlled trial (RCT), 82,816 and 99,184 men participated in the intervention and control groups (182,000 in total), respectively. Schroder et al reported that PSA screening without DRE led to a relative reduction in mortality of 20% during an average 9 years of follow-up. The absolute reduction in deaths from CaP was nearly seven per 10,000 men screened. It was therefore recommended that the burden imposed by additional interventions be weighed against this benefit, provided the results are real and not produced by chance or bias. The screening benefit was somewhat greater for men who were actually tested than suggested by the comparison with untested men before adjustment for non-compliance, although the side effects were also estimated to be rather higher. Overall, assessments of quality of life and cost-effectiveness are promising issues to be addressed in future ERSPC analyses. Although chance alone may explain the higher CaP mortality observed with screening in the subgroup of men over 69 years old, the ERSPC has re-emphasized the need for a cautious approach to this decision. Assuming its point estimate is correct, the ERSPC indicated that 1,410 men would need to be screened and 48 additional men treated to prevent one death from CaP within a period of 10 years [26]. The GOTEBORG Randomized Prostate Cancer Screening (GRPCS) trial, another European prospective study, began in 1995 with 19,904 men aged between 50 and 64 years at the time of randomization. A 14-year follow-up revealed a 44% reduction in CaP mortality in the screening group compared with the control group. Having found a statistically significant difference between the screening and control arms in the relative risk of CaP mortality, the GOTEBORG trial concluded that organized PSA-based screening with early treatment can save lives [27].
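The relationship between the relative risk reduction quoted above, the absolute risk reduction, and the number needed to screen (NNS) can be made concrete with a short calculation. The sketch below is purely illustrative: the baseline mortality figure is an assumption chosen only to reproduce roughly seven prevented deaths per 10,000 screened men, and the result is not taken from the trial report.

# Illustrative arithmetic linking relative risk reduction, absolute risk
# reduction, and number needed to screen (NNS), using the approximate
# figures quoted for the ERSPC trial; the baseline risk is an assumption.

baseline_deaths_per_10000 = 35.0     # assumed CaP deaths per 10,000 unscreened men
relative_risk_reduction = 0.20       # ~20% relative reduction reported for screening

deaths_screened = baseline_deaths_per_10000 * (1 - relative_risk_reduction)
absolute_reduction_per_10000 = baseline_deaths_per_10000 - deaths_screened

# Number needed to screen to prevent one CaP death over the follow-up period.
nns = 10000 / absolute_reduction_per_10000

print(f"Absolute reduction: {absolute_reduction_per_10000:.1f} deaths per 10,000 screened")
print(f"Number needed to screen: {nns:.0f} men")

With these assumed inputs the script prints an NNS of about 1,400 men, which is of the same order as the 1,410 quoted for the ERSPC point estimate.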
The results of both the ERSPC and the Swedish GOTEBORG trials on prostate screening demonstrated reduced mortality from CaP [27,28]. All groups have highlighted the common themes of the need for informed patient decisions and the increased number of men diagnosed with early non-metastatic disease through screening. These benefits must be weighed against the potential downsides resulting from over-diagnosis and overtreatment of clinically insignificant cancers [29].
Another RCT was conducted by the Department of Urology and the South-East Region Prostate Cancer Register in Norrkoping, Sweden, to evaluate whether screening reduces CaP-specific mortality. A total of 9,026 men aged between 50 and 69 years were identified from the National Population Register in the city of Norrkoping, Sweden, in 1987. However, after a 20-year follow-up, the men in the screening and control groups showed no significant difference in CaP mortality [30].
In 1988, 2,400 men aged between 55 and 70 years were randomly selected for CaP screening in Sweden, and 65 men were found to have CaP. Over 15 years, comparison of CaP diagnoses and survival between these invited men, the entire source population of 27,204 men, and the 618 non-attendees revealed no beneficial effect of screening on the risk of death from the disease; however, the screening program was associated with a significantly reduced risk of death from other causes [31].
Although the UK's National Screening Committee (UKNSC) has offered no universal recommendation for CaP screening, a decline in CaP mortality has been observed in the UK [32] and the Netherlands [33]. Thus, an informed shared decision-making program has been provided for those requesting PSA testing after detailed information has been exchanged [34].
Also, an individualized approach based on shared decision-making, rather than population-based screening, has been recommended by the European Society for Medical Oncology (ESMO). As stated by ESMO, the evidence is inconsistent for screening men younger than 50 years and those aged between 70 and 75 years, while for those older than 75 years the harms outweigh the benefits [35]. However, one has to acknowledge that, despite all these efforts, insight into the dynamics of such shared decision-making remains far from complete.
In short, European studies indicate that decisions about CaP screening should take into account a man's age and the risks of the available screening methods.
Screening CaP in Asia
The benefits of population-based PSA screening in the Asia-Pacific region are unclear, since the rate of CaP there is very low compared with that of Western countries.
In 2012, a total of 191,054 new cases of and 81,229 deaths from CaP were recorded in Asian countries. Turkey, Lebanon, Israel, Singapore, and Japan were the five Asian countries with the highest standardized incidence rates, while Lebanon, Turkey, Armenia, Timor-Leste, and the Philippines were the five countries with the highest standardized mortality rates [34].
There are no official guidelines on CaP screening in Asian countries except Japan, and thus there is an urgent need to develop general CaP screening guidelines for Asian individuals [36]. PSA screening has been recommended by the Japanese Urological Association (JUA) only for men aged 50 years or older. The recommendation is based on the merits and demerits of CaP screening in Japan with regard to present and future perspectives. The JUA thereby provides the best available screening system for men who wish to be screened [37].
The only known controlled study of CaP screening in Asia is the Japanese Prospective Screening for Prostate Cancer (JPSPC) study, which began in 2002 and ended in 2014. The study aimed to compare CaP mortality between the screening and control cohorts. A total of 200,000 men aged between 50 and 79 years from the prefectures of Gunma, Hokkaido, Hiroshima, and Nagasaki participated in this research. During 1992 - 2006, the compliance rate with PSA testing in the screening cohorts in the cities of Isesaki and Kiryu was almost 75% over 5 years, while contamination was as low as 8% [36]. Since little opportunistic screening for CaP could be detected in Japan, a significantly low rate of contamination was expected for the whole control cohort. The outcome of the study is awaited to clarify the potential of PSA-based screening in Asia [36].
Another large study of CaP screening in Japan is a PSA-based screening cohort study in which men aged between 55 and 69 years took part; 249 cases (0.76%) were diagnosed with CaP. Radical treatment was given to 75% of the patients. Overall and cause-specific survival after 8 years were 93.3% and 97.5%, respectively. Four patients diagnosed with advanced CaP died from the disease. The effectiveness of this screening system and the good clinical outcomes of the CaP patients were thus well demonstrated [38]. The trends and quality control of CaP screening were serially assessed in the study of Okihara et al in one area of Japan over 10 years. Since 1995, a total of 39,213 men older than 55 years have participated in mass CaP screening in the Otokuni District. In Japan, primary screening for CaP has been widely recognized, and the screening rate has increased through the basic health screening system. An extremely high rate of PSA exposure was found in the Otokuni District; yet, it remains to be evaluated whether such a procedure has reduced CaP-related mortality. The need for prostate biopsy is substantially lowered by using prostate-specific antigen density (PSAD) in secondary screening; yet, the quality of the screening system can be maintained only when PSA-positive individuals are encouraged to be screened periodically for CaP [39].
The relationship between PSA screening and CaP mortality was investigated in South Korea. A total of 118,665 men participated in the study during 1994 - 2004 and were then followed up to 2011. During the follow-up period, 56 men died from CaP and 6,036 from other causes. The multivariate-adjusted hazard ratio for CaP mortality increased significantly with higher PSA concentrations (P for trend < 0.0001), such that each 1 ng/mL increase in PSA was associated with a 7% increase in the hazard ratio. The relationship between CaP mortality and PSA concentration was stronger in younger and heavier men than in older and leaner ones. This study has implications for biopsy recommendations through the development of targeted PSA cut-points [40].
The impact of mass CaP screening in Vietnam has been evaluated in a study conducted at Binh Dan Hospital in Ho Chi Minh City since January 2008. A total of 408 patients were screened during the CaP program. The initial outcomes revealed a generally low CaP prevalence (2.5%) and a high occurrence of medium-grade lesions (Gleason 7) among CaP-positive subjects. Although this observation highlighted the value of CaP screening programs for patients and doctors, and more cases were detected at early stages, the benefits of mass CaP screening were not proven. Nonetheless, selective CaP screening appeared promising for prostate cancer diagnosis and treatment in Vietnam [41].
To explore CaP status in a healthy population in Nepal, 1,521 men aged over 50 years were evaluated from July 2010 to June 2011. In this study, the overall detection rate of locally advanced cancer was 0.73%. The specificity, sensitivity, and positive predictive value of DRE were 66.0%, 90.9%, and 38.5%, respectively. For detecting prostate carcinoma, the sensitivity of a serum PSA higher than 4 ng/mL was 100% and its positive predictive value was 19.0%. Larger community-based studies, especially in high-risk groups, are warranted [42].
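The strong dependence of a screening test's positive predictive value (PPV) on the prevalence of disease in the screened group underlies much of the over-diagnosis concern discussed in this review. The sketch below applies the standard PPV formula to the DRE sensitivity and specificity reported above; the prevalence values are illustrative assumptions and are not taken from the Nepalese study.

# PPV as a function of prevalence, using the DRE sensitivity and specificity
# reported above; the prevalence values are illustrative assumptions.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV = true positives / all positives for a given disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.909, 0.660  # DRE figures reported for the Nepalese cohort

for prev in (0.01, 0.05, 0.20):  # assumed prevalence in the screened group
    print(f"prevalence {prev:4.0%} -> PPV {ppv(sens, spec, prev):5.1%}")

With the same sensitivity and specificity, the PPV falls from roughly 40% at a 20% prevalence to under 3% at a 1% prevalence, which illustrates why screening healthy, low-prevalence populations produces many false-positive results.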
In a Chinese cohort study in Changchun, 12,027 Chinese men were screened for CaP using total serum PSA and TRUS-guided systematic biopsies. Forty-one of the 12,027 men were found to have prostatic carcinoma, and moderately differentiated carcinoma was the most common type of CaP. The study also found an association between total serum PSA, Gleason score, and tumor size in CaP.
To evaluate the practicability of a potential screening program in Saudi Arabia, CaP prevalence was investigated in a healthy cohort of men in a small CaP screening study. A total of 2,100 healthy subjects participated in the study from January to December 2008. An elevated PSA value (≥ 4 ng/mL) was seen in 223 men, and 132 men proceeded to prostate biopsy. Fifty-two men were diagnosed with CaP, almost half of whom already had locally advanced or metastatic disease. Screening thus detected a higher prevalence of advanced disease than expected.
Racial differences between Saudi and Canadian populations in the CaP detection rate were studied by Al-Abdin et al. Data prospectively obtained by the urology clinics of the McGill University Health Center and King Saud University Hospital over 5 consecutive years were retrospectively analyzed. In this study, 414 Saudi and 1,403 Canadian patients with a median age of 64 - 68 years were assessed. Compared with Western populations, the Arabic population demonstrated a significantly lower prevalence of CaP, for which there was no explanation. PSA remains a valuable marker for deciding on prostate biopsy, but it is recommended that its use be adjusted for geographic and/or ethnic differences in the study populations. A different set of PSA cut-offs from the current North American standards may therefore be needed for an Arabic population. Furthermore, more prospective analyses are required to determine this cut-off and to better define the optimal PSA values for the Arab world [43].
Prostate cancer is the most common non-cutaneous malignancy in New Zealand men and the third most common cause of cancer death, after bowel and lung cancers [41]. The Health Committee in New Zealand conducted an inquiry into the early detection and management of CaP, examining the advantages and disadvantages of screening and early diagnosis. Seventeen recommendations were included in the report of the Inquiry into the Early Detection and Treatment of Prostate Cancer presented to the House in July 2011. The Health Committee stated that, before any organized national screening program is established, there must be clear evidence that the reduction in morbidity and mortality outweighs the possible harms of over-diagnosis and overtreatment; no conclusive evidence on this issue is currently available. Although no national prostate screening program exists at present, the Health Committee recommended a Quality Improvement Program (QIP) based on equity. Under this program, men must receive evidence-based information about CaP so that they can make informed decisions about testing and treatment, and timely access to high-quality care must be ensured. In New Zealand, the Ministry of Health has noted inconsistent quality and equity of services for the early detection and treatment of CaP. Currently, evidence-based information for making informed decisions is not accessible to all men. A framework for the QIP within existing resources was accordingly developed by the Ministry of Health [44].
In brief, it is still not clear whether PSA-based screening can reduce deaths from CaP in Asia. Currently, no official guidelines on CaP screening are available in Asian countries except Japan [37]. All the data mentioned above therefore suggest a need to develop population-specific guidelines, since the features of CaP differ among the races of Asia. Notably, unlike the PLCO or ERSPC studies mentioned above, large controlled trials of CaP screening cannot easily be organized in Asia because of the significant differences in the political systems, economic climates, and health policies of the countries involved. One option would therefore be to apply pre-determined statistical modeling and to combine the available results of the various Asian screening trials.
Yet, an optimal, standardized screening system adjusted for Asian individuals may eventually be established based on PSA-related indices, serum PSA kinetics in middle-aged men, and the new biomarkers for CaP screening discovered in recent evidence [36].
Screening CaP in Iran
The distribution of cancer varies significantly from country to country. CaP is reported to be the third most common cancer among Iranian men and the sixth most common cancer in Iran overall [45]. Iran is a sovereign state in Western Asia. With over 79.92 million inhabitants (as of March 2017), Iran is the world's 18th most populous country. Comprising a land area of 1,648,195 km² (636,372 sq mi), it is the second largest country in the Middle East and the 18th largest in the world. The trend in CaP incidence in Iran was investigated for 2003 - 2008. In total, 16,071 CaP cases were identified in Iran. A significantly increasing incidence of the disease, especially in older men, with an annual percentage change of 17.3%, was found. Because of changing lifestyles and population aging, it is essential to conduct etiological and epidemiological studies and evaluation planning for CaP, as well as to detect and screen it at an early stage [46].
A significantly lower rate of CaP incidence has been detected in Iran compared to Western countries like the US. A combination of genetic and environmental factors can be the reason for this large disparity in CaP incidence. The high rates of CaP incidence reported in the Western countries may be partly due to people's increased awareness of prostate screening conditions and nationwide programs [47].
Consequently, PSA screening aimed at detecting CaP at an early stage has increased recorded CaP incidence by detecting localized latent cancer lesions. In contrast, the incidence data for Iran reflect only clinically obvious disease. This is certainly due to the lack of any screening and early detection programs for CaP in the country. Moreover, the high life expectancy of Western populations has resulted in a greater proportion of elderly men in those countries and consequent differences in CaP occurrence, which arises mainly at higher ages. Other possible reasons are the Western risk factors of a high-fat diet, sexual behavior, infectious agents, smoking, occupational exposure, and socioeconomic status. Finally, the number of people affected in Iran has undoubtedly been underestimated because of the lack of a high-quality registration system for CaP, whereas such systems provide very precise data in the Western world [47].
Only two large RCTs of PSA screening have been conducted in Iran, since CaP has not been considered a suitable candidate for a national screening program in this country. A total of 3,758 Iranian males aged over 40 years were mass screened by PSA testing by Hosseini et al (2007). Extended TRUS-guided prostate biopsy was performed in men who had a total serum PSA level higher than 4 ng/mL and an abnormal DRE. In this study, 65.9% of the cancers detected were clinically significant. CaP was thus quite common in the Iranian male population with a serum PSA level higher than 4.0 ng/mL [45].
In another study, conducted by Safarinejad (2006), a large population-based screening program using total prostate-specific antigen (tPSA) and percent free PSA (fPSA) as the initial tests was performed. A total of 3,670 Iranian men aged over 40 years were mass screened with PSA in Tehran during 1996 - 2004. The subjects were invited for DRE, a serum PSA assay, and TRUS-guided biopsy when indicated. PSA screening with low cut-off values increases the detection rate of clinically significant, potentially curable organ-confined CaP [48].
Although contradictory data on CaP screening have been obtained in the available research, current guidelines emphasize shared decision-making. This informed decision-making must be motivated by the physicians who are responsible for helping patients. On this basis, Ali Asgari et al (2015) invited 184 urologists to take part in a questionnaire survey on CaP screening. They showed that most Iranian urologists (76.8%) prefer to perform CaP screening despite the controversy over PSA testing. Many Iranian urologists with different backgrounds favor CaP screening regardless of their age, years of experience, fellowship status, or type of medical practice. Of the urologists, 35.8% preferred biopsies and 62.8% preferred serial PSA screening in their follow-up plans when PSA levels were higher than the normal range. Therefore, PSA screening has been favored by Iranian urologists although its usefulness remains controversial. DRE has not been chosen by most Iranian urologists as part of a screening program. However, large high-quality studies are required to investigate the rationale behind their decisions on CaP screening [49].
Clinically insignificant cancers may be over-diagnosed during the early detection of CaP, and the subsequent overtreatment can reduce the quality of life of patients, who inevitably experience untoward side effects. In any case, it is highly recommended that the fight against cancer, especially CaP, be supported as a priority by the Iranian government through the Comprehensive National Cancer Control Program (CNCCP). CaP prevention and early detection should be managed via this program. Early detection of symptomatic benign prostatic hyperplasia (BPH) is also recommended for men over 40 years old, although population screening is infeasible based on the currently available evidence. Nonetheless, population-based studies are needed to define the PSA cut-off point and the early detection method, as well as to clarify suspected cases. Prior to the integration of this program into the CNCCP, its cost-effectiveness should be assessed through pilot studies. To establish an urgent strategy and provide evidence on CaP treatment and a national guideline protocol for the next 5 - 10 years based on the existing experts' consensus, local clinical trials should be supported [50], although unnecessary costs and burdens may be imposed on our health care system. The two above-mentioned studies could not determine the effectiveness of CaP screening in Iran. Further research with additional years of follow-up, based on the existing RCTs, is required to establish whether population screening reduces mortality. There are no official guidelines on CaP screening in Iran. Nevertheless, most Iranian urologists prefer biopsies and serial PSA screening when PSA levels are elevated. Yet, it remains to be evaluated whether such a procedure has reduced CaP-related mortality.
Although early detection of symptomatic BPH in men aged over 40 years is highly recommended in Iran, population screening is not feasible based on the currently available evidence. A clear method for the early detection of suspected cases should be provided for the Iranian population by defining the PSA cut-off point. Before integrating this program into the CNCCP, its cost-effectiveness should be evaluated via pilot studies. Finally, to provide an urgent strategy based on a national guideline protocol for CaP treatment using current expert consensus, local clinical trials should be supported so as to obtain elaborated evidence in this area over the next 5 - 10 years [50].
Conclusion
Although various studies of the effectiveness of CaP screening have been conducted in countries all over the world, the conflicting recommendations have further highlighted its uncertainty. In some areas, CaP screening is recommended for a specific age range, and for other ages the cost and clinical benefit should be considered. In other areas, CaP screening is recommended on the basis of life expectancy, which is difficult to determine. Some population-based investigations have revealed a need to define the PSA cut-off point for each region. Some researchers believe that the advantages and disadvantages of screening programs should be clarified for the relevant decision-making bodies, but unfortunately comprehensive data are in most cases not available to them. Some studies have reported that mortality is reduced by screening programs and generally recommend them, while others report no difference. Some researchers believe that, given the disadvantages of over-diagnosis and unnecessary treatment, novel methods with higher sensitivity and specificity must be developed.
The best screening method for CaP is unknown, although screening probably reduces both morbidity and mortality. It may also promote unwarranted treatment procedures or adversely affect patients' health outcomes, with the possible net result of no benefit or even harm. It can only be justified if the potential follow-up tests and treatments are cost-effective, although the economic implications of CaP screening are not well established. It is still not clear that CaP mortality can be lowered by screening. A high over-diagnosis rate may result from applying PSA screening policies to asymptomatic men. It is not certain that even the best screening and treatment methods achieve more benefit than harm. Finally, well-informed patients can be screened upon request. For this purpose, validated information tools should be developed, and men willing to be screened should be provided with clear information. Fortunately, these issues are being addressed in a growing number of recent reports [51]. | 2018-04-03T02:48:44.983Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "9a08ab8e3971d5975191bc5b6f0e2b6cc1c6b536",
"oa_license": "CCBYNC",
"oa_url": "https://www.wjon.org/index.php/wjon/article/download/1082/847",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a08ab8e3971d5975191bc5b6f0e2b6cc1c6b536",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55491341 | pes2o/s2orc | v3-fos-license | An applicable approach for performance auditing in ERP
This paper addresses the practical problem of performance auditing in the ERP environment. Traditional performance auditing methods and existing approaches for evaluating the performance of ERP implementation do not work well, because they are either difficult to apply or contain subjective elements. This paper proposes an applicable performance auditing approach for SAP ERP based on quantitative analysis. The approach consists of 3 parts: system utilization, data quality, and the effectiveness of system control. For each part, we describe the main process for carrying out the audit, in particular how to calculate the online settlement rate of the SAP system. This approach has played an important role in practical auditing work. A practical case is provided at the end of this paper to describe the effectiveness of the approach. Implementation of this approach also has some significance for the performance auditing of other ERP products.
Introduction
In the 1990s, in line with the development of information technology and enterprises' demand for supply chain management, and based on forecasts of how enterprise management information systems would evolve in the information era, the United States computer technology consulting and evaluation firm Gartner Group first proposed the concept of Enterprise Resource Planning (ERP). After more than 20 years of development, ERP has been widely used all over the world. ERP is widely recognized as an advanced concept of enterprise management, with supply chain management as its core idea.
From the perspective of management science, ERP is an integration of business processes, data, human resources, and computer hardware and software. It acts as an enterprise resource management system based on the advanced concept of ERP. It not only provides decision support for enterprise management, but also provides an operational tool for end users. From the perspective of information technology, ERP is a consolidated information system in which the enterprise's logistics, capital flow, and information flow are comprehensively integrated. The business modules of ERP can be applied not only to manufacturing enterprises, but also to other types of businesses and organizations.
Among ERP products, SAP from Germany is the most representative. Known as "the management guru behind successful enterprises", SAP holds an extremely important share of the ERP market. SAP industry solutions cover almost all major sectors; at present, many large enterprises such as Sinopec, PetroChina, and State Grid have implemented SAP.
As an integrated computer information system, ERP brings a great deal of benefit to many enterprises. The widespread use of ERP systems has fundamentally changed the way business data are processed. The complexity of ERP systems has brought new challenges to performance auditing in the ERP environment. Many traditional performance auditing methods do not work well in the ERP environment because they are difficult to apply. Due to the lack of quantitative evaluation methods, performance auditing in the ERP environment has remained stagnant. To meet the new demands of ERP, we have to find new, applicable performance auditing methods. This paper proposes a quantitative performance auditing method for SAP based on practical audit work. We believe that the implementation of this approach also has some significance for the performance auditing of other ERP products.
Literature review
The literature on performance auditing methods in the ERP environment is generally sparse. Current research focuses on the performance evaluation of ERP system implementation. Performance evaluation of ERP implementation is a multi-layered, complex problem with multiple indicators. Many evaluation methods approach it from different angles, such as the commonly used ABCD checklist, the balanced scorecard, the weighted average method, fuzzy comprehensive evaluation, Data Envelopment Analysis (DEA), and the Analytic Hierarchy Process (AHP). These assessment methods generally combine quantitative and qualitative techniques. Zhang Hui and Mou Yankui (2009) analyzed the results of ERP implementation from multiple angles using the ABCD checklist. They scored the running state of an enterprise's ERP system on 4 aspects and 25 questions, covering technology, data integrity, education and training, and system usage. According to the degree of performance on each item, different scores are given (excellent 4 points, good 3 points, medium 2 points, poor 1 point, not implemented 0 points). A cumulative total above 90 points is rated level A, 71 - 90 points level B, 50 - 70 points level C, and less than 50 points level D. For each level (A to D), a qualitative description and conclusion can be provided. Ma Guangqing (2009) used the simple and intuitive weighted average method to construct an index system for evaluating the performance of ERP implementation, covering technical capacity, managerial capacity, and efficiency. The evaluation model and its solution are given on this basis; the relative weight of each index is determined by the AHP method based on expert scores. Zou Mingxin and Xu Xuejun (2006) proposed a three-layer index system based on the balanced scorecard to evaluate the performance of ERP implementation in line with its overall goals, and applied an expert comprehensive evaluation method as an example. Sun Yuefan and Zhang Zhenhao (2007) conducted a quantitative study of how ERP investment affects the economic efficiency and competitiveness of enterprises based on a DEA model, which offers a possible direction for studying the performance of ERP investment. The Analytic Hierarchy Process (AHP) is a common ERP performance evaluation method discussed in many papers. Its key points are establishing the index system and determining the standard score of each indicator; it combines subjective and objective as well as quantitative and qualitative elements.
There is also some empirical research on ERP performance. Zheng Chende, Chen Jinyong, and Wang Yan (2008) took listed enterprises that had implemented ERP as samples to study the impact of ERP implementation on the performance of state-owned enterprises. The empirical results show that ERP systems can improve the operation of state-owned enterprises: inventory turnover, accounts receivable turnover, and total assets turnover all increased significantly after ERP implementation. ERP systems can effectively improve worker productivity and reduce the number of employees, but the profitability of state-owned enterprises did not increase after ERP implementation. Xu Ming (2010) took manufacturing companies listed in Shanghai and Shenzhen as the research object and conducted an empirical study of how ERP implementation affects the performance of listed companies. The research showed that ERP implementation does help to improve operating performance and affects financial indicators such as assets turnover and inventory turnover. The performance improvement from ERP has a time lag, typically requiring 3 to 4 years of use before the effect appears. In the short term, enterprises using domestic ERP systems improve their benefits faster than those using foreign ERP systems, but in the long term the indicators of enterprises using foreign ERP systems are superior. Li Benbo, Chen Sheng, and Qiang Haitao (2006) used 87 valid questionnaires from 30 small and medium enterprises in Chongqing for an empirical study. The results showed that the quality of project team members, the budget, enterprise management, and ERP project implementation are significant factors in the performance of ERP implementation; the quality of project members is the fundamental factor, while software selection has no significant positive relationship with the performance of ERP implementation.
Existing approaches for evaluating the performance of ERP implementation are of little help to performance auditing in the ERP environment. First, these approaches contain subjective elements (such as the construction of the index system), and even adopting expert advice cannot eliminate this subjectivity; audit evidence, by contrast, is required to be objective. Therefore, these methods are rarely used. Second, these methods are systematic evaluation methods that need a large amount of information and a long period of data acquisition and processing, which makes it difficult to meet the time limits of field audits. An objective, quantitative audit approach is therefore urgently needed for performance audits in the ERP environment.
Applicable performance auditing approach of SAP
Based on our audit work, we summarize the performance indicators of the ERP system into 3 parts according to the features of SAP. The quantitative performance audit methodology for SAP consists of system utilization, data quality, and the effectiveness of system control. System utilization covers 2 aspects: the condition of user logins and the online settlement rate. Data quality covers master data quality and interface data quality. The effectiveness of system control covers the achievement of key business controls and the validity of control data.
System utilization
System utilization is the most intuitive indicator of the application performance of an ERP system. If an ERP system has a low utilization rate after implementation is completed, and a large number of business transactions are conducted outside the ERP system, its performance is clearly low. The system utilization indicator can be divided into two indexes: the condition of user logins and the online settlement rate.
The user login condition reflects to what extent SAP users rely on the ERP system. SAP users are charged a license fee, which in China is about USD 2,300 to USD 3,000 per user; one license is generally assigned to one user. To save money and improve efficiency, enterprises should make full use of each user role and minimize the occurrence of idle users during the implementation and application of ERP. ERP systems generally record user information and login information in the database. During the audit, we downloaded the SAP user tables (USR01, USR02, USR04, USR21, etc.) to query the last login date of each SAP user. If the last login date is null, the user has never logged into the system; if the last login date is several months ago, the user is also effectively idle. A large number of idle users not only suggests waste in the ERP implementation process, but also indicates that the ERP system may not be playing its expected role. For example, part of the list of users who had never logged into the SAP system (up to year 2009) is shown in Table 1.
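A minimal sketch of this check is shown below. It assumes the SAP user table has been exported to a CSV file containing a user-name column and a last-logon-date column (named BNAME and TRDAT here, following the usual USR02 field names); the file name, date format, and 180-day idle threshold are illustrative assumptions rather than fixed requirements.

# Flag SAP users who have never logged on or have been idle for a long time,
# based on an exported user table. Column names (BNAME, TRDAT) follow the
# usual USR02 fields; the file name and the 180-day threshold are assumptions.
import pandas as pd

AUDIT_DATE = pd.Timestamp("2009-12-31")   # cut-off date used in the audit
IDLE_DAYS = 180                           # assumed threshold for "idle" users

users = pd.read_csv("usr02_export.csv", dtype=str)

# SAP stores dates as YYYYMMDD; an empty or initial value means no logon yet.
users["last_logon"] = pd.to_datetime(users["TRDAT"], format="%Y%m%d", errors="coerce")

never_logged_on = users[users["last_logon"].isna()]
idle = users[(AUDIT_DATE - users["last_logon"]).dt.days > IDLE_DAYS]

print(f"{len(never_logged_on)} users never logged on")
print(f"{len(idle)} users idle for more than {IDLE_DAYS} days")
print(never_logged_on[["BNAME"]].head())

The two resulting lists correspond directly to the idle-user evidence summarized in Table 1.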
In order to calculate the online settlement rate, we have to find the differences of vouchers between online settlement and off-line settlement.In SAP, for example, the voucher header data is stored in the SAP table "BKPF", and the type of business transaction is stored in the field GLVOR, which indicates the business type of the voucher.For example, if the field has the value of "RMRP", it means that the voucher is generated by the MM module of invoice verification transactions.Take the business of procurement as another example, to calculate the accounting entries which have the credit account as accounts payable can calculate the settlement amount of purchase.The online settlement amount for the purchase consists of the accounting entries which has the content "RMRP" in the field "GLVOR".
The calculation of the online settlement rate also needs to take the business processes in ERP into account. In the procurement process, for example, assume that procurement settlement in SAP meets the following prerequisites:
• errors in accounts payable are corrected using the red (reversal) write-off method;
• the receipt of goods or the acceptance of services is accounted for through the ERP Material module, as follows:
Online settlement amount = current-year credit amount of the accounts payable accounts (2121000000, 2121010000) posted with transaction type RMRP (that is, a value of "RMRP" in the business transaction field indicates an online settlement voucher) + current-year amount of the GR/IR account (credit amount - debit amount)
Online settlement ratio = current-year online settlement amount / current-year procurement settlement amount
If the online settlement ratio is low, the utilization of the ERP system is poor and system performance is not good. It also implies a large volume of off-line business settlement, which can create control risk. Therefore, the online settlement rate is an important indicator of system utilization and performance. For example, part of the online settlement ratios of the branches in year 2009 is shown in Table 2.
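For illustration, a minimal sketch of the ratio calculation is given below. It assumes the current-year accounting line items, joined with the GLVOR value from their document headers (BKPF), have been exported to a flat table. The table and column names, the GR/IR account number and the definition of the denominator are assumptions made for the example; only the GLVOR value "RMRP" and the accounts payable accounts 2121000000 and 2121010000 come from the description above.

```python
import sqlite3

conn = sqlite3.connect("sap_audit_export.db")

# Assumed export: one row per current-year accounting line item, with columns
# account (G/L account number), credit_amount, debit_amount and glvor (the
# business transaction type copied from the document header BKPF-GLVOR).
AP_ACCOUNTS = ("2121000000", "2121010000")   # accounts payable, as described above
GR_IR_ACCOUNT = "2128000000"                 # GR/IR clearing account (assumed number)

# Online part: credits to accounts payable posted with transaction type RMRP,
# i.e. vouchers generated by MM invoice verification.
online_ap = conn.execute(
    "SELECT COALESCE(SUM(credit_amount), 0) FROM line_items "
    "WHERE account IN (?, ?) AND glvor = 'RMRP'",
    AP_ACCOUNTS,
).fetchone()[0]

# GR/IR contribution: credit amount minus debit amount on the GR/IR account.
gr_ir = conn.execute(
    "SELECT COALESCE(SUM(credit_amount - debit_amount), 0) FROM line_items "
    "WHERE account = ?",
    (GR_IR_ACCOUNT,),
).fetchone()[0]

# Denominator: total procurement settlement, approximated here as all credits
# to accounts payable plus the GR/IR balance for the year.
total_ap = conn.execute(
    "SELECT COALESCE(SUM(credit_amount), 0) FROM line_items WHERE account IN (?, ?)",
    AP_ACCOUNTS,
).fetchone()[0]

online_settlement = online_ap + gr_ir
total_settlement = total_ap + gr_ir
ratio = online_settlement / total_settlement if total_settlement else 0.0
print(f"Online settlement ratio: {ratio:.1%}")
```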
Data Quality
There are three ways through which data enter an ERP system: manual input, system interfaces, and migration of legacy data. Data in ERP can be divided into master data and transaction data. Master data form the foundation of data shared between systems, such as customers, vendors, materials, and organizational units. These data enter the system only once, are used in common by each module, and are maintained in a unified manner by the system. If master data are inaccurate, the error is diffused throughout the system, with a significant impact on data quality. For example, if a vendor's bank account information is not updated in the master data in a timely manner, payments may fail. Therefore, the quality of master data is particularly important for the integrity and reliability of system data. The quality of master data can be analysed by technical means from the following points of view:
• whether key information in the master data is missing;
• whether master data records describing the same object are consistent;
• whether master data are consistent between the parent company and its subsidiaries, or between subsidiaries;
• whether the master data contain redundancy.
Interface data quality is another important aspect of ERP data quality. ERP does not cover all aspects of enterprise production and management. Most enterprises in an ERP environment integrate a variety of external systems, such as procurement bidding systems, customer relationship management systems, financial systems, and weighbridge systems. These external systems generally exchange data with the core modules of ERP. For example, the results of the procurement bidding system (name and code of the procured goods, quantity, price, information on the successful vendor, delivery and payment conditions, etc.) are passed to ERP to form procurement orders. The customer relationship management system maintains customer information and interacts with the ERP customer master data in real time or on a regular basis so as to keep customer information up to date. The weighbridge system transmits the weighing information of materials to the ERP inventory management module.
These data are an important external source of data for the ERP system and are critical to the accuracy, reliability, and integrity of system data. The interfaces between ERP and external systems therefore have important implications for data quality. In the SAP audit process, we can evaluate the data quality of ERP by checking the master data tables of the relevant modules and the tables imported through the interfaces of external systems.
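As an illustration of how such checks might be scripted, the sketch below runs three simple queries against an exported vendor master table. The table and column names are assumptions made for the example, not the actual SAP master data schema.

```python
import sqlite3

conn = sqlite3.connect("sap_audit_export.db")

# Assumed export of vendor master data with the illustrative columns:
# vendor_id, name, bank_account, tax_number, company_code.

# 1. Key information missing (e.g. no bank account recorded).
missing_bank = conn.execute(
    "SELECT vendor_id, name FROM vendor_master "
    "WHERE bank_account IS NULL OR TRIM(bank_account) = ''"
).fetchall()

# 2. Possible redundancy: the same vendor name captured under several IDs.
duplicates = conn.execute(
    "SELECT name, COUNT(DISTINCT vendor_id) AS ids "
    "FROM vendor_master GROUP BY name HAVING ids > 1"
).fetchall()

# 3. Possible inconsistency between parent company and subsidiaries: the same
#    vendor ID carrying different tax numbers across company codes.
inconsistent = conn.execute(
    "SELECT vendor_id, COUNT(DISTINCT tax_number) AS variants "
    "FROM vendor_master GROUP BY vendor_id HAVING variants > 1"
).fetchall()

print(len(missing_bank), "vendors without bank details")
print(len(duplicates), "vendor names recorded under multiple IDs")
print(len(inconsistent), "vendor IDs with conflicting tax numbers")
```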
Effectiveness of the system control
Traditional means of control are usually replaced by information system controls in ERP. Information system controls consist of those internal controls that depend on information systems processing. Many control criteria and procedures of internal control have been developed in advance and embedded in ERP modules or achieved through the configuration of control data. When business activities occur, the information system may automatically initiate transactions or perform processing functions. Relevant information is recorded and stored in the central database of ERP.
General controls and business process application controls are highly integrated in ERP. These controls can be divided into programmed controls, parameterized controls, and other controls. Programmed control refers to control activities embedded in a computer program. Parameterized control refers to control activities implemented by configuring parameters in ERP; these parameters can also be called control data. Business processes are controlled by business rules in the ERP system. Business rules are basic and important business process application controls in ERP. The ERP system may automatically initiate transactions or perform processing functions according to pre-defined business rules. Processing steps and related controls may be set in business rules, which are a kind of critical control data.
Control data can be set during the implementation of the ERP system, and necessary adjustments should be made while the system is running. The effectiveness of a programmed control depends on the quality of programming: if the program does not work or contains an error, the control failure is likely to persist for a long time. The effectiveness of a parameterized control depends on the accuracy and timeliness of the control data settings: if a control data item is not properly set for a long time, the control failure may also persist for a long time. Because information systems process groups of identical transactions consistently, any misstatements arising from erroneous computer programming or control data will occur consistently in the same types of transactions. Therefore, it is necessary to assess the state of the control data and to review its change history, in order to assess the potential impact of changes and to ensure that control data are set or changed by appropriate personnel at the appropriate time and in a timely manner.
Control data play a significant role in ERP's internal control system. They are generally set in the system configuration during implementation, and necessary adjustments are also made while the system is running. Auditors may evaluate the effectiveness of the implemented controls through the use of ERP's configuration management tools. In SAP, for example, transaction code SPRO can be used to configure the control data during system configuration. Control data are generally stored in the central database of ERP. Some control data can be retrieved directly and easily through database queries, while others are stored in a special format that can only be read and set by specific configuration management tools.
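One possible way to support this review is sketched below. It assumes the change history of the control data has already been extracted (for example with the configuration management tools mentioned above) into a simple table, and it flags changes made after go-live by users outside an authorized list. The table layout, the user list and the go-live date are all illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect("sap_audit_export.db")

# Assumed export of the control-data change history with the illustrative
# columns: config_object, changed_by, change_date (ISO date string).
AUTHORIZED_MAINTAINERS = {"BASIS01", "BASIS02"}   # assumed list of authorized personnel
GO_LIVE_DATE = "2008-01-01"                       # assumed end of the implementation phase

rows = conn.execute(
    "SELECT config_object, changed_by, change_date FROM control_data_changes"
).fetchall()

# Flag control-data changes made after go-live by users who are not on the
# authorized maintainer list; these would be followed up as potential findings.
suspicious = [
    (obj, user, date)
    for obj, user, date in rows
    if date >= GO_LIVE_DATE and user not in AUTHORIZED_MAINTAINERS
]

print(f"{len(suspicious)} control-data changes after go-live by non-authorized users")
for obj, user, date in suspicious[:10]:
    print(f"  {date}  {obj}  changed by {user}")
```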
Application case
In the audit of a petroleum corporation in 2010, we used the quantitative method described above to evaluate the corporation's ERP performance and found the following problems:
According to statistics, by the start date of the audit, the petroleum corporation had created about 5,000 SAP users, including more than 300 users who had never logged in, accounting for about 6%. More than 1,000 users had not logged into the ERP system for more than 60 days, about 22%.
ERP users are usually created according to business process requirements. Idle users indicate that resources in the ERP system have not been fully used. In addition, the petroleum corporation needs to purchase SAP user licenses, each ranging from approximately USD 2,300 to USD 3,000, and has to pay an annual service charge of 17% of the purchase price. By the end of 2009, the petroleum corporation had paid more than USD 13.9 million in license fees and service charges. The large number of idle users therefore created losses and waste to some extent.
Low utilization of core business processes
The greatest advantage of ERP systems is the integration of logistics, capital flow and information flow, which makes enterprise management more efficient. The audit found that the usage of the procurement management process, one of the core processes in the ERP system, was not high among subordinate enterprises. According to statistics, in 2009 only about 60 subordinate companies had deployed the procurement management module, accounting for 21% of the total number of companies that had implemented ERP. Among the companies that had deployed the procurement management module, 5 subordinate units simply did not use it, 16 companies had an online settlement rate of 60% or less, and the lowest online settlement rate was only 5.1%.
Lack of system control
After audit analysis of the ERP system of the petroleum corporation, we found that about 1,500 sales contract items had a final execution exceeding the contract amount by 10%. More than 100 crude oil sales transactions had ultimate shipments exceeding the sales contracts and orders by 10,000 barrels or more. More than 1,000 orders had total receipts exceeding the ordered amount by 5%. There were more than 10 sales records of urea with a quantity of 1 ton and prices ranging from USD 6,165 to USD 1.54 million, whereas the normal price is around USD 262. After verification, we found that these companies used this method to raise the price of whole batches of fertilizer in order to skirt state limits on the factory price of chemical fertilizer. We also found a large number of incomplete master data and data errors in the ERP data interfaces, which seriously affected the cost accounting of the ERP system.
Conclusions
Many traditional performance auditing methods do not work well in an ERP environment because they are time-consuming and difficult to apply. Existing approaches for the performance evaluation of ERP implementation are of little help to performance auditing under an ERP environment because they contain certain subjective elements, whereas audit evidence is required to be objective. This paper proposed an applicable performance audit approach for SAP ERP based on quantitative analysis, which can help auditors gain substantial audit evidence with quantitative results. This approach can minimize subjective judgment in the ERP performance audit process. We believe that the implementation of this approach is also of some significance to the performance auditing of other ERP products.
Table 1.
Part of the users who never logged into the SAP system
Table 2 .
Online settlement ratio of each branch in year 2009 | 2018-12-06T16:18:05.017Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "8fc232501dcdf111a749d4ac41ff17d7b13f373d",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/07/matecconf_iceice2016_01048.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8fc232501dcdf111a749d4ac41ff17d7b13f373d",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
150141745 | pes2o/s2orc | v3-fos-license | The Feasibility of Increasing Hospital Surge Capacity in Disasters through Early Patient Discharge
Objective: Hospitals are expected to be able to provide quality services during disasters. However, hospital capacity is limited and most hospital beds are almost always occupied. The aim of this study was to determine the feasibility of increasing hospital surge capacity during disasters through the identification of patients suitable for safe early discharge. Methods: This cross-sectional study was conducted from May 2017 to February 2018 in two phases. In phase I, the Early Discharge Checklist was developed by a multidisciplinary panel of experts. Then, in phase II, the checklist was used to assess the dischargeability of 396 in-patients in general wards of hospitals in Alborz province, Iran. Data were analyzed with the SPSS software (v. 22.0) and the results were presented using descriptive and analytical statistics at a significance level of less than 0.05. Results: Of the 396 patients, 64.65% were male, 68.9% were married, and 38.6% were aged over 54. Moreover, 34.6% of the patients were dischargeable. Patients in cardiology wards were more often dischargeable. At follow-up assessment, 33.3% of patients had been discharged after 48 hours. There was a significant relationship between patient dischargeability and 48-hour hospitalization status (p=0.001). Dischargeability had no significant relationships with patients' demographic characteristics (p>0.05). Conclusion: A considerable percentage of in-patients are dischargeable during disasters. The Early Discharge Assessment Checklist, developed in this study, is an appropriate tool to provide reliable data about early dischargeability in disasters.
Introduction
Health protection is the first concern of human beings during disasters. Thus, the number of people who refer to hospital settings unexpectedly and suddenly increases immediately after disasters, so that hospital management may be impaired [1]. The ability to provide quality of care proportionate to patient surge has always been among the main concerns of healthcare delivery systems during disasters [2]. Management of massive patient surge during disasters necessitates abundant resources and activities [3][4][5][6]. However, hospital resources are usually inadequate for fulfilling the numerous needs of a large number of disaster victims. For instance, hospital capacity is mostly limited and there are few empty beds even in normal conditions. Thus, strategies are needed to increase hospital surge capacity (HSC) during disasters [3,[7][8][9]. HSC is the ability of a hospital to rapidly expand and promote its services during disasters or mass casualty incidents [4,10,11]. The four main elements of HSC are stuff, staff, structure, and system. Thus, HSC can be increased through discharging patients with no serious conditions, withholding routine activities, elevating staffing levels, using vacant spaces, and increasing medication and equipment supplies [12,13].
Early patient discharge is one of the methods for increasing HSC [14,15]. Through this method, patients who will not be at serious health risk after treatment discontinuation are discharged [4,16]. A study of emergency department response capacity in crises in Iran reported that early discharge of patients with stable conditions can increase emergency department capacity by 27.5% [17]. Another study in Iran reported that using different capacity-increasing measures increased the admission capacity of the emergency department from 16 to 42 patients [18]. Similarly, a study in the United States reported that one third of hospitalized patients were dischargeable within 24 hours [10]. A series of studies by Kelen et al. in the United States also showed that hospitalized patients can be categorized according to their dischargeability into five levels, ranging from absolutely non-dischargeable to dischargeable with no serious complications [3]. Around 44% of hospitalized patients were then determined to be dischargeable because they did not need critical care [19]. Moreover, 11% of hospitalized patients were identified as immediately dischargeable and 13% of them as dischargeable within 96 hours [16]. A study in Norway used the results reported by Kelen et al. and reported a 16% increase in bed surge capacity within four hours [20].
Despite the known importance of early discharge to HSC, there are no clear guidelines for identifying dischargeable patients [10,16,19,21,22]. Moreover, there is limited information about dischargeable patients and HSC increase following early patient discharge. Thus, the present study was designed and carried out to determine the feasibility of increasing HSC during disasters through early patient discharge.
Materials and Methods
This cross-sectional analytical study was conducted from May 2017 to February 2018 in two main phases. The first phase was related to the development of the Early Discharge Assessment Checklist (EDAC) and the second phase was related to the use of the checklist to assess early discharge among a sample of hospitalized patients.
Phase I: Development of the EDAC
The criteria for early discharge were developed based on the comments of a multidisciplinary panel of experts (ten males and three females) who were interested in disaster management. All experts were faculty members of Alborz University of Medical Sciences, Alborz, Iran, and were working in hospitals affiliated to the university. Their mean age and mean work experience were 42 and 16 years, respectively. We held several sessions with the experts, from May to September 2017, in order to generate the checklist items. In the first session, the aims and necessity of the study and a summary of the existing literature were provided to the experts. Moreover, a hypothetical scenario was presented to them concerning a disaster in a neighboring province which would result in the transfer of its victims to the hospitals in Alborz province, Iran. The assumptions related to the hypothetical disaster scenario and the necessity of early patient discharge were as follows:
• The Emergency Operation Center of the local university announces the disaster in a neighboring province and requires the maximum level of alert;
• Around 80% of all hospital beds are occupied;
• There is overcrowding in the hospital due to the transfer of the disaster victims;
• This critical situation is expected to continue for at least 72 hours;
• The quality of care should be maintained up to standard levels.
As the aim of the study was to estimate the feasibility of early discharge, rather than the actual discharge of patients, the experts were asked to determine the basic criteria for early discharge in disasters which would create the need for increasing HSC. Accordingly, the experts agreed on 25 essential items on early discharge. The items fell into the four main domains of abnormal vital signs (six items), serious symptoms or conditions (nine items), the need for in-hospital medical interventions (six items), and abnormal laboratory findings (four items). The primary checklist was piloted in a teaching hospital in Karaj, Iran, and the findings were presented to the experts. They amended the items based on the findings and agreed on the final checklist, which included 25 items in the following four domains:
A. Vital signs: a temperature of less than 36°C or more than 38.5°C; a blood pressure of less than 90/60 or more than 180/110 mm Hg; a pulse rate of more than 100 beats per minute; a respiratory rate of more than 22 per minute; a Glasgow Coma Scale score of less than 15; and an arterial oxygen saturation of less than 95%;
B. Serious symptoms or conditions: any neurologic symptoms in the past 48 hours; gastrointestinal bleeding in the past 48 hours; nausea/vomiting after eating; uncontrolled diabetes mellitus; acute asthma; chronic obstructive pulmonary disease; mental disorders associated with the risk of harm to the self or others; acute coronary syndrome; and convulsions in the past 24 hours;
C. The need for in-hospital medical interventions: emergency surgery; relocation of a joint dislocation; intravenous antibiotic therapy; advanced wound care (with debridement); cardiac monitoring; and respiratory support;
D. Abnormal laboratory findings: a hemoglobin level of less than 8 g/dL; a blood sodium level of less than 130 mEq/L; a blood potassium level of more than 5.5 mEq/L; and a blood glucose level of less than 80 or more than 250 mg/dL.
Items were rated either "Yes" ("Present") or "No" ("Absent"). If all items were rated "No", the patient was considered dischargeable.
However, even a single "Yes" response meant that the patient was considered non-dischargeable. Early discharge assessment for each patient started with the vital signs items and continued with the items on serious symptoms or conditions, the need for in-hospital medical interventions, and abnormal laboratory findings. The content validity of the checklist was assessed using the content validity ratio and the content validity index. For the content validity ratio calculation, ten experts in different medical specialties rated the necessity of each checklist item as "Essential", "Useful but unessential", or "Unessential". For the content validity index calculation, the same experts rated the simplicity, relevance, and clarity of each item on a four-point scale. All items had content validity ratios and indices of more than 0.62 and 0.8, respectively; hence, none of them were deleted [23].
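The decision rule and the content validity ratio lend themselves to a very small piece of code. The sketch below is only an illustration: the item keys are shortened labels rather than the checklist's official wording, and the ratio function implements the standard Lawshe formula on which the cited approach [23] is commonly based.

```python
# Minimal sketch of the EDAC decision rule: a patient is dischargeable only if
# every one of the 25 items is answered "No" (i.e. the finding is absent).
def is_dischargeable(answers: dict) -> bool:
    """answers maps each checklist item to True ("Yes"/present) or False ("No"/absent)."""
    return not any(answers.values())

# Standard Lawshe content validity ratio: CVR = (n_essential - N/2) / (N/2),
# where n_essential is the number of experts rating an item "Essential" and N is
# the number of experts (ten here, for which the usual critical value is about 0.62).
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Example: all 25 items answered "No" except one vital-signs item.
patient = {f"item_{i}": False for i in range(1, 26)}
patient["item_3"] = True                      # e.g. pulse rate > 100 beats per minute
print(is_dischargeable(patient))              # False -> non-dischargeable
print(content_validity_ratio(9, 10))          # 0.8 -> above the 0.62 threshold
```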
Phase II: Early Discharge Assessment Study Setting
The study was conducted in teaching and non-teaching hospitals affiliated to Alborz University of Medical Sciences, Karaj, the center of Alborz province, Iran. Alborz province is located twenty kilometers west of Tehran, the capital of Iran, and has six counties with a total population of 2,712,400 people. The province neighbors three provinces with high disaster rates (Tehran, Qazvin, and Gilan) and lies on the transit route of more than fifteen provinces in Iran. Karaj, the center of the province, is the fourth most populated city in Iran, with a population of 1,615,218 people [24]. There are eleven public hospitals in the province, including four teaching and seven non-teaching hospitals, all of which are affiliated to Alborz University of Medical Sciences, Karaj, Iran. At the time of the study, three hospitals had a low bed occupancy rate and one hospital was a maternity hospital; these four hospitals were not included in the study. Finally, seven hospitals were studied, including three teaching and four non-teaching hospitals (Table 1). All 396 patients who were hospitalized in the general wards of these hospitals during the study were recruited through census. Based on the comments of the experts who participated in checklist development, patients in pediatric, psychiatric, and burn care wards, coronary care units, and pediatric, neonatal, and adult intensive care units were not included in the study due to their special care needs.
Data Collection
Necessary data were collected using a demographic and clinical characteristics questionnaire and the EDAC. The questionnaire contained items on patients' age, gender, marital status, educational level, hospitalization ward, length of hospital stay, and medical diagnosis based on the tenth edition of the International Classification of Diseases [25]. This questionnaire also included one item on patient hospitalization status at 48 hours after early discharge assessment, with two possible responses: "Has been discharged" and "Still hospitalized". The second data collection instrument was the EDAC, which assessed patient dischargeability using 25 items in the four aforementioned domains. For data collection in each hospital, one day of the week was randomly selected using a table of random numbers. We then visited the intended hospital on the selected day, started data collection, and continued until all patients in its general wards had been selected and assessed. Data collection was performed from October 2017 to February 2018. Each patient's hospitalization status was re-assessed 48 hours after early discharge assessment.
Data Analysis
The collected data were analyzed using the SPSS software (v. 22.0). Results were presented using the measures of descriptive statistics such as mean, standard deviation, frequency, and percentage. The relationships of dischargeability with patients' demographic characteristics were assessed through the Chi-square or the independent-sample t tests. The independent-sample t test was also used to compare dischargeable and non-dischargeable patients respecting the length of their hospital stay on the data collection day. All statistical analyses were performed at a significance level of less than 0.05.
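The analyses described here correspond to standard routines. Purely as an illustration, the sketch below reproduces the two kinds of test with invented numbers using SciPy; the study itself used SPSS (v. 22.0), and none of the values below are study data.

```python
from scipy import stats

# Illustrative 2x2 table: dischargeability (rows) versus 48-hour hospitalization
# status (columns); the counts are invented for the example.
table = [[45, 20],   # dischargeable: discharged / still hospitalized
         [15, 80]]   # non-dischargeable: discharged / still hospitalized
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square test p-value: {p:.3f}")

# Independent-sample t test comparing length of stay (days) between the two
# groups, again with invented values.
dischargeable_los = [2, 3, 4, 3, 5, 2, 4]
non_dischargeable_los = [6, 8, 5, 7, 9, 6, 10]
t_stat, p = stats.ttest_ind(dischargeable_los, non_dischargeable_los)
print(f"t test p-value: {p:.3f}")
```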
Ethical Considerations
This study was approved by Shahid Sadoughi University of Medical Sciences, Yazd, Iran (with the code of IR.SSU.SPH.REC.1395.129). Necessary permissions for the study were also obtained from Alborz University of Medical Sciences, Karaj, Iran.
All patients were ensured about the confidentiality of their data and their informed consents were obtained.
Results
In total, 396 patients hospitalized in general hospital wards were studied. They were mostly male (64.65%) and married (69.94%), more than one third of them had below-diploma education (34.6%), and more than one third were aged over 54 (38.64%) (Table 2). Moreover, around half of the patients were hospitalized in general surgery wards (49.75%), and the most common health problem among them was musculoskeletal disorders (24.75%). Among the 396 studied patients, 137 were dischargeable (34.6%) and 259 were non-dischargeable (65.4%). The mean hospital stay among all patients, and among dischargeable and non-dischargeable patients, was 4.8±6.3, 3.5±3.4, and 5.5±7.4 days, respectively. The mean hospital stay among dischargeable patients was significantly shorter than among their non-dischargeable counterparts (p<0.001). At the 48-hour follow-up assessment, 132 patients had been discharged (33.3%), while 264 patients were still hospitalized (66.7%). The Chi-square test showed a significant relationship between patient dischargeability and 48-hour hospitalization status (p=0.001). In other words, the initial assessment revealed that one third of patients were dischargeable, and the 48-hour follow-up assessment revealed that one third of patients had been discharged. The same statistical test also indicated a significant relationship between dischargeability and hospital ward, so that patients in cardiology wards were more often dischargeable while patients in orthopedic wards were less often dischargeable (p=0.005; Table 3). However, dischargeability had no significant relationships with patients' characteristics (Table 2).
Discussion
In this study, a checklist for early discharge assessment, called the EDAC, was developed by a panel of experts for the first time in Iran. The checklist was then used to identify dischargeable patients and assess the feasibility of increasing HSC. It is important to note that the EDAC is a short and simple-to-use instrument with simple Yes/No items which assesses patient dischargeability in the four main domains of abnormal vital signs, serious symptoms or conditions, the need for in-hospital medical interventions, and abnormal laboratory findings. The main findings of the present study were the early dischargeability of around one third of patients who were hospitalized in general hospital wards and the actual discharge of one third of them within 48 hours after our initial assessment. Study findings showed that 34.6% of hospitalized patients in general hospital wards were dischargeable during a hypothetical disaster. This means that, while there are many patients waiting for empty beds in normal conditions, an HSC of more than one third of the total hospital capacity can be created through early patient discharge. In line with our finding, an earlier study in the United States reported that one third of hospitalized patients were dischargeable within 24 hours [10]. Another study in the United States on the creation of HSC through early patient discharge reported that 44% of hospitalized patients did not need critical care and hence were dischargeable [19]. Similarly, a study in Iran indicated that the early transfer of patients with stable conditions from the emergency department to other hospital wards increased the capacity of the emergency department by 27.5% [17]. Compared with hospitalized patients, disaster victims have a greater need for hospital services; thus, early discharge of a large number of patients can significantly reduce the mortality rate in disasters. The dischargeability of around one third of the hospitalized patients in the present study implies that the hospitalization of some patients is unnecessary. Unnecessary hospitalization of some patients is due to different factors such as delays in the process of medical consultations, physicians' uncertainty about patient management, delays in performing some medical procedures, and the unavailability of some equipment for diagnostic and paraclinical studies such as computed tomography scanning and magnetic resonance imaging [26]. The presence of medical science students in teaching hospitals also contributes to unnecessary patient hospitalization, because some patients are kept hospitalized to teach their underlying conditions to students [27]. A study in Iran reported that medical factors, paraclinical factors, and hospital type (teaching or non-teaching) can affect length of stay in hospital [28]. Other studies in the United States also revealed unnecessary hospitalization, which was attributed to the following factors: presence of medical residents instead of medical specialists, postponement of procedures, and difficulty finding a bed in a skilled nursing facility [27,29]. These findings highlight the necessity of developing strategies for managing the factors behind unnecessary patient hospitalization, as well as a systems approach to the phenomenon, as has already been shown in other studies [30]. The resulting lower bed occupancy rate in hospitals before disasters paves the way for better management of disaster victims.
More focus on educational planning can also fill this gap, as shown in other related studies [31,32]. The other finding of the present study was that, 48 hours after the initial dischargeability assessment using the EDAC, one third of patients had been discharged from hospital. This finding confirms the reliability of the data obtained through the EDAC. In other words, in line with the EDAC data which revealed that one third of patients were dischargeable, the 48-hour follow-up assessment indicated that one third of patients had actually been discharged. The availability of quality out-patient and home-based care services can provide the opportunity for early discharge of a large number of patients in disasters [3,19,29]. According to Hogg, implementing hospital-based care at home can reduce hospital bed occupancy in disasters [33].
Limitations and Strengths of the Study
A limitation of the study was the exclusion of patients in critical care units and pediatric, psychiatric, and maternity wards. Therefore, more studies need to be conducted to estimate the dischargeability or transferability of patients in critical care units in order to create surge capacity in these units. In this study, no patients were actually discharged; therefore, future studies should be carried out on the real discharge of low-risk patients under controlled conditions, with tracking of any untoward events. The strength of the study was the early discharge assessment of all patients in general wards in different hospitals, which expands the generalizability of the study findings to other hospitals in other cities. In conclusion, the current study shows the feasibility of the early discharge of one third of patients hospitalized in general hospital wards in order to increase hospital surge capacity during disasters. The actual discharge of one third of patients within 48 hours after the initial assessment highlights that the EDAC can provide reliable data regarding the dischargeability of patients in general hospital wards. The application of appropriate methods, such as the one introduced in this study, for the identification of low-risk patients can help decision makers in the health system to estimate available beds in disasters. Ultimately, early discharge can significantly increase hospital capacity without any need to develop other resources. | 2019-05-12T13:27:47.135Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "bcc2d18163a6a45c717376abf71bd8ea5357530e",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc6555210?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf7f3f4c0d674939f8c3fd26f4fd0bec3a4ee1ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16675475 | pes2o/s2orc | v3-fos-license | The Effect of Perioperative E-Health Interventions on the Postoperative Course: A Systematic Review of Randomised and Non-Randomised Controlled Trials
Background E-health interventions have become increasingly popular, including in perioperative care. The objective of this study was to evaluate the effect of perioperative e-health interventions on the postoperative course. Methods We conducted a systematic review and searched for relevant articles in the PUBMED, EMBASE, CINAHL and COCHRANE databases. Controlled trials written in English, with participants of 18 years and older who underwent any type of surgery and which evaluated any type of e-health intervention by reporting patient-related outcome measures focusing on the period after surgery, were included. Data of all included studies were extracted and study quality was assessed by using the Downs and Black scoring system. Findings A total of 33 articles were included, reporting on 27 unique studies. Most studies were judged as having a medium risk of bias (n = 13), 11 as having a low risk of bias, and three as having a high risk of bias. Most studies included patients undergoing cardiac (n = 9) or orthopedic surgery (n = 7). All studies focused on replacing (n = 11) or complementing (n = 15) perioperative usual care with some form of care via ICT; one study evaluated both types of interventions. Interventions consisted of an educational or supportive website, telemonitoring, telerehabilitation or teleconsultation. All studies measured patient-related outcomes focusing on the physical, the mental or the general component of recovery. 11 studies (40.7%) reported outcome measures related to the effectiveness of the intervention in terms of health care usage and costs. 25 studies (92.6%) reported at least an equal (n = 8) or positive (n = 17) effect of the e-health intervention compared to usual care. In two studies (7.4%) a positive effect on any outcome was found in favour of the control group. Conclusion Based on this systematic review, we conclude that in the majority of the studies e-health leads to similar or improved clinical patient-related outcomes compared to face-to-face-only perioperative care for patients who have undergone various forms of surgery. However, due to the low or moderate quality of many studies, the results should be interpreted with caution.
Introduction
In recent years e-health interventions have become increasingly popular in medical care [1; 2].
On the one hand, this is because there is a growing demand for electronic technologies in society; the development of these technologies gives people the opportunity to obtain information and to self-manage all types of activities in daily living, including their health [3]. On the other hand, e-health may also prove to be of great benefit to health care. It may help to deliver more patient-centered care and to involve patients more in their own treatment. Better patient engagement is a crucial factor for improving quality of care and can lead to increased patient safety. It has the potential to motivate people and to turn them into more active and effective managers of their own health [4]. For this reason, e-health interventions are also broadly applied in perioperative care [5; 6]. They are used pre-operatively with the aim of preparing patients in the best possible manner for surgery or of speeding up recovery post-operatively [7][8][9]. Educational or supportive websites are frequently used for this purpose. In addition, many e-health interventions are used intra-operatively, for example tools to assist the surgeon during surgery or simulation interventions for educating trainee surgeons [10; 11]. Finally, in the post-operative course, e-health devices or programs are broadly applied to assist patients in their recovery process [12; 13]. This is also delivered by educational or supportive websites, but several other types of e-health interventions have been developed: for example, telemonitoring, in which patients are monitored from a distance, or telerehabilitation, in which patients are supported by e-health devices in their recovery process instead of within a rehabilitation center or conventional physiotherapy sessions. Finally, e-consultations are applied rather than the standard postoperative consults.
E-health interventions focusing on recovery are an important topic since literature shows that recovery after surgery takes much longer than expected [14][15][16][17]. Given the growing number of surgeries per year, it is important that we find a way to support these patients in their recovery process. There are two different reasons to use e-health in perioperative care. The first one is to optimise the recovery process by providing additional care. This is evaluated by patient-related outcome measures such as satisfaction, pain or functioning. Another reason to apply e-health interventions is to substitute the usual care by some form of e-health, with the aim of delivering more efficient care. This is evaluated by outcome measures such as costs or health care usage.
Many studies have been carried out to evaluate the potential benefit of e-health interventions on the postoperative course, focusing on a wide range of surgery types, interventions and outcome measures. However, until now, no systematic review of these e-health interventions has been carried out to report the effectiveness of these types of intervention compared to more conventional perioperative care. Therefore we conducted a systematic review with the objective of evaluating the effect of perioperative e-health interventions on the postoperative course, including both randomised and non-randomised controlled trials.
Eligibility Criteria
Studies fulfilling the following inclusion criteria were included.
Type of Studies. We included controlled studies, containing both randomised and nonrandomised comparative studies. Studies which did not include a control group drawn from the same population were excluded. The studies must have been written in English.
Type of Participants. Participants of 18 years and older, undergoing any type of surgery were considered.
Type of Interventions. Studies were included if they evaluated any type of e-health interventions. We used the definition of e-health which was defined by Paglari et al: "eHealth is an emerging field of medical informatics, referring to the organization and delivery of health services and information using the Internet and related technologies" [19]. We defined related technologies as modern technologies such as mobile apps or tele-monitoring. Interventions consisting of audiotapes or telephone calls were not considered. We only included studies in which the intervention started before surgery or within the four weeks after surgery.
Type of Outcome Measures. We included studies with all types of patient-related outcome measures, including costs, with a focus on the period after surgery. Health outcomes specific to the type of surgery and outcome measures related to knowledge or education were not considered.
Information Sources
A systematic literature search was performed by RO and EM in the bibliographic databases PubMed, Embase.com, the Cochrane Library (via Wiley) and CINAHL (via EBSCO) from inception until the 2nd of December 2015.
Search
Search terms expressing e-health were used in 'AND' combination with search terms comprising the operative period. Search terms included controlled terms (e.g. MeSH in PubMed and Emtree in Embase) as well as free text terms. We used free text terms only in The Cochrane Library. The full search strategies for all the databases can be found in S1 Text. The selected studies were checked for related citations in PubMed and cross-references.
Study Selection
Two reviewers (EM and FS) independently screened the records that were produced in the search. First, titles were screened according to the inclusion criteria. Second, the abstracts of the remaining records were screened for inclusion. The full text of the remaining articles was reviewed by both reviewers. Thereafter, a third reviewer (JA) was consulted when there was disagreement between the first two reviewers about the inclusion or exclusion of articles. The final decision was based on consensus between the three reviewers. When articles were identified that reported on the same study, initially only the parent study was included. Such articles were included as separate articles when they reported relevant outcome measures or subgroup analyses with results in line with the aim of this review.
Data Collection Process
One reviewer (EM) extracted the data using a data extraction form which was developed by the authors, based on the Cochrane Consumers and communication Review Group's data extraction template [20]. A second reviewer (FS) checked the extracted data. Disagreements were discussed and when necessary a third reviewer (JA) was consulted. Authors were contacted in the case of missing data.
Data Items
Data were extracted from each included study on: 1) specific study characteristics (authors, year of publication, geographic location, study design and number of participants) 2) characteristics of the study participants (in-and exclusion criteria, reason for surgery (benign or malign), type of surgery, age, gender) 3) type of intervention (type, moment of commencement (before surgery, during hospitalisation or during or shortly after discharge), duration of the intervention) 4) type of control group and 5) outcome (type of outcome measure, methods of assessing outcome measures, timing of assessing outcome measures, follow-up duration)
Assessment of Risk of Bias in Included Studies
Risk of bias of the individual studies was assessed by using the Downs and Black scoring system [21]. This item scoring list was adapted slightly by the authors of this review, in a similar way to previous reviews (S1 Form) [22; 23]. We changed the answering options of item 27, 'Did the study have sufficient power to detect a clinically important effect where the probability value for a difference being due to chance is less than 5%?'. We defined the answering options as 'Yes' when a power calculation was performed and there was sufficient power, 'No' when a power calculation was performed but the power was not reached or a subsample was drawn from another study, and 'UTD' when there was no report of a power calculation. The maximum score for this adapted list was 27 points. Two reviewers (EM and FS) independently judged the risk of bias of the included studies. Furthermore, the two reviewers discussed the items which were not judged the same, until they reached consensus. We defined the following three quality score classifications: good (21-27), fair (14-20) and poor (lower than 14).
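A trivial helper illustrating this classification, assuming the adapted total score (maximum 27) has already been computed for a study:

```python
def downs_black_category(total_score: int) -> str:
    """Map an adapted Downs and Black total score (maximum 27) onto the three
    quality classes used in this review."""
    if total_score >= 21:
        return "good"
    if total_score >= 14:
        return "fair"
    return "poor"

print(downs_black_category(18))  # "fair"
```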
Quantitative Analysis
Due to heterogeneity in terms of type of surgery, type of intervention, type of outcome measures and study design it was not possible to conduct a meta-analysis. Instead, we aimed to present a descriptive overview of the different studies including their characteristics and results.
Results of the Search
The literature search yielded 3779 records (Fig 1). Seven additional articles were identified by screening the selected studies for cross-references and related citations in Pubmed. Duplicates were removed and the titles of the remaining 2633 records were screened. After reviewing the abstracts of the remaining articles, 189 records were excluded because they did not meet the inclusion criteria. The full text of the remaining 81 articles was examined, which resulted in 33 articles fulfilling the inclusion criteria of this review, reporting on 27 unique studies (six articles reported other outcome measures or subgroup analyses of one of the included studies) (Table 1). Design of the Included Studies. Of the 27 included studies, most (n = 22) were randomised controlled trials; of these trials, three had a non-inferiority design. The remaining five studies were prospective or retrospective controlled studies. Almost all studies had two arms (intervention and control), except for one study with three arms [24]. Duration of follow-up varied from 24 hours [25] up to 12 months [26]. Studies were executed in 12 different countries; most of them in the USA (n = 11), followed by four in Canada. The mean number of participants per study was 130 (range [27; 28]). Table 1 footnotes: quality score classification fair (14-20) and poor (lower than 14); + = significant difference in favour of the intervention group regarding at least one outcome measure; - = significant difference in favour of the control group regarding one outcome measure; x = no significant difference between groups regarding all outcome measures.
Study Characteristics
Participants. Most studies (n = 9) included patients undergoing cardiac surgery, accompanied by seven studies which involved orthopaedic surgery. The indication for surgery was in most studies benign (n = 23); only two studies included patients undergoing surgery because of a malignant indication only [29; 30] and two studies included both [24; 31]. The mean age of the participants varied from 43.2 years [32] to 75.3 years [33]. Most studies included both male and female patients, except for one study [32] which included patients undergoing gynaecological surgery.
Type of Interventions. All studies focused on replacing (n = 11) or complementing (n = 15) perioperative usual care by or with some form of care via ICT. One study evaluated both by using two intervention arms [24]. We categorised the methods into four categories according to the main aim of the intervention: 1. An educational or supportive website or device (ESW) to provide information about the surgery and the recovery process, to give positive reinforcement or to provide a tailored rehabilitation program in addition to the usual perioperative care: 12 studies [8; 25; 29; 32-40].
2. Telemonitoring (TM) through electronic questionnaires or an electronic symptom alert system, in or outside the hospital: eight studies [24; 28; 30; 31; 41-44]. In three studies this took place inside the hospital in the form of robotic telerounding, and in five studies the telemonitoring took place outside the hospital through electronic symptom questionnaires or monitoring of vital functions. In one study [28] this was part of an enhanced discharge planning intervention, and one of these studies also provided audio-video sessions. 3. Telerehabilitation (TR), in which postoperative rehabilitation was delivered or supported remotely through e-health devices: six studies. 4. Teleconsultations (TC), used instead of a face-to-face consult with the surgeon in the decision process on whether or not to perform surgery: one study [49].
Type of Outcome Measures. The outcome measures were classified into three categories: outcomes regarding the physical component of the postoperative course, outcomes related to the psychosocial or mental component of the recovery process, and general outcome measures in relation to the recovery process. Seven of the included studies also reported on outcome measures specific to the type of surgery or intervention, for example cardiovascular risk factor modification adherence, or outcomes measuring the function or condition of the shoulder or knee [26; 27; 30; 34; 45-47]. One study reported on patient knowledge about surgery and recovery [40]. The results of these outcome measures were not considered in this review.
Risk of Bias in Included Studies
11 studies were judged as having a low risk of bias, 13 studies as having a medium risk of bias, and three studies as having a high risk of bias. Five items were scored positively by a notably low number of studies: whether an attempt was made to blind the patients (n = 2) or the caregivers (n = 5), whether adverse events were reported (n = 8), whether the study had sufficient power to detect a clinically important effect (n = 9), and whether compliance with the intervention was reliable (n = 10) (S1 Table).
Outcomes 17 studies (63.0%) reported a significant effect in favour of the intervention group regarding at least one of the reported outcome measures (Table 1). Eight studies (29.6%) reported no significant differences between the groups. Two studies (7.4%) found an effect in favour of the control group, but one of these studies also found a positive effect with regards to the intervention group relating to one outcome measure.
In total, 12 studies evaluated an ESW intervention. In eight studies (66.7%) a significant difference in favour of the intervention group was observed. In the eight studies in which a TM intervention was evaluated, a significantly positive effect was found in four studies (50.0%). Moreover four out of six studies (66.7%) reported a positive effect of a TR intervention. The only study which evaluated a TC intervention found a significant difference with regards to the intervention group. 11 out of the 15 studies that evaluated an intervention in addition to usual care found a significant difference between groups in favour of the intervention group (73.3%). Of the 11 studies that evaluated an intervention which substituted the usual care, six found a positive effect (54.5%). Table 2 shows the overall results of the positive or negative effects for the different types of reported outcome measures.
1. Outcomes Regarding the Physical Component of the Postoperative Course.
1.1 Physical Functioning. In Table 3, the results of the 10 studies reporting physical functioning scores are presented. Regarding physical functioning, six studies showed significant changes between groups in favour of the intervention group [27; 29; 32-34; 46]. Four of these studies used the SF-36 as a measuring instrument [27; 32-34]; two studies used other questionnaires. One study used a self-developed quality of life questionnaire with five physical functioning subscales [29]. Of these five subscales, the physical self-efficacy subscale showed a significant difference 6 weeks and 3 months after surgery, whereas the general physical complaints and perceived abilities in swallowing and food intake only showed a significant difference 6 weeks after surgery. One study reported a significant difference in the absolute mean change of the Patient-Specific Functional Scale [46]. Of these six studies, four were rated as having a medium risk of bias and two a low risk. All of these studies (mainly) focused on the period after discharge, with four studies evaluating an ESW intervention. Moreover, only one study started prior to surgery [32]; the other five studies started at the moment of discharge or one week afterwards.
One study (n = 170) with a medium risk of bias reported no difference in effect between groups for physical functioning; however, it reported an increase in scores in both groups compared to baseline values, which was significant for all subscales only in the intervention group (TR) [26]. The remaining three studies showed no difference in effect between groups for any of the subscales [28; 35; 38]. One of these was the large, medium risk of bias study of Barnason 2009 [35], in contrast to the two earlier studies of Barnason [33; 34] in which a positive effect of the intervention was reported. The study from 2009 was very similar to the previous two studies by these researchers; however, it had a bigger sample and a longer follow-up duration.
1.2 Physical Activities. Two ESW studies measured physical activities using an activity diary and RT3 accelerometer. One large study (n = 232, medium risk of bias) reported a significant change in estimated energy expenditure measured by the RT3 accelerometer; the control group showed higher scores three weeks after surgery when compared to the intervention group [35]. One medium risk of bias study (n = 49) reported no difference in effect for this outcome measure between groups [38].
1.3 Pain-related Outcome Measures. Nine studies measured pain scores (Table 4), of which three reported a positive effect in the postoperative pain score for the intervention group. These studies all vary based on the type of surgery, type of intervention and duration and timing of the interventions. Five studies presented no significant differences in pain scores between groups [25; 37; 42; 45; 46]. Two of these were non-inferiority studies [45; 46]. One study (n = 40, high risk of bias) reported significantly higher pain levels in the intervention group [41]. In this study the assessment of postoperative pain was evaluated. The intervention group (TM) responded by mobile phones and the control group by paper-based questionnaires. Two low risk of bias, ESW studies measured analgesic consumption or requirements [25; 37]. One such study (n = 64) reported no change in effect between the intervention and the usual care group [25]. The second study (n = 60) observed significantly more use of opioid medication in the intervention group than in the control group [37]. However, regarding pain interference with daily activities, a positive effect of the intervention was found for pain interference with breathing/coughing 3 days after surgery. A similar positive influence was found for the intervention group in pain interference on appetite on day 7 after surgery compared to the control group.
1.4 Postoperative Symptoms or Problems. Five studies reported problems or symptoms in the postoperative course, which were not rated as complications [8; 30; 33; 39; 51]. In a particular study (sub analyses on female patients, n = 45, low risk of bias) a positive effect was observed for one out of ten symptoms; patients who received daily sessions with a telehealth device (ESW intervention) reported significantly lower fatigue scores than patients who received usual care six weeks after CABG surgery [51]. The other studies (three ESW interventions and one TM intervention) reported no significant differences in symptom scores between groups.
1.5 Complications. Three studies reported complications during follow-up. No study found a higher incidence of complications in the intervention group. One study (n = 170, medium risk of bias) reported more difficulties in the control group than in the intervention group (TR) during follow-up [26]. Two other studies [28; 31] reported no differences between groups. One of these studies was a non-inferiority study in which complications were the primary outcome measure [28].
2. Outcomes Related to the Psychosocial or Mental Component of the Recovery Process.
Psychosocial Functioning.
There were nine studies that described the psychosocial functioning subscales of quality-of-life questionnaires (Table 5). Of these, four found significant differences between groups that favoured the intervention group with regards to one or more subscales [27; 29; 32; 34]. Two of these were small studies (n = 35 and n = 22) with a medium risk of bias and reported a positive effect of the intervention on the vitality subscales of the SF-36 [27; 34]. In one such study there was a positive effect of the intervention on the Mental Health subscale [34]. The third study which reported an effect was a study with 184 participants undergoing head and neck cancer surgery and with a medium risk of bias. In this study a positive effect was reported on 2 out of 17 mental health subscales (anxiety and fear related to head and neck problems) 6 weeks after discharge [29]. The fourth study which reported an effect in psychosocial functioning was a low risk of bias study with 215 participants [32]. A positive effect of the intervention (ESW) was reported for the mental component of the SF-36.
2.2 Anxiety, Depression and Emotions. In total, four studies measured mental health recovery with instruments other than quality-of-life questionnaires. Two of these, both with a low risk of bias, measured anxiety with the S-STAI or the HADS, respectively [25; 43]; no differences in anxiety scores at follow-up were found. One study measured anxiety about recovery after an ESW intervention with a self-developed questionnaire [40]; participants in the intervention group were significantly less anxious about their recovery. Depression was measured with the CASD-10, which also showed no significant differences between groups [43]. Postoperative emotion scores were measured in a low-risk-of-bias study with 147 participants undergoing orthopaedic surgery [53]; no difference in emotions was found between the ESW intervention and the control group.
2.3 Self-efficacy and Autonomy. One study (n = 48, medium risk of bias) measured functional autonomy in patients undergoing total knee arthroplasty using the SMAF [47; 48] and reported no difference in effect between the two groups after two months. Self-efficacy was measured in a small study (n = 35, medium risk of bias) with the Barnason Efficacy Expectation scale 6 weeks and 3 months after CABG surgery [34]; the intervention group (ESW) reported significantly higher adjusted mean scores across time compared to the usual care group.
3. General Outcome Measures in Relation to the Recovery Process.
3.1 General Quality of Life. In total, 11 studies used quality-of-life measurements. Nine of them were presented earlier in this review because they reported separate physical or psychosocial outcomes. However, two relatively small studies measured quality-of-life total scores for patients undergoing total knee arthroplasty and found no difference in effect [46; 47]. Furthermore, one study (n = 215, low risk of bias) also measured the total score of the SF-36, in addition to the separate physical and mental component scores, and reported a positive effect of the intervention (ESW) [32].
3.2 Satisfaction. Four out of six studies that compared overall satisfaction with the treatment between groups found a significant difference in effect in favour of the intervention group [24; 30; 40; 49]. Five studies [25; 30; 40; 41; 46] evaluated patient satisfaction, with a particular focus on the intervention, without measuring the control group. They all reported that patients were very satisfied.
3.3 Length of Recovery. Only one study (n = 215, low risk of bias) compared the return-to-work rate between the two groups [32]. It reported a significant difference of nine days in return to work attributable to the intervention (ESW). The study also measured recovery with a validated recovery questionnaire (RI-10), which showed no difference between the groups.
3.4 Health Care Usage. In all, six studies measured health care usage in the postoperative period, but there were important differences in the sources of health care use that the studies evaluated. Four studies measured the number of visits to the physician. Two medium-risk-of-bias studies (n = 50 and n = 232) reported no significant differences between groups [33; 35], whereas two studies reported significantly more visits in the control group [43; 49]. In one of these (n = 62, high risk of bias) this was not a surprising finding, since the intervention consisted of a teleconsult instead of a regular hospital visit (TC intervention) [49]. One study measured the number of physiotherapy sessions during a TR intervention [27]; the telemedicine group received a greater number of treatments than the control group, but it was not described whether this difference was statistically significant or clinically relevant. The three studies that reported the number of emergency department visits found no significant differences between groups [33; 35; 43], nor did the four studies that reported the number of hospital readmissions [33; 35; 43; 44].
3.5 Length of Hospital Stay. Four studies measured hospital length of stay [27; 28; 31; 44]. Only one of these (n = 379, medium risk of bias) reported a positive effect of a home monitoring program (TM intervention) on length of stay after pacemaker implantation (3.2 days, SD 3.2, vs 4.8 days, SD 3.7) [28].
3.6 Costs. Five out of six studies reported on direct and indirect health care costs [26; 28; 48; 54; 55]. The majority (n = 4) included the extra costs of the intervention itself [26; 48; 54; 55]. Only one trial reported the cost-effectiveness of the intervention, calculating the incremental cost-effectiveness ratio (ICER) in relation to the effect on physical activity [55]. One further trial reported only the estimated cost savings based on hospital length of stay [44]. Only two studies reported a positive effect on costs [26; 48]; for one of these the effect depended on the travel distance between the patient's residence and the hospital. For the other three studies no difference in costs was measured between the two groups [48]. All were large studies (at least 147 participants) but with a high [44] or medium risk of bias [26; 28; 48; 54; 55].
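For readers unfamiliar with the metric, the incremental cost-effectiveness ratio (ICER) mentioned above is conventionally defined as the difference in mean costs divided by the difference in mean effects between the intervention and its comparator; the general form below is the standard definition, not a formula reported by the cited trial.

$$\mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{usual care}}}{E_{\text{intervention}} - E_{\text{usual care}}}$$

where C denotes the mean costs per group and E the chosen effect measure (in the trial above, the effect on physical activity).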
Main Findings
In this systematic review we evaluated the effect of complementing or substituting care with perioperative e-health interventions on the postoperative course, based on the results of 27 included studies. There was a large diversity among studies with regard to type of patients, interventions and outcome measures. Twenty-five studies (92.6%) reported at least an equal (n = 8) or a positive effect (n = 17) of the e-health intervention compared to usual care; in the remaining two studies (7.4%) a positive effect was observed in favour of the control group. Most studies evaluated an ESW intervention. There were no considerable differences in effectiveness between the different types of e-health interventions, and no association was found between the aim of the intervention (addition of care or substitution of care) and its effectiveness. The largest group of studies (n = 9) included patients undergoing cardiac surgery; of these, seven studies (77.7%) found a positive effect for the intervention group on one or more of the reported outcome measures. Seven studies included patients undergoing orthopaedic surgery, three of which (42.9%) reported a positive effect. These populations are, however, difficult to compare, as there was a wide diversity in the types of e-health interventions evaluated in the two groups.
We categorised the outcome measures reported in the different studies into physical, mental and general outcome measures. Overall, the results for these outcome measures were comparable, which suggests that there were no specific differences in the effect of e-health interventions across the different types of postoperative outcome measures.
As well as categorising outcome measures into physical, mental and general, another categorisation could have been made: outcome measures focusing on the additional value of e-health interventions for patients' wellbeing (such as physical or mental functioning, pain and satisfaction) and outcome measures focusing on the efficiency of e-health interventions (such as health care use and costs). The second category of outcome measures was used notably less often in the selected studies (n = 11). Of these, six studies reported on costs; these were all relatively large studies, but all were considered to have a medium risk of bias. Two studies observed a positive effect on costs [26; 48].
Strengths and Limitations
This is the first systematic review published on the use of e-health in perioperative care. Another strength of this review is its methodological quality, ensured by following the PRISMA guidelines for systematic reviews [18]. We conducted a very broad literature search and carefully evaluated the different types of search terms that could possibly be used. Owing to the wide range of inclusion criteria, we were able to report a broad overview of the potential health benefits of applying e-health interventions in various types of perioperative care.
A potential limitation may be the exclusion of four non-English publications that could have been relevant within the scope of our review. Another potential limitation is that we did not use all possible search terms within our search strategy, because adding the extra term 'surgery' produced an enormous amount of literature, yielding another 4405 titles. After screening the first 500 hits, which brought only two extra relevant hits not retrieved in our initial search, we decided to use the cross-references and related citations of the included studies instead. However, we cannot exclude that we missed studies because of this procedure. Most of the included studies were judged to be at a medium risk of bias. In this assessment, five items were met by a notably low number of studies: whether any attempt was made to blind the patients or the caregivers, whether adverse events were reported, whether compliance with the intervention was measured reliably, and whether the study had sufficient power to detect a clinically important effect. We considered all five items to be important risk factors for introducing bias. Although we understand that blinding of patients and caregivers is difficult in this type of study, measuring compliance and adverse events should be an integral part of this type of research. In addition, the fact that only nine studies performed a power calculation and included enough patients requires the results of this review to be interpreted with caution.
Another limitation is that it was not possible to conduct a meta-analysis, owing to heterogeneity in the type of surgery, type of intervention and follow-up period. Finally, we did not report disease- or surgery-specific health outcomes, as the aim of our review was to give a broad overview of the application of e-health interventions in general perioperative care. This may, however, have under- or overestimated the effect of the e-health interventions on recovery.
Comparison with Other Studies
Our results are in line with those of various systematic reviews focusing on the effects of complementing or substituting care with e-health in general medical care. Flodgren et al. (2015) published a Cochrane systematic review on the effectiveness of e-health for professional practice and health care outcomes [56]. They concluded that the use of e-health leads to health outcomes that are at least similar to usual care and may improve health care. Ekeland et al. (2010) evaluated the effect of telemedicine interventions in general medical care in a systematic review [57]. They included 80 systematic reviews, of which 21 concluded that e-health was effective, while in 18 the evidence was limited and inconsistent. These results are in line with our review, in which 17 out of 27 studies (63%) reported a positive effect for one or more outcome measures. In our review, only limited study data related to the cost-effectiveness of e-health were included. In line with this, de la Torre-Díez et al. (2015) published a review on the cost-utility and cost-effectiveness of telemedicine in general medical care and concluded that there is a lack of good-quality cost-effectiveness studies [58]. Eland-de Kok et al. (2011) systematically reviewed the effects of e-health on care for chronically ill patients [59]. They concluded that the use of e-health leads to moderately positive effects on primary health outcomes, but again noted a lack of cost-effectiveness studies [56].
Conclusion and Clinical Relevance
Based on this systematic review we conclude that e-health interventions aiming to complement or substitute perioperative care with educational websites, telemonitoring interventions, telerehabilitation programs and teleconsultations probably improve clinical patient outcomes compared with conventional face-to-face perioperative care for patients who have undergone various forms of surgery. There is, however, a lack of good-quality (cost-)effectiveness studies among those included in this review, with only a limited proportion of studies reporting that they performed a power calculation, measured compliance, or recorded the occurrence of adverse events. For the future, we strongly recommend high-quality cost-effectiveness studies to provide more evidence for practitioners and policymakers on whether or not they should implement e-health interventions in perioperative care.
"year": 2016,
"sha1": "db9d4f97fb3a8712b86a5e813615b83e060e7282",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0158612&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db9d4f97fb3a8712b86a5e813615b83e060e7282",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The importance of early diagnosis and views on newborn screening in metachromatic leukodystrophy: results of a Caregiver Survey in the UK and Republic of Ireland
Metachromatic Leukodystrophy (MLD) is a rare, autosomal recessive lysosomal storage disorder caused by a deficiency of the enzyme arylsulfatase A (ARSA). MLD causes progressive loss of motor function and severe decline in cognitive function, leading to premature death. Early diagnosis of MLD provides the opportunity to begin treatment before the disease progresses and causes severe disability. MLD is not currently included in newborn screening (NBS) in the UK. This study consisted of an online survey and follow-up semi-structured interviews, open to MLD patients or caregivers aged 18 years and over. The aims of the study were to understand the importance of early diagnosis and to establish the views of families and caregivers of patients with MLD on NBS. A total of 24 patients took part in the survey, representing 20 families (two families had two children with MLD, one family had three children with MLD). Following on from the survey, six parents participated in the interviews. Our data showed that the diagnostic delay from first symptoms was between 0 and 3 years, with a median of 1 year (n = 18); during this time deterioration was rapid, especially in earlier onset MLD. In patients with late infantile MLD (n = 10), 50% were wheelchair dependent, 30% were unable to speak, and 50% were tube fed when a diagnosis of MLD was confirmed. In patients with early juvenile MLD (n = 5), over half used a wheelchair some of the time, had uncontrollable crying, and had difficulty speaking (all 60%) before or at the time of diagnosis. A high degree of support for NBS was expressed among caregivers: 95% described it as very or extremely important and 86% believed detection of MLD at birth would have changed their child's future. One parent expressed their gratitude for an early diagnosis as a result of familial MLD screening offered at birth and how it had changed their child's future: "It did and it absolutely has I will be forever grateful for his early diagnosis thanks to his older sister." The rapid rate of deterioration in MLD makes it an essential candidate for NBS, particularly now that the first gene therapy (Libmeldy™) has been approved by the European Medicines Agency. Libmeldy™ has also been recommended as a treatment option in England and Wales by the National Institute for Health and Care Excellence (NICE) and is being made available to patients in Scotland via the Scottish Medicines Consortium's ultra-orphan pathway.
Background
Metachromatic Leukodystrophy (MLD) is a rare, autosomal recessive lysosomal storage disorder caused by a deficiency in the enzyme arylsulfatase A (ARSA) [1,2], which leads to the accumulation of sulfatides in both the central nervous system and peripheral nervous system [3]. This build-up of storage material causes a progressive loss of gross motor function, severe decline in cognitive function, loss of speech, seizures, muscle spasms, incontinence and ultimately leads to premature death [1].
At present, NBS is offered to every newborn baby in the UK. The heel prick blood spot test is performed when the infant is 5 days old and tests for nine rare and serious health conditions. These are sickle cell disease, cystic fibrosis, congenital hypothyroidism, and six inherited metabolic diseases: phenylketonuria (PKU), medium-chain acyl-CoA dehydrogenase deficiency (MCADD), maple syrup urine disease (MSUD), isovaleric acidaemia (IVA), glutaric aciduria type 1 (GA1) and homocystinuria (pyridoxine unresponsive) (HCU) [4]. In some areas of England, NBS for severe combined immunodeficiency (SCID) is also being trialled [4]. Early diagnosis of these disorders within the first weeks of a child's life provides the opportunity to start treatment at an early age and can prevent disease progression, severe disability or even death, and for some conditions, it allows children access to treatments for which they only qualify at an early age. Currently, MLD is not included in NBS in the UK, and there is a lack of information on the acceptability of NBS from the perspective of those directly affected by the disorder.
The prevalence of MLD is estimated at 1.1 cases per 100,000 live births in the EU [5]. In the UK, the incidence rate is estimated at 1 in 40,000 live births [6]. The clinical phenotype of MLD is due to the global and progressive loss of myelin throughout the nervous system, which leads to a broad range of neurological symptoms [3]. MLD is heterogenous in terms of age of onset, initial symptoms, and progression of symptoms [2]. The exact definition of the different subtypes may vary slightly between sources, although broadly speaking, the most common subtype, late infantile MLD, occurs in the first two years of life [2,7,8], and accounts for 40-60% of cases [9]. Children develop symptoms after an initial period of normal development [2]. Symptoms include difficulty walking, loss of speech, muscle weakness and cognitive decline, with the disorder progressing rapidly and death usually occurring between the ages of 5 and 8 years old [6,9]. Juvenile MLD is often divided into early juvenile and late juvenile forms. Early juvenile MLD accounts for 20-35% of cases [9], children develop symptoms from approximately 3 years of age and disorder progression is less rapid than late infantile [2,7,8]. Children develop tremor and muscle rigidity as the disorder progresses, ultimately losing the ability to walk, with death occurring within 10 to 20 years [8,9]. Late juvenile MLD presents at a later age [2,7,8], onset is typically around the age of puberty, with behavioural issues such as aggressiveness, loss of inhibition, lack of judgment and disorientation developing first [2]. The adult subtype is the rarest form of the disorder, and the decline in cognitive abilities may be slow and difficult to recognize [8]. In both the late juvenile and adult forms, cognitive and behavioural issues often prevail before loss of motor function [10].
Diagnosis of MLD is challenging due to the broad spectrum of symptoms and their overlap with other diseases and conditions [2]. MLD is currently diagnosed by biochemical testing using mass spectrometry to quantify sulfatides in dried blood and urine spots [11][12][13]. Magnetic resonance imaging (MRI) further provides evidence of MLD by a characteristic tigroid pattern in the central white matter [2,14,15]. Genetic testing finally establishes the diagnosis of MLD by distinguishing between the three ARSA alleles that result in low ARSA enzyme activity. This is important because both pathogenic ARSA variants and variants that cause ARSA pseudodeficiency exist, and low residual enzyme activity is not always indicative of MLD [2]. Therefore, to avoid misdiagnosis, a combination of biochemical, MRI and genetic testing is required to verify a diagnosis of MLD.
Currently, most management is focused on palliative care, although haematopoietic stem cell transplantation (HSCT) is available [7]. HSCT remains controversial and clinical data suggest that it may only be beneficial in the early stages of disease in late-onset patients [16,17]. A phase 2 trial investigating the efficacy of intrathecal replacement of recombinant ARSA has been completed (Clinical trials identifier: NCT01303146) and a gene therapy (Libmeldy™) for the treatment of late infantile or early juvenile forms, without clinical manifestations of MLD, was approved by the European Medicines Agency in 2020. The US Food and Drug Administration also granted Regenerative Medicine Advanced Therapy (RMAT) designation to Libmeldy™ in 2020 [18]. Libmeldy™ has been recommended as a treatment option by the National Institute for Health and Care Excellence (NICE) for eligible children in England and Wales, and is being made available to patients in Scotland via the Scottish Medicines Consortium's ultra-orphan pathway.
The current study was a collaboration between three patient organisations that support patients and their families with MLD: the Society for Mucopolysaccharide Diseases (MPS Society), the MLD Support Association UK and the ArchAngel MLD Trust. The aims of the study were to understand the importance of early diagnosis by establishing the progression of disease from first symptoms to diagnosis, and to determine the views of families/caregivers of patients with MLD on newborn screening for this disorder.
Recruitment
Members of three patient organisations, the MPS Society, the MLD Support Association UK and the ArchAngel MLD Trust, were invited by email and telephone to participate. To be eligible, parents/caregivers/patients had to be aged ≥ 18 years and resident in the UK or Republic of Ireland. Participants had to be the parent or caregiver of a living or deceased person with a confirmed diagnosis of MLD and be able to provide informed consent to participate. Parents or caregivers with more than one child with MLD were asked to complete a separate questionnaire (and interview, if applicable) for each child. The results reported here are part of a larger study examining the burden of disease and the patient and caregiver experience in MLD, which will be reported elsewhere.
Survey questionnaire and interviews
The online survey used a specifically designed questionnaire covering demographics, diagnosis, symptoms and disease progression, burden of illness, treatment, and NBS. Consent was sought at the start of the online survey and additional consent was sought for participation in the follow-up interviews. Questions were presented as multiple choice where possible, with free text to include additional information not covered by the answer options. The online surveys were completed between 28 August and 18 October 2020.
Respondents who had completed the online survey were eligible to take part in the in-depth interviews. A semi-structured interview guide was developed by the patient organisations in collaboration with RDRP, designed to explore in more detail the items raised in the online survey. All interviews were conducted over the telephone with the same member of the MPS Society's patient services team and took place between 29 September and 21 October 2020. Responses were analysed using an inductive thematic content approach with the Qualitative Data Analysis (QDA) software NVivo. Data were aggregated and remained anonymous, and no personally identifiable data were collected. Participants could decline to answer any question and were able to stop the interview at any point. Permission specifically to use quotes from the recordings was sought, and participants could indicate any content that could not be cited. Analysis of the online survey results and interview transcripts was undertaken by RDRP. This research was conducted in accordance with the British Healthcare Business Intelligence Association's Legal & Ethical Guidelines for Market Research [19].
Disease progression
In the online survey, respondents were asked to indicate the presence or absence of symptoms at various time points to gain an understanding of the progression of MLD. Time points used to capture symptoms included: first symptoms, at time of diagnosis, current symptoms or in the final stage of disease if deceased. Symptoms were presented as multiple-choice lists under the following headings with options to add other symptoms as free text: mobility; skeleton, muscles, joints; eyesight and hearing; behaviour; learning and understanding (cognitive); neurological; speech and communication; nutrition and eating; chest and respiratory, and bowels and bladder. The rate of progression was explored further in the interviews.
The definitions used for the MLD clinical subtypes were:
• Late infantile (symptom onset ≤ 2.5 years of age).
NBS
In the online survey, respondents were asked for their views on NBS. Respondents were asked questions on the information they had received about current NBS tests, and on the availability, interpretation, and outcome of results. Respondents were also asked what effect a positive result would have on their future reproductive choices and whether they would be willing to support an application for MLD to be added to the UK NBS programme.
Patient demographics
A total of 24 patients were included in the study, representing 20 families. This represents around half of all patients known to the patient organisations. Respondents were mostly parents of patients who were alive at the time of the survey (n = 21), with the remainder comprising bereaved parents (n = 2) and bereaved caregivers (n = 1). The median age of patients was 7.3 years, and three patients were deceased. One third (n = 8) of patients had a sibling with a confirmed diagnosis of MLD. Of the patients in the study, thirteen had late infantile MLD, six had early juvenile MLD, two had late juvenile MLD and three had adult onset MLD. In total, 58% (n = 14) were female and 88% (n = 21) were from England, with the remaining patients from the Republic of Ireland (8%, n = 2) and Northern Ireland (4%, n = 1).
Diagnostic delay and disease progression
Diagnostic delay was defined as the time from the first symptom to diagnosis of MLD. This was a qualitative measure reported by the parent or caregiver. Six patients were excluded from the analysis of diagnostic delay: four were diagnosed before symptoms appeared owing to the diagnosis of an older sibling, and for two the age at which first symptoms appeared was not reported. The median age of patients when symptoms first appeared was 2.8 years, and the median age of patients when diagnosed was 4.3 years (Table 1). Diagnostic delay was between 0 and 3 years, with a median of 1 year (n = 18); during this time deterioration was rapid, especially in earlier onset MLD (Table 2).
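To make the arithmetic behind these summary figures explicit, the short sketch below (a minimal Python example using hypothetical ages, not the survey data) computes each patient's delay as age at diagnosis minus age at first symptoms and then takes the median; it also illustrates why the median delay need not equal the difference between the median ages.

```python
from statistics import median

def diagnostic_delays(onset_ages, diagnosis_ages):
    """Per-patient diagnostic delays (years) from paired age lists."""
    return [dx - onset for onset, dx in zip(onset_ages, diagnosis_ages)]

# Hypothetical ages in years, chosen only for illustration (not the survey data).
onset = [1.0, 2.8, 3.5, 5.0, 10.0]
diagnosed = [1.0, 4.3, 4.0, 8.0, 11.0]

delays = diagnostic_delays(onset, diagnosed)
print("median age at first symptoms:", median(onset))   # 3.5
print("median age at diagnosis:", median(diagnosed))    # 4.3
print("median diagnostic delay:", median(delays))       # 1.0, not 4.3 - 3.5
```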
Late infantile MLD
Three patients with late infantile MLD were diagnosed before symptoms appeared due to diagnosis of an older sibling, and inconsistent answers regarding age at symptom onset and diagnosis were reported for one patient. These four patients were therefore excluded from the analysis of diagnostic delay. The median age of patients when symptoms first appeared was 1.3 years (n = 9). The median age of patients when diagnosed was 2.5 years (n = 9, Table 1). Diagnostic delay was between 0 and 2.3 years, with a median of 0.8 years (n = 9, Table 1), and deterioration was rapid ( Table 3, Late infantile).
While most children had met their early developmental milestones for speech and learning around two thirds had not achieved their walking milestones. The most common first symptoms included issues with walking (n = 9, Table 2), difficulty swallowing (n = 4, Table 2), hypotonia (n = 5, Table 4), and hypertonia (n = 4, Table 4). One parent described the many issues that were present from an early age: As a baby [Name] was floppy, breathing was a concern and had a poor suck and swallow making feeding hard. This was suggested to be due to an asymmetric jaw. He had a small fontanelle, which is what got us the initial appointment with the paediatrician. His head size was a concern and his fingers didn't always open. As time went on [Name] had pronated feet, making it difficult to stand and this opened up the door to physiotherapy, before he was two.
[Name] struggled to eat food, this was a long journey right from the beginning and alongside this speech was delayed.
Another parent described how issues with walking had been one of the first signs that something was wrong: She'd started to walk, but she wouldn't progress from walking holding onto things to walking independently. So, when she got to two years old, that's when we first went to the doctors, thinking that maybe something wasn't quite right, because she just wasn't going past that next stage.
Rapid disease progression was seen in the time taken to reach diagnosis. By the time of diagnosis, 50% (5/10) were wheelchair dependent, 30% were unable to speak, and 50% were tube fed ( Table 2). One parent describes how their child went from crawling up and down stairs to being completely bedbound over a period of six months from diagnosis: …we noticed that her walking even when holding onto things was then becoming more difficult for her. Mobility issues were followed by a rapid decline in speech and cognition for another child in a 9-12-month period, between the age of 2-3 years. As physical decline occurred before cognitive decline, the rapid loss of skills was particularly distressing for the child and parents. The child had deteriorated from being able to eat and drink independently, to total dependence on their parents. By the time of diagnosis, he was being fed by a naso-gastric tube and needed a gastrostomy at age 3 years. Another parent described the loss of skills that occurred before her child was diagnosed with MLD. The child had been pulling themselves up to stand, was crawling, and was quick at going upstairs. By the age of 2, the parent started to notice a decline and the child started using a walker. In less than a year, the child had stopped crawling, walking, speaking and eating.
[Table footnote a: includes one respondent who provided no answer for symptoms at diagnosis; the symptom reported as a first symptom was therefore assumed to also be present at diagnosis.]
Late infantile: Case study 1
• First symptoms were observed at 2 years old.
• The child did not progress from walking holding on to things to walking independently.
• The child saw various doctors and hip dysplasia was suspected. During this time the child was finding it more difficult to walk and was referred to a community paediatrician who sent for blood tests and MRI.
• Late infantile MLD was diagnosed at 2 years and 8 months old.
Interviewer: "So, between the first symptoms and when you got the diagnosis, had your daughter deteriorated any further?" Parent: "Yes. She had deteriorated more by that point. Her speech had slurred quite significantly, and she was dribbling excessively. And her sleep was really disturbed as well."
Late infantile: Case study 2
• First symptoms were observed at 1 year old.
• The child did not progress from walking holding on to things to walking independently.
• The child saw various doctors and parents noticed a decline at 18 months old.
• First referred to a community physiotherapist for walking issues, child was getting worse, after a long time was referred to a paediatrician.
• The child was misdiagnosed with hyperkalaemia and neuropathy.
• Genetic tests and initial MRI were inconclusive.
• MLD was suspected after the second MRI and genetic tests confirmed diagnosis three months later. • Late infantile MLD was diagnosed at 2 years and 6 months old. Interviewer: "What led them to do the MRI?" Parent: "We knew that he was getting worse, and we had always kind of had to speak up for [name] and had some disagreements with the team and what they thought, and we demanded that he be seen again, that he was regressing. And so, they did the MRI the second day and they discovered that there was white matter accumulating in the brain and in that respect when they looked at the first MRI, they also discovered that they should have seen it back then too. 18 months lost. " Late infantile: Case study 3 • First symptoms were observed at 3 months old.
• Parents were concerned that the child had a degenerative condition from an early age.
• The child first went to the GP with feeding issues where they saw a breastfeeding specialist who noted the child had an asymmetric jaw.
• Numerous visits to the GP for frequent chest infections, concerns over breathing at night and floppy baby were recorded. • This led to referral to paediatrician 1, who felt that feeding issues were due to reflux. • Further visits to the GP were made, the child was very ill for 6 weeks, sick at every feed, and had a temperature. • The child did not pass urine for 24 h and was then hospitalised with pneumonia. The parents asked for help as they felt that the first paediatrician was not listening to them. • This led to referral to paediatrician 2, who referred the child to physiotherapy. • The physiotherapist made some progress, but the child then started to regress. • The midwife noted delayed growth and motor skills at the "2-year check". Referred the child to a community paediatrician.
• The parents of the child talked through all their concerns with the community paediatrician, assessments were done, and the parents were told child was just delayed. • The mother persevered and asked for a test for muscular dystrophy, the child was referred to a neurologist. • The neurologist was concerned, tests were conducted, and a diagnosis was reached. • Late infantile MLD was diagnosed at 2 years and 6 months old. Parent: "We got rushed into the hospital and that' s where I met the second consultant, where I just broke down and said, I know he' s got a chest infection, but I think there' s more than this. I think there' s more to it than this and I feel like nobody' s listening to me. Like the doctor' s not listened to me, the other paediatrician didn't listen to me. And I feel like we just need some help. " Early juvenile: Case study 1 • First symptoms were observed at 5 years old. • The child had previously been bright, but had lost interest in reading, was becoming clumsy and had wet themselves a few times. Behaviour issues were also reported at school. • The GP thought the child might be having petit mal seizures. • The child's nursery teacher offered to assess them and could see there had been a significant change -she spoke to the doctor. • The doctor referred the child for a CT scan and a brain degenerative condition was confirmed. • The child had an MRI in July and by September was unable to walk. • Early juvenile MLD was diagnosed at 5 years and 6 months old. Parent: "I was quite often just shrugged off as a neurotic mother, I think. There was various things that just weren't adding up to me. Just little things. And our initial thoughts were that she wasn't settling in very well for school. She' d just started reception. And I had approached the school for help many times weekly. And probably on a weekly basis, I was in asking for her to be referred somewhere. And I was just constantly met with… Just made to be obviously neurotic, really. And she was just a naughty, difficult child. " In most cases, the diagnostic journey was long, with multiple referrals, doctors, and specialists required to eventually confirm MLD disease, often referred to as "diagnostic odyssey". In two patients, the deterioration between the first symptoms and diagnosis was extremely apparent (Table 3, Late infantile: case study 1 and 2). In the case of one child, first symptoms were observed at 3 months old, and the parents were concerned that the child had a degenerative condition from an early age. The child was seen multiple times by the GP with feeding issues, chest infections and concerns over breathing at night. The child was referred to a paediatrician who according to the parents, ultimately disregarded their concerns. Subsequent visits to the GP and hospitalisation led to a second paediatrician referral: The child was seen by a physiotherapist and then a midwife, who referred the child to a community paediatrician. Assessments were carried out and the parents were told that the child was just delayed. The mother persevered, and the child was referred to a neurologist. Finally, late infantile MLD was diagnosed at 2 years and 6 months ( Table 3 Late infantile: case study 3).
Early juvenile MLD
One child with early juvenile MLD was diagnosed before symptoms appeared due to the diagnosis of an older sibling and was excluded from the analysis of diagnostic delay, leaving a total of five children who were symptomatic before diagnosis. The median age of patients when symptoms first appeared was 5.0 years (n = 5), and the median age of patients when diagnosed was 6.0 years (n = 5, Table 1). Diagnostic delay was between 0 and 3 years, with a median of 1.2 years (n = 5), and during this time deterioration was rapid (Table 3, Early juvenile). All children had met their early developmental milestones for speech, learning, and walking. Initial symptoms included issues with walking, toileting, and learning/behavioural problems (Tables 2 and 4). At diagnosis, 60% (n = 3) were starting to use a wheelchair, 60% (n = 3) had difficulty speaking (Table 2), and 60% (n = 4) had uncontrollable crying (Table 4). One parent described the rapid progression from first symptoms that
Early juvenile: Case study 2
• First symptoms were observed at 3 years old.
• The child started tripping up.
• By age 4, the child would get frustrated trying to pull up a zip or put a lid on a pen.
• The GP reassured the parents that children just develop at different rates and that by age 5 all children have caught up.
• The parents went back to the GP with more symptoms, which were getting worse, including constant frustration and behavioural issues. The GP referred the child to a psychologist and a child development unit.
• A series of assessments were done, the school noted that the child's hands would shake when they picked up a pen. Dyspraxia was diagnosed and occupational therapy given.
• Parents were concerned about the hand tremor and had researched it and felt there could be a neurological issue. The GP referred them back to the child development unit. • The child development unit were resistant but agreed to do an MRI and blood tests. Some underdevelopment in myelination was found but they were told this was nothing to worry about. • The parents pushed for further investigation and MRI was sent to neurologist for review towards the end of the year. • Parents could not get in touch with the paediatrician to find out results, calls and emails were not answered. Diagnosis was finally given; paediatrician did not know about the disease and suggested the parents research it.
• Parents found out that it was metabolic and approached Great Ormond Street Hospital where confirmatory diagnostic testing was done. • Early juvenile MLD was diagnosed at 6 years old. Adult onset: Case study 1 • First symptoms were observed at 20-21 years old. • The patient was at university and had become forgetful. However, in hindsight with an understanding of MLD, there were some early signs from 17 years old. The patient achieved lower grades than expected in A-levels and showed signs of aggression.
• The patient was referred to a psychiatrist and investigated for schizophrenia and other possible causes. The trigger for diagnosis was when the patient could no longer tell the time.
• The parents pushed for further investigation and an MRI was done. • Adult onset MLD was diagnosed at 23 years old.
Parent: "If we' d got the diagnosis a year earlier, he would probably have been living independent life still, albeit supported. Because it was that last year, was really when the symptoms started to manifest. And it was obvious we couldn't leave him alone for any length of time. We had to monitor what was happening. He' d put a meal in the oven to cook and then go out. "
In one case, the first symptoms appeared at 5 years old. The child had previously been bright but had lost interest in reading, become clumsy, and had wet themselves a few times. The school had also reported behaviour issues. The mother of the child recounted how she felt: I was quite often just shrugged off as a neurotic mother, I think. There was various things that just weren't adding up to me. Just little things. And our initial thoughts were that she wasn't settling in very well for school. She'd just started reception. And I had approached the school for help many times weekly. And probably on a weekly basis, I was in asking for her to be referred somewhere. And I was just constantly met with… Just made to be obviously neurotic, really. And she was just a naughty, difficult child.
After the child's nursery teacher spoke to the doctor, the child had a CT scan, and a brain degenerative condition was confirmed. The child had an MRI in July and by September was unable to walk. Early juvenile MLD was diagnosed at 5 years and 6 months old (Table 3, Early juvenile: case study 1). In another case, the first symptoms appeared at 3 years old when the child began to fall over. A decline in motor function and issues with behaviour followed and after much perseverance from the parents to achieve a diagnosis, early juvenile MLD was finally diagnosed 3 years later (Table 3, Early juvenile: case study 2).
Late juvenile MLD
The two patients were symptomatic before diagnosis and some disease progression was observed in this period. The median age of patients when symptoms first appeared was 10.5 years and the median age of patients when diagnosed was 11.5 years, with a median diagnostic delay of 1 year (n = 2, Table 1). In late juvenile MLD, both patients had met all their early developmental milestones. Initially, 50% (n = 1) reported issues with walking (Table 2), 50% (n = 1) presented with hypertonia, and 50% (n = 1) had learning difficulties (Table 4). At diagnosis, both patients had started to lose the ability to walk and had learning issues. One patient had hypertonia, and one patient had memory and concentration issues (Tables 2 and 4).
Adult onset MLD
All three patients were symptomatic before diagnosis; however, age at first symptoms, age at diagnosis, and the resulting delay in diagnosis were only recorded for two patients. The median age of patients when symptoms first appeared was 22.5 years, and the median age of patients when diagnosed was 24.5 years, with a median diagnostic delay of 2 years (Table 1). One patient had not met their early developmental milestones, and initial symptoms were a change in behaviour and cognitive deterioration (Table 4). In the case of one patient with adult onset MLD, the first symptoms were observed at 20-21 years old. The parent described how the first symptoms appeared while their child was at university. Doctors initially thought the problem was psychiatric, and it was not until the patient lost the ability to tell the time that further tests were done. Adult onset MLD was diagnosed at 23 years old.
NBS: family views
Responses were received from all 20 families taking part in the survey. In one family, both the father and mother replied, giving 21 responses in total. The questions and responses are summarised in Fig. 1.
Information about NBS and the heel prick test
The majority (79%) of parents received information from healthcare professionals about the purpose of NBS. Only 2 parents (10%) were not able to recall their child's NBS heel prick test.
Interpretation of screening results
Approximately half of respondents were informed of the meaning of positive and negative results (53%), and just under half understood the possibility of obtaining a "false positive" result (47%).
Outcome of NBS results and effect upon reproductive choices
Most respondents (80%) considered an undetected case of MLD at birth as more harmful than a false positive screening result. When respondents were asked if NBS for MLD would have helped to inform their reproductive choices, 86% said that it would have helped to make an educated decision, whilst the remaining respondents said that it would not have affected their reproductive choices or they were too old by the time of diagnosis. The majority of respondents (86%) believed detection at birth would have changed their child's future. One parent described the torment of realising they were too late for medical intervention: Another respondent expressed their gratitude for an early diagnosis as a result of familial MLD screening offered at birth and how it had changed their child's future: It did and it absolutely has I will be forever grateful for his early diagnosis thanks to his older sister.
Three respondents with offspring who were diagnosed with adult onset MLD thought that detecting MLD at birth would not have changed their child's future. One respondent said: No because they lived a good life, went to school, got jobs, married and had families.
One parent felt that NBS for MLD would not have influenced their child's future as treatments were not yet available: Probably not as treatments to delay or prevent symptoms were not available until after his condition was already significantly degenerated.
Support for NBS
Overall, there was a high degree of support for NBS among caregivers, with 95% describing it as very or extremely important and 5% describing it as not at all important. Twenty out of 21 respondents were willing to support an application for MLD to be added to the UK NBS programme.
Discussion
Due to the rarity and severity of the disease, limited data on patients with MLD are available. Our qualitative study, involving parents and caregivers of patients with MLD, collected information on first symptoms, age of diagnosis and views on NBS. The variability of symptoms in MLD, coupled with the very low incidence rate, often means that the disease is misdiagnosed or diagnosed too late for patients to be considered for treatment [8]. Our survey and interviews revealed that it can take up to three years from the first symptoms to diagnosis, findings similar to those of a recent study, which reported a mean time from first symptom to diagnosis of 1.2 (0.3-7.1) years for late infantile MLD and 3.7 (0.2-6.8) years for juvenile MLD [1]. During this time patients often experience a rapid deterioration and loss of skills. Our study showed that this is particularly evident in the earlier onset forms of MLD, where substantial irreversible damage occurs within a period of months. In a recent study of 97 patients with MLD, all patients with motor involvement exhibited rapid disease progression regardless of the subtype [20]. The rate of progression was greater when motor symptoms were present at disease onset. In late juvenile and adult-onset patients, the course of the disease was as rapid as in the early onset forms when motor symptoms were present at disease onset [20]. Our study reported that patients with adult-onset MLD displayed cognitive and behavioural issues prior to loss of motor skills, in agreement with other studies [2,10]. Parents and caregivers expressed their frustration that their early concerns were not always taken seriously, and many visited several specialists before appropriate testing was performed. For the majority in this study, early diagnosis prior to symptoms appearing was only achieved because of the diagnosis of MLD in an older sibling. The benefits of early detection reach far beyond the patient, and early diagnosis would greatly reduce the emotional and mental toll on the family caused by the long and tumultuous diagnostic process [21]. Moreover, our data show that the knowledge NBS would provide would allow parents to make informed reproductive decisions in the future. Most respondents also felt that the benefits of an early diagnosis, such as early treatment and the choice to be included in clinical trials, far outweighed the potential impacts of receiving a false positive screening result.
PKU was the first inherited condition to be screened for in the UK. If the condition is diagnosed shortly after birth, irreversible damage can be avoided by prescribing a phenylalanine-restricted diet [22]. The health benefits were clearly apparent, and NBS for PKU was implemented in many countries worldwide [23]. Due to the success of PKU screening, and the availability of novel treatments, more inherited conditions have been added to NBS programmes over the years. Although NBS is available for six other inborn errors of metabolism in the UK, NBS for MLD is not included. In our survey, 90% (n = 19) of respondents felt that NBS was extremely important, and 86% (n = 18) thought that it would have helped to inform their reproductive choices. A recent systematic review of 36 studies, of which 12 were from the UK, suggested that NBS was poorly understood and that the potential impact of receiving a positive result was not considered by parents; in fact, most parents were unaware that screening had taken place [24]. For a disease to be part of an NBS programme, a set of ten screening criteria must be met [25]. These criteria have been in use since NBS began more than 50 years ago and have subsequently been modified due to advances in technology [26]. For NBS, "an accepted treatment for patients" is a criterion for diseases to be included. NBS would dramatically reduce the burden on patients, families and healthcare clinicians during the diagnostic period, and recent modelling has demonstrated the cost-effectiveness of NBS in other inborn errors of metabolism [27].
Recent breakthroughs in potential treatments offer some hope to patients and their families, but despite this progress, therapies are only beneficial in pre-symptomatic patients or those at very early stages of disease, emphasising the need for a rapid diagnosis [15]. Substantial progress in gene therapy provides much optimism for the treatment and management of MLD, and its availability will offer patients and families a vastly improved quality of life. Libmeldy™ is the first gene therapy approved for eligible patients with early-onset MLD. Eligible patients are characterised by biallelic mutations in the ARSA gene leading to a reduction of ARSA enzymatic activity, in children with late infantile or early juvenile subtypes, without clinical manifestations or with early clinical manifestations of the disease. Results demonstrated high levels of reconstituted ARSA activity in cerebrospinal fluid, arrested neurodegeneration, and a favourable safety profile [28]. Other treatments under investigation include intrathecal replacement of recombinant ARSA (Clinical trials identifier: NCT01303146) and AAV-mediated gene therapy, based on the direct multiple injection of ARSA-expressing viral vectors into the brain of patients (Clinical trials identifier: NCT01801709) [2,27,29]. Although there are reasons for optimism, the need for early diagnosis is apparent. The advent of gene therapy and enzyme replacement therapy for the treatment of several rare diseases, such as MLD, has opened the door for their inclusion on NBS panels.
Qualitative research provides real-world insight from the perspective of the subject. Although our study was small, due to the rarity of MLD, important insights on first symptoms, disease progression and views on NBS were ascertained through open-ended questioning. This allowed issues to be explored in detail and new ones identified. The inherent nature of this methodology highlights potential limitations, such as the requirement for parents and caregivers to remember when first symptoms developed retrospectively. This descriptive study relied upon individual memory and was not able to validate findings against medical records. This may be particularly important when reporting diagnostic delay, as the timings of when symptoms appeared can be subjective and should therefore be considered as approximations only. Finally, some respondents left gaps in the online survey when reporting symptoms at different timepoints, which in turn led to some variability in the data available for each patient/respondent. Despite these limitations, the results of our study provide a strong case for MLD to be included in the UK NBS panel.
Conclusion
Our data highlight the considerable delay from the appearance of first symptoms to MLD diagnosis and demonstrate the rapid deterioration of both motor and cognitive function during this time. The rapid rate of disease progression in MLD makes it an essential candidate for NBS, particularly now that the first gene therapy has been approved.
"year": 2022,
"sha1": "e77bfbb8401440782956bb138f357ee175e72dba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "4da4a0afd19605cd7d7b2ddeb622101e5f08c201",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interactions between the Nociceptin and Toll-like Receptor Systems
Nociceptin and the nociceptin receptor (NOP) have been described as targets for treatment of pain and inflammation, whereas toll-like receptors (TLRs) play key roles in inflammation and impact opioid receptors and endogenous opioids expression. In this study, interactions between the nociceptin and TLR systems were investigated. Human THP-1 cells were cultured with or without phorbol myristate acetate (PMA 5 ng/mL), agonists specific for TLR2 (lipoteichoic acid, LTA 10 µg/mL), TLR4 (lipopolysaccharide, LPS 100 ng/mL), TLR7 (imiquimod, IMQ 10 µg/mL), TLR9 (oligonucleotide (ODN) 2216 1 µM), PMA+TLR agonists, or nociceptin (0.01–100 nM). Prepronociceptin (ppNOC), NOP, and TLR mRNAs were quantified by RT-qPCR. Proteins were measured using flow cytometry. PMA upregulated ppNOC mRNA, intracellular nociceptin, and cell membrane NOP proteins (all p < 0.05). LTA and LPS prevented PMA’s upregulating effects on ppNOC mRNA and nociceptin protein (both p < 0.05). IMQ and ODN 2216 attenuated PMA’s effects on ppNOC mRNA. PMA, LPS, IMQ, and ODN 2216 increased NOP protein levels (all p < 0.05). PMA+TLR agonists had no effects on NOP compared to PMA controls. Nociceptin dose-dependently suppressed TLR2, TLR4, TLR7, and TLR9 proteins (all p < 0.01). Antagonistic effects observed between the nociceptin and TLR systems suggest that the nociceptin system plays an anti-inflammatory role in monocytes under inflammatory conditions.
Introduction
Nociceptin and the nociceptin receptor (NOP) share high homology with opioid ligands and the classical opioid receptors, respectively, but have distinct profiles. Nociceptin has diverse effects in different species and pain models, depending on site of injection and dosage, with hyperalgesic effects at the supraspinal level and antinociceptive effects in the periphery [1][2][3][4]. An immunomodulatory role of nociceptin and NOP in circulating blood was described [1,[5][6][7][8][9]. However, how the nociceptin system is regulated and which mechanisms are involved has not been fully elucidated.
Toll-like receptors (TLRs) are a family of pattern recognition receptors identified as important mediators during inflammation and pain processing [10,11]. TLR signaling provoked pain, and specific TLR-knockout mice showed attenuation of nociception compared to the wild type in various neuropathic pain models [11,12]. TLRs are abundantly expressed in human peripheral blood leukocytes and play a central role in immune response. Peripheral blood mononuclear cells (PBMC) of patients suffering from chronic pain demonstrated increased responsiveness to TLR ligands in vitro [13].
There is evidence that activation of µ-opioid receptors modulates TLR signaling, and opioids activate TLR2 or TLR4 signaling [14][15][16]. Furthermore, LPS stimulation increased the release of endogenous opioids from human monocytes in vitro [17]. Whereas many studies have focused on the crosstalk between the TLR system and opioids, no data on the interactions between nociceptin and TLRs are currently available.
The human monocyte-like THP-1 cell line is derived from the peripheral blood of a childhood case of acute monocytic leukemia. In our preliminary experiments, constitutive expression of nociceptin, the nociceptin receptor, and TLR (TLR2, TLR4, TLR7, and TLR9) proteins was determined in THP-1 cells. In addition, higher basal intracellular TLR7 and TLR9 protein levels were detected in THP-1 cells compared to human monocytic MM6 cells. We hypothesized that specific TLR signaling contributes to the regulation of nociceptin and the nociceptin receptor under inflammatory conditions, and that the nociceptin system influences TLR expression.
Cell Line Screening and Dose-Response Experiments
In our previous studies, phorbol myristate acetate (PMA), a potent proinflammatory activator, was the only immune activator with an upregulating effect on nociceptin in human monocytic MM6 cells and peripheral blood leukocytes [18,19]. To examine the impact of PMA on mRNA expression of nociceptin and NOP in different cell lines, MM6, THP-1, U937, and HL-60 cells were screened. Flow cytometry analysis showed that both nociceptin and NOP proteins could be detected in each cell line. To investigate the impact of PMA on nociceptin and NOP expression in these cell lines, cells were cultured with or without PMA 5 ng/mL for 24 h (Sigma-Aldrich, Buchs, Switzerland), and mRNA expression of nociceptin precursor (prepronociceptin, ppNOC) and NOP were detected. The PMA concentration used for screening was based on our previous studies [18,19].
As a result of the screening experiment, THP-1 cells were chosen and then cultured with or without various concentrations of PMA (0.01-100 ng/mL) for 24 h, and ppNOC and NOP mRNA levels were quantified. Based on these dose-response experiments, PMA 5 ng/mL was used in the subsequent experiments.
Co-Stimulation of THP-1 with PMA and TLR Agonists
To examine the contribution of TLR signaling to the regulation of nociceptin and NOP under inflammatory conditions, cells were cultured with or without 10 µg/mL of LTA, 100 ng/mL of LPS, 10 µg/mL of IMQ, or 1 µM of ODN 2216, with or without 5 ng/mL of PMA, for 24 h. The concentrations of the TLR agonists were based on previous studies [18][19][20][21] and the current cell viability assay. Samples were collected after 24 h, and cells were prepared for RNA isolation or quantification of protein levels by flow cytometry.
Stimulation of THP-1 with Nociceptin
To investigate the effects of nociceptin on TLRs, cells were cultured with or without 0.01-100 nM of exogenous nociceptin. After 24 h, cell surface TLR2 and TLR4 proteins, as well as intracellular TLR7 and TLR9 proteins, were measured.
RNA Isolation, cDNA Synthesis and Relative Quantification
Total RNA was isolated using a high pure RNA isolation kit following the manufacturer's protocol (Roche, Rotkreuz, Switzerland). RNA concentrations and purity were measured using a NanoDrop 2000 (Thermo Scientific, Reinach, Switzerland). Subsequently, cDNA was synthesized (Transcriptor High Fidelity cDNA Synthesis Kit, Roche, Rotkreuz, Switzerland), and ppNOC, NOP, and TLR mRNAs were detected (Table 1). Reference genes hypoxanthine phosphoribosyl-transferase 1 (HPRT1) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were selected as internal controls. RT-qPCR reactions were performed in duplicate in 384-well plates by a LightCycler ® 480 (Roche, Rotkreuz, Switzerland) using 5 µL of 2× LightCycler ® 480 Probes MasterMix and 0.5 µL RealTime ready Assay in a final volume of 10 µL. cDNA prepared from human SK-N-DZ cells served as a calibrator.
Standard curves were generated separately for each target gene and reference gene, using serial dilutions of cDNA template known to express the gene of interest in high abundance. mRNA levels were analyzed using the advanced relative quantification module of the LightCycler ® 480 software (Version 1.5, Roche, Rotkreuz, Switzerland). Target gene mRNA levels were computed based on the qPCR amplification efficiency and the crossing point difference, and calculated as the ratio of the target gene/HPRT1 and GAPDH of each sample normalized to the calibrator used in the PCR reactions (normalized ratio).
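The relative quantification just described can be illustrated with a small script. The sketch below assumes a simple efficiency-corrected crossing-point calculation in which the two reference genes are combined by a geometric mean; it is an illustration only, not the LightCycler software's exact algorithm, and all function and variable names are ours.

```python
import math

def normalized_ratio(cp_target_sample, cp_target_calibrator,
                     cp_refs_sample, cp_refs_calibrator,
                     eff_target=2.0, eff_ref=2.0):
    """Efficiency-corrected relative quantification (illustrative only).

    cp_* are crossing-point (Cq) values; cp_refs_* are lists holding the
    values for the reference genes (here HPRT1 and GAPDH). eff_* are
    amplification efficiencies (2.0 means perfect doubling per cycle).
    """
    # Relative amount of the target gene, sample versus calibrator
    target = eff_target ** (cp_target_calibrator - cp_target_sample)
    # Relative amounts of the reference genes, combined by geometric mean
    refs = [eff_ref ** (c_cal - c_smp)
            for c_smp, c_cal in zip(cp_refs_sample, cp_refs_calibrator)]
    ref_geo_mean = math.prod(refs) ** (1.0 / len(refs))
    # Normalized ratio: target normalized to references and to the calibrator
    return target / ref_geo_mean

# Hypothetical Cp values for ppNOC in one sample versus the calibrator cDNA
print(normalized_ratio(28.1, 30.0, [22.3, 19.8], [22.0, 19.9]))
```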
Measurement of Cell Membrane Proteins
Cells were collected by centrifugation (300× g, 5 min) at 4 °C and washed with ice cold PBS. To reduce the possible non-specific binding background, cells were suspended with 10% of human serum and incubated for 20 min on ice (Sigma-Aldrich, Buchs, Switzerland).
For NOP staining, 10 µL of cell suspension was transferred to a 96-well U-bottom plate (TPP, Trasadingen, Switzerland) and treated with anti-NOP mAb (Sigma-Aldrich, Buchs, Switzerland) or isotype-control antibody (BD Biosciences, Allschwil, Switzerland) at a final concentration of 5 µg/mL with 50 µL hypotonic saponin solution (50 µg/mL of saponin, 130 mM of sucrose, 50 mM of KCL, 50 mM of sodium acetate, 20 mM of HEPES with DI water at 3:2 ratios, all from Sigma-Aldrich) for 5 min on ice [22]. After washing with permeabilization buffer (PBS with 1% FBS, 1% saponin, 1% sodium azide, all from Sigma-Aldrich), cells were incubated with anti-NOP mAb or isotype-control antibody (BD Biosciences, Allschwil, Switzerland) at a final concentration of 5 µg/mL for 1 h on ice. Subsequently, cells were washed three times with permeabilization buffer and stained with 1 µg/mL of PE-conjugated secondary antibody (Thermo Scientific, Reinach, Switzerland) for 1 h on ice in the dark. After the staining, cells were washed and fixed with 1% paraformaldehyde (PFA, Sigma-Aldrich, Buchs, Switzerland).
To stain cell surface TLR, cells were incubated with PE-labelled anti-TLR2, anti-TLR4 mAb, or isotype-control antibody (BD Biosciences, Allschwil, Switzerland) at a final concentration of 1 µg/mL for 1 h on ice in the dark, washed and fixed in 1% PFA.
Measurement of Intracellular Proteins
Intracellular staining of nociceptin was performed as previously described [18,19]. Briefly, cells were fixed with 1.5% PFA, permeabilized (BD Cytofix/Cytoperm™ Kit, BD Biosciences, Allschwil, Switzerland), and stained with anti-nociceptin antibody (Phoenix Pharmaceuticals, Karlsruhe, Germany) or isotype-control antibody (Abcam, Cambridge, UK) at a final concentration of 5 µg/mL for 1 h at room temperature (RT). Samples were washed three times and stained with 1 µg/mL of PE-conjugated secondary antibody (BD Biosciences, Allschwil, Switzerland) for 1 h at RT in the dark. Cells were then washed and suspended in staining buffer.
As for intracellular TLR7 and TLR9, cells were prepared using the BD Cytofix/Cytoperm TM Kit according to the protocol, and stained with PE-conjugated anti-TLR7, anti-TLR9 mAb, or the respective isotype-control antibodies at a final concentration of 1 µg/mL for 1 h at RT in the dark (Thermo Scientific, Reinach, Switzerland). Subsequently, they were washed and suspended in staining buffer. Cells with the same preparation but without any staining were used as negative controls to determine the autofluorescence. All flow cytometric measurements were performed using a CytoFLEX S Flow Cytometer (Beckman Coulter Life Sciences, Krefeld, Germany). A total of 10,000 cells in the gated population were recorded per sample. Mean fluorescence intensity, which represents the expression level of the target proteins, was calculated using the FlowJo V10 software (TreeStar Inc., Ashland, OR, USA).
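To make explicit how the reported values relate to the untreated controls, the following sketch normalizes a sample's MFI to its matched untreated control, with an optional background subtraction; the background step is our assumption for illustration and may differ from the exact FlowJo analysis.

```python
def relative_mfi(mfi_stimulated, mfi_untreated, mfi_background=0.0):
    """Return the MFI of a stimulated sample relative to its untreated control.

    mfi_background may hold an isotype-control or autofluorescence MFI;
    subtracting it is an assumption made here purely for illustration.
    """
    signal = mfi_stimulated - mfi_background
    control = mfi_untreated - mfi_background
    if control <= 0:
        raise ValueError("control MFI must exceed the background value")
    return signal / control

# Hypothetical values: PMA-treated versus untreated intracellular nociceptin
print(relative_mfi(1250.0, 830.0, mfi_background=110.0))
```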
Statistical Analysis
Statistical analysis was performed using STATISTICA 10.0 (StatSoft, Inc., Tulsa, OK, USA). Data are presented as box-and-whisker plots showing medians, interquartile range (IQR), 10-90 percentiles, and mean. Group comparisons used the Kruskal-Wallis test with Dunn's post hoc test, and the Wilcoxon test with correction for multiple testing where applicable. p < 0.05 was considered statistically significant.
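A minimal sketch of the comparison strategy, written with SciPy and statsmodels rather than STATISTICA, is given below; the data are placeholders, Dunn's post hoc test is omitted for brevity, and the exact correction method used in the original analysis may differ.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Placeholder data: relative MFI values for three hypothetical groups
groups = {
    "control": rng.normal(1.0, 0.10, 18),
    "PMA": rng.normal(1.6, 0.20, 18),
    "PMA+LPS": rng.normal(1.1, 0.15, 18),
}

# Global comparison across groups (Kruskal-Wallis)
h_stat, p_global = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")

# Paired comparisons against control (Wilcoxon), corrected for multiplicity
pvals = [stats.wilcoxon(groups[name], groups["control"]).pvalue
         for name in ("PMA", "PMA+LPS")]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for name, r, p in zip(("PMA", "PMA+LPS"), reject, p_adj):
    print(f"{name} vs control: adjusted p = {p:.4f}, significant = {r}")
```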
Nociceptin and NOP Expression in Different Cell Lines
The basal levels of ppNOC mRNA in MM6, THP-1, U937, and HL-60 were below the detection limit, whereas intracellular nociceptin proteins could be detected. NOP was constitutively expressed in these cell lines at mRNA and protein levels ( Figure 1A,B). PMA at 5 ng/mL significantly upregulated ppNOC mRNA in MM6, THP-1, and HL-60 cells after 24 h, compared to untreated controls. The highest upregulating effect of PMA on ppNOC mRNA was observed in THP-1 cells ( Figure 2A). As for NOP, PMA upregulated its mRNA expression in MM6 and U937 cells after 24 h, compared to controls ( Figure 2B). According to these preliminary results, THP-1 was chosen for use in the subsequent experiments.
Dose-Dependent Effects of PMA
PMA dose-dependently upregulated ppNOC mRNA in THP-1 cells after 24 h. Compared to untreated controls, ppNOC mRNA levels were upregulated either by PMA 1 ng/mL or PMA 10 ng/mL. Maximum ppNOC mRNA expression was seen in THP-1 cells stimulated with 10 ng/mL PMA ( Figure 3A). As for NOP, a slight upregulation of NOP mRNA expression was observed only in the samples treated with PMA 1 ng/mL for 24 h ( Figure 3B). Based on these results, PMA 5 ng/mL was used in the subsequent experiments.
Effects of TLR Agonists on the Nociceptin System
To examine effects of TLR signaling on nociceptin and NOP expression, TLR agonists specific for TLR2 (LTA), TLR4 (LPS), TLR7 (IMQ), or TLR9 (ODN 2216) were employed. RT-qPCR data showed that none of these TLR agonists had an impact on ppNOC mRNA level. However, an increase of intracellular nociceptin was measured in all TLR agonist groups, compared to controls (all p < 0.05). PMA upregulated ppNOC mRNA (Figure 6) as well as intracellular nociceptin protein levels (Figure 7A,B) after 24 h, compared to controls (both p < 0.0001). The TLR2 agonist, LTA, and the TLR4 agonist, LPS, completely abolished PMA's upregulating effects on ppNOC mRNA and intracellular nociceptin proteins, compared to the samples treated with PMA only (all p < 0.05). In addition, ppNOC mRNA expression in PMA+IMQ and PMA+ODN 2216 was decreased to 24.2 (20.3-29.1)% and 82.9 (70.1-95.8)%, compared to the PMA group (both p < 0.05) (Figure 6 and Table 2). In contrast, IMQ and ODN 2216 had no antagonistic effects on intracellular nociceptin upregulation by PMA (Figure 7B). TLR agonists, PMA, and PMA+TLR agonists had no impact on NOP mRNA (Table 2). An increase in cell membrane NOP proteins was detected in the cells stimulated with LPS, IMQ, ODN 2216, or PMA, compared to controls (Figure 7C). No changes of cell membrane NOP proteins were observed in the samples co-stimulated with PMA+TLR agonists, compared to the PMA group (Figure 7C).
Flow cytometric analysis of intracellular nociceptin (B) and membrane NOP protein levels (C). Flow cytometry data are presented as mean fluorescence intensity (MFI) related to the respective untreated groups. Boxplots with individual data points, median, IQR, and mean "+"; n = 18. Wilcoxon test with correction for multiple testing. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
Table 2. Effects of TLR agonists on the regulation of ppNOC and NOP mRNA expression by PMA in THP-1 cells.
Effects of Activation of the Nociceptin System on TLR Expression
To investigate the contribution of the nociceptin system to TLRs, cells were cultured with or without different concentrations of nociceptin (0.01-100 nM) for 24 h. Flow cytometry analysis revealed that nociceptin dose-dependently suppressed TLR2, TLR4, TLR7, and TLR9 proteins. Cell surface TLR2 was attenuated by the highest concentration of nociceptin, compared to controls (p < 0.05 Figure 8A). Suppression of TLR4 was observed in the cells cultured with nociceptin within a larger range of concentrations (0.1-100 nM) ( Figure 8B). Nociceptin at amounts of 1-100 nM downregulated intracellular TLR7 and TLR9 proteins ( Figure 8C,D).
Flow cytometry data are presented as mean fluorescence intensity (MFI) related to the corresponding untreated groups (controls). Boxplots with individual data points, median, IQR, and mean "+"; n = 12. Wilcoxon test with correction for multiple testing. *, p < 0.05; **, p < 0.01, compared to the respective controls.
Discussion
This study focuses on the interactions between the nociceptin and TLR systems in human monocytic THP-1 cells. The results show that TLR signaling prevents the upregulation of nociceptin by PMA, and nociceptin suppresses TLR protein expression.
Nociceptin and the nociceptin receptor proteins are constitutively expressed in THP-1 cells and regulated either by PMA or TLR agonists in the present study. Specific TLR agonists prevented PMA's upregulating effects on ppNOC mRNA and intracellular nociceptin protein. Nociceptin dose-dependently decreased cell surface TLR2 and TLR4 as well as intracellular TLR7 and TLR9 proteins. These results support previous findings that nociceptin and NOP are regulated in blood leukocytes under inflammatory conditions and play a regulatory role during immune response [6][7][8]19,23,24].
Published evidence suggests that nociceptin plays an immune regulatory role. However, detailed information on the mediator/receptor systems involved is lacking. Clinical data indicated that nociceptin can be detected in human synovial fluid and plasma, with lower levels in the synovial fluid. However, only extracellular nociceptin levels were measured [25]. Another study showed that nociceptin mRNA was expressed in human polymorphonuclear neutrophils (PMN), with nociceptin evoking PMN chemotaxis and recruitment [26]. The effects of nociceptin on monocytes/macrophages still have to be elucidated. In our previous study, regulation of nociceptin and the nociceptin receptor by inflammatory mediators (LPS, cytokines) in human peripheral blood cells was observed [8].
In addition, PMA significantly upregulated nociceptin at mRNA and protein levels in human monocytic MM6 cells as well as in peripheral blood leukocytes [18,19]. Moreover, solid evidence from preclinical and clinical studies confirms that compounds targeting the nociceptin system are effective therapeutic approaches for substance abuse and potential candidates for pain management [27][28][29][30][31]. Previous studies mainly focused on the role of nociceptin and NOP in neural tissues; much less is known about their functions in blood immune cells. To the best of our knowledge, no data on the interactions between TLRs and the nociceptin system in a monocytic cell line have been published up to now.
PMA-Induced THP-1 Model
THP-1 cells have been widely used to study immune response and signaling pathways, and activated THP-1 cells provide an alternative to peripheral blood monocyte models [32]. THP-1 cells were chosen in the present study because constitutive expression of nociceptin, NOP, and TLR proteins could be detected with higher basal TLR7 and TLR9 protein levels, compared to MM6 cells. Moreover, a more pronounced upregulation of ppNOC mRNA by PMA was observed in THP-1 cells, compared to MM6 cells.
In contrast to cell lines, ex vivo whole blood cells can demonstrate cross-talk between different blood cells, can interact with blood components, and may only represent a single blood donor, which may lead to misinterpretation of results. Therefore, PMA-induced THP-1 cells seem to be a suitable model to study the effects of the nociceptin system on TLRs, and conversely the effects of TLR's on nociceptin and the nociceptin receptor.
Effects of TLR Signaling on the Nociceptin System
Clinical data have shown aberrant expression of nociceptin and NOP in blood of patients suffering from pain and inflammatory disease [1,[5][6][7]23,24]. However, mechanisms underlying their regulation still need to be identified.
The cross-talk between opioids and TLRs has been discussed previously [11,14,33,34]. Suppression of TLR4 mRNA by morphine in mouse RAW cells and peritoneal macrophages were reported [35]. In another study, TLR4 signaling acted as a transient counter-regulator for inflammatory pain in vivo and increased the release of endogenous opioids from human monocytes in vitro [17]. In addition, there was evidence that TLR-antagonistic drugs may attenuate opioid-induced side effects [36].
The present data demonstrate that the basal level of ppNOC mRNA in THP-1 cells was below the detection limit, whereas intracellular nociceptin protein could be detected. These results are in line with previous findings, indicating low nociceptin mRNA expression in resting human peripheral blood neutrophils and storage of preformed nociceptin protein in the cells [37].
In the current model, PMA significantly upregulated nociceptin, both at mRNA and protein levels. Activation of TLR2 or TLR4 signaling completely blocked the PMA-mediated increase in ppNOC mRNA as well as intracellular nociceptin protein levels. In addition, TLR7 and TLR9 agonists partially prevented PMA's upregulating effects on ppNOC mRNA expression.
Interestingly, in a rat model of neuropathic pain, TLR2 and TLR4 antagonists produced analgesia and improved the analgesic effects of buprenorphine [33]. In addition to the wellknown dual interaction with mu and kappa opioid receptors [38,39], buprenorphine also seems to be a partial agonist for the nociceptin receptor and antagonist for the delta opioid receptor [4,29,[40][41][42]. This suggests that TLRs may indeed play a role in the regulation of endogenous nociceptin during immune response in vivo.
In the present study, the upregulation of nociceptin by PMA was more pronounced for mRNA than for intracellular proteins. Extracellular secretion of nociceptin may be the reason for this difference, as nociceptin is secreted by cells under inflammatory conditions [18,19,37].
Although TLR agonists had no effects on ppNOC mRNA expression, an increase in intracellular nociceptin protein was detected in all TLR agonist-treated samples. One possible explanation might be poor correlation of nociceptin mRNA and protein levels. In addition, proteolytic processing of the nociceptin precursor protein might be enhanced under inflammatory conditions.
In contrast, neither PMA nor TLR agonists affected NOP mRNA expression in THP-1 cells. However, cell membrane NOP protein levels were upregulated in the cells stimulated with these mediators. This is consistent with the results from a previous study which found that LPS/PepG decreased NOP mRNA but increased NOP protein in human umbilical vein endothelial cells [43]. As intracellular nociceptin was upregulated and secreted by THP-1 cells after PMA stimulation, the released nociceptin may bind to NOP and participate in autoregulation of the cells [44,45].
Nociceptin Effects on TLRs
There is growing evidence of the involvement of nociceptin and NOP in pain and sepsis [1,5,6]. Whereas the regulation of the nociceptin system and related mechanisms have been well characterized, information on the interactions between the nociceptin system and TLRs is still lacking.
The inhibitory effects of nociceptin on TLRs suggest that the nociceptin system may play an anti-inflammatory role in blood during immune response. In a rat model of colitis, peripheral injection of low-dose nociceptin had protective effects, while higher dose nociceptin worsened colitis [46]. In a mouse model of inflammatory bowel disease, oral administration of a NOP agonist showed anti-inflammatory and antinociceptive effects [47]. In contrast, inhibition of NOP decreased the severity of symptoms in a mouse colitis model [48], and systemic administration of nociceptin increased mortality in a rat sepsis model [49]. Furthermore, increased plasma nociceptin concentrations in septic patients and higher nociceptin levels in non-survivors have been reported [23,24]. In a previous study, increased NOP and decreased ppNOC mRNAs were detected in peripheral blood leukocytes from end-stage cancer patients and septic patients [7]. However, TLR expression levels were not assessed in these studies.
TLRs were regulated in blood leukocytes of patients suffering from pain or infectious diseases [50,51]. Increased responsiveness of PBMCs to in vitro TLR2, TLR4, and TLR7 activation has been reported in chronic pain patients [52]. Moreover, mortality in sepsis was associated with downregulation of TLR2 levels in blood monocytes [53]. Thus, increased TLR activation may affect the upregulation of nociceptin in blood cells under inflammatory conditions, and conversely, the increased nociceptin may contribute to the downregulation of TLR expression.
The present study has some limitations. First, only intracellular nociceptin proteins were measured. Extracellular nociceptin proteins will be addressed in a future project, as nociceptin may be secreted by the cells after stimulation. Second, the translational value of the present cell culture model needs to be considered. Although THP-1 cells are a suitable alternative to monocytes, the cultures may not accurately reflect the modulation of nociceptin and NOP in blood cells under pathophysiological conditions in vivo. Yet, this cell line provides a stable in vitro model [54], which enables the study of mechanisms of nociceptin and TLR regulation. Nevertheless, the reciprocal negative regulation observed between the nociceptin and TLR systems in THP-1 cells emphasizes the translational potential of new therapeutic targets in the treatment of pain and/or inflammation. Future studies should investigate interactions between these two systems in blood leukocyte subsets.
Conclusions
The present investigation highlights antagonistic effects observed in the nociceptin and TLR systems, suggesting that the nociceptin system may play an anti-inflammatory role in leukocytes during the immune response. Fundamental insights into the crosstalk between the nociceptin system and TLRs may shed new light on the treatment of pain and/or inflammatory disease. | 2022-04-13T05:23:15.662Z | 2022-03-23T00:00:00.000 | {
"year": 2022,
"sha1": "da0cb55e03ab70ab1a0d406002d64483dee8baf0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/11/7/1085/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da0cb55e03ab70ab1a0d406002d64483dee8baf0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2186194 | pes2o/s2orc | v3-fos-license | Uniqueness of Ginzburg-Rallis models: the Archimedean case
In this paper, we prove the uniqueness of Ginzburg-Rallis models in the archimedean case. As a key ingredient, we introduce a new descent argument based on two geometric notions attached to submanifolds, which we call metrical properness and unipotent $\chi$-incompatibility.
Introduction and main results
In year 2000, Ginzburg and Rallis formulated a conjecture to characterize the nonvanishing of central values of partial exterior cube L-functions attached to irreducible cuspidal automorphic representations of GL 6 in terms of certain periods ( [GR00]). This is analogous to the Jacquet conjecture for the triple product L-functions for GL 2 (established in full by Harris and Kudla in [HK04]), and to the Gross-Prasad conjecture for classical groups ([GP92, GP94, GJR04, GJR05,GJR09]).
To be precise, let A be the ring of adeles of a number field k. Fix a nontrivial unitary character ψ_A of k\A, and a (not necessarily unitary) character χ_{A^×} of k^×\A^×. For any quaternion algebra D over k, denote G_D = GL_3(D), and S_D its subgroup consisting of elements of the form s = [[g, x, y], [0, g, z], [0, 0, g]], with g ∈ GL_1(D) and x, y, z ∈ D. Let ϕ_D be an automorphic form on G_D(k)\G_D(A). The Ginzburg-Rallis period P^χ_{S_D}(ϕ_D) of ϕ_D is defined by integrating ϕ_D against the character χ_{S_D} of S_D(A) (built from ψ_A and χ_{A^×}) over A^× S_D(k)\S_D(A), where A^× is identified with the center of G_D(A). The Ginzburg-Rallis conjecture can then be stated as follows.
Conjecture 1.1. (Ginzburg-Rallis, [GR00]) Let π be an irreducible cuspidal automorphic representation of GL_6(A) with central character χ^2_{A^×}. For any quaternion algebra D over k, denote by π_D the generalized Jacquet-Langlands correspondence of π, which is either zero or an irreducible cuspidal automorphic representation of G_D(A). Consider the irreducible representation Λ^3 ⊗ C_1 of the L-group GL_6(C) × GL_1(C), where Λ^3 is the exterior cube product of the standard representation of GL_6(C), and C_1 is the standard representation of GL_1(C). The partial L-function L^S(s, π ⊗ χ^{-1}_{A^×}, Λ^3 ⊗ C_1) does not vanish at s = 1/2 if and only if there exists a unique quaternion algebra D such that (a) the period P^χ_{S_D}(ϕ_D) is nonzero for some ϕ_D ∈ π_D; and (b) for any quaternion algebra D′ which is not isomorphic to D, the period P^χ_{S_{D′}}(ϕ_{D′}) vanishes for all ϕ_{D′} ∈ π_{D′}. See [GR00] and [GJ] for some partial results on the conjecture.
We consider the corresponding local theory. Let K be a local field of characteristic zero. Fix a nontrivial unitary character ψ_K of K, and an arbitrary character χ_{K^×} of K^×. For any quaternion algebra D over K, denote G_D = GL_3(D) and define its subgroup S_D as in the number field case. We also define the local analogue χ_{S_D} of the global character, by the same formula in terms of the characters ψ_K and χ_{K^×}.
If K is nonarchimedean, we let V D be an irreducible smooth representation of G D , and if K is archimedean, let V D be an irreducible representation of G D in the class F H. The notion of representations in the class F H will be explained in Section 10.
As in the proof of the Jacquet conjecture, in order to tackle the Ginzburg-Rallis conjecture, the first basic property that we should establish is Conjecture 1.2. The Ginzburg-Rallis model on V_D is unique up to scalar, i.e., dim Hom_{S_D}(V_D, C_{χ_{S_D}}) ≤ 1, where C_{χ_{S_D}} is the one dimensional representation of S_D given by the character χ_{S_D}.
This conjecture has been expected since the work [GR00] and was first discussed with details in [J08]. In her Minnesota thesis (directed by the first named author), Nien proved Conjecture 1.2 in the nonarchimedean case ( [N06]). We remark that there is a generalization of the Ginzburg-Rallis models to GL 3n , which may be viewed as the "three block" version of the Whittaker models for GL n . As noted in [N06], the local uniqueness property is not expected to hold for the generalized Ginzburg-Rallis models for GL 3n with n > 2.
The first main purpose of this paper is to prove the archimedean case of Conjecture 1.2, which requires substantially more delicate analysis than the nonarchimedean case.
From now on, we will assume that K is the archimedean local field R or C. Note that the notion of representations in the class F H includes the requirement of moderate growth. This has the implication that the Hom space in Conjecture 1.2 vanishes if one replaces the additive character ψ_K with one which is not unitary.
Ginzburg-Rallis models are so called "mixed models", as the group S D is neither unipotent nor reductive. On the other hand, we have the Whittaker models and linear models, where the subgroup involved is unipotent or reductive, respectively. By now we know that uniqueness of Whittaker models is relatively easy to establish (see Section 11.4 for a short proof). The study of uniqueness of linear models was initiated by Jacquet-Rallis in [JR96], and there have been a number of recent advances in this direction (see [AGRS,AG3,SZ], for example). We remark that in each case, a good understanding of algebraic and geometric structure of the orbital decomposition is required. (The task is made easier by geometric invariant theory, see [AG2].) Although in some special cases, one may reduce uniqueness of mixed models to that of linear models (c.f. [JR96,AGJ09] and Remark 1.6 of this section), there is still a lack of general techniques to treat the mixed model problems (save for a few low rank cases; see for example [BR07]). Besides a proof of Theorem 1.3, another main purpose of this paper is to introduce a descent method in the archimedean case that reduces uniqueness of mixed models to that of linear models. We carry out the descent process for the Ginzburg-Rallis model, which is considered as an exceptional model, and is also sufficiently complicated to reveal difficulties in general archimedean mixed model problems.
We introduce some notations. For any natural number n, denote by gl_n(K) the space of n × n matrices with entries in K. When the quaternion algebra D is split, we fix an identification of D with gl_2(K), and then G_D is identified with GL_6(K). For a square matrix x, if its entries are from K, denote by x^τ its transpose. If D is not split and x ∈ G_D = GL_3(D), set x^τ to be the transpose of x̄, where "¯" denotes the (element-wise) quaternionic conjugation.
Define the real trace form ⟨ , ⟩_R on the Lie algebra gl_3(D) of G_D by (1.3): ⟨x, y⟩_R equals the real part of the trace of xy if D is split, and the reduced trace of xy otherwise.
Denote by ∆_D the Casimir element with respect to ⟨ , ⟩_R, which is viewed as a bi-invariant differential operator on G_D. We will see in Section 10 that, by (a general form of) the Gelfand-Kazhdan criterion, Theorem 1.3 is implied by the following Theorem 1.4. Let f be a tempered generalized function on G_D, which is an eigenvector of ∆_D. If f satisfies f(s x s′^τ) = χ_{S_D}(s) χ_{S_D}(s′) f(x) for all s, s′ ∈ S_D, then f(x^τ) = f(x). The notion of tempered generalized functions will be explained in Section 2.3. We remark that the equalities in the theorem are to be understood as equalities of generalized functions, and f(sx) denotes the left translate of f by s^{-1}. Similar notations apply throughout the article.
Assume now that D is split. Thus G = GL 6 . (We drop the subscript D, and the coefficient field K in all notations.) The non-split case, which is simpler, will be investigated at the end of Section 9.
Following a well-known scheme of Bruhat, we first decompose G into P-P^τ double cosets, where P is the standard parabolic subgroup of G whose Levi factor is GL_2(K) × GL_2(K) × GL_2(K). The proof of Theorem 1.4 will consist of three steps and will involve three types of arguments: (a) the transversality of certain vector fields to all except four G_R's, among the twenty one P-P^τ double cosets of G. The technique is due to Shalika [S74]. This allows us to focus the attention to the open submanifold G′ of G consisting of the four exceptional double cosets.
(b) a descent argument based on two new notions attached to submanifolds, which we call metrical properness (Definition 3.1) and unipotent χ-incompatibility (Definition 3.3), as well as a synthesis of these two notions which we call U χ M property (Definition 3.6). This lies at the heart of our approach and forms the main part of our argument. It leads us eventually to two linear model problems: the uniqueness of trilinear models for GL_2, and the multiplicity one property for the pair (GL_2, GL_1).
(c) use of the oscillator representation to conclude the uniqueness of the two aforementioned linear models.
For Step (c), which is relatively easy, we just appeal to the following Proposition 1.5. ([Pr89, Theorem C.7]) Let E be a finite dimensional non-degenerate quadratic space over K, and let the orthogonal group O(E) act through the oscillator representation attached to the dual pair (O(E), Sp(2k)), with k < dim E. Then the determinant character of O(E) does not occur in this oscillator representation. The above proposition may also be stated as saying that the determinant character of O(E) does not occur in the Howe duality correspondence of (O(E), Sp(2k)) if k < dim E. The descent process reveals a very interesting interplay between the Ginzburg-Rallis model and other (smaller) models. The first model occurring is as follows. Take the maximal Levi subgroup G_{4,2} = GL_4 × GL_2 of G, and write S_{4,2} = S ∩ G_{4,2}. In the course of proof of Theorem 1.4, we find that for any irreducible representation π of G_{4,2} in the class F H, (1.6) dim Hom_{S_{4,2}}(π, C_{χ_{S_{4,2}}}) ≤ 1, where χ_{S_{4,2}} is the restriction of the character χ_S to S_{4,2}. A proof of (1.6) will be given in Section 11.3.
Remark 1.6. This model may be viewed as the Bessel model for the orthogonal group pair (O_6, O_3), via the (incidental) identification of low rank algebraic groups. In the p-adic case, for a general pair (O_m, O_n) with m > n and having different parity (and its analog for unitary groups), Gan, Gross and Prasad reduce the uniqueness of Bessel models to the Multiplicity One Theorems proved by Aizenbud, Gourevitch, Rallis and Schiffmann ([AGRS, GGP09]). In the archimedean case, the uniqueness of Bessel models for general linear groups, unitary groups and orthogonal groups was proved by the authors ([JSZ09]), using a different reduction technique (from the p-adic case) and the archimedean Multiplicity One Theorems proved in [SZ]. Note that the latter for general linear groups is independently due to Aizenbud and Gourevitch ([AG3]).
An interesting phenomenon here is that in order to complete the proof for the case (G_{4,2}, S_{4,2}), one must also consider the maximal Levi subgroup G_{3,1,2} = GL_3 × GL_1 × GL_2 of G_{4,2} = GL_4 × GL_2. This case reduces essentially to the case (GL_3, S_3), where S_3 is a certain subgroup of GL_3 and the corresponding character of S_3 is given in terms of ψ_K and χ_{K^×}. This is a mixed model. It should come as no surprise that the pair (GL_3, S_3) is a special case of the model introduced by Jacquet-Shalika ([JS90]) to construct the exterior square L-functions for GL_{2n+1}. The uniqueness for this case was not known for any n. In the course of our proof for Theorem 1.4, we shall prove the uniqueness for the pair (GL_3, S_3) over archimedean local fields. (The p-adic case follows similarly.) See Section 11.2.
We now describe the contents and the organization of this paper. In Section 2, we review some generalities on differential operators, generalized and invariant generalized functions, basics of Nash manifolds and the associated notion of temperedness. In Section 3, we define the notions of metrical properness, unipotent χ-incompatibility, and their synthesis, the U χ M property. Based on these three new notions, we give respectively three vanishing results on certain spaces of generalized functions (Lemmas 3.2, 3.4, 3.7). In Section 4, we prove the transversality of certain vector fields to all but four of the P-P^τ double cosets, which as mentioned allows us to focus our attention to an open submanifold G′ only. In Sections 5 and 6, we show (through lengthy but straightforward computations) that a certain submanifold Z_4 of G_{4,2} and a certain submanifold Z_6 of G′ have the U χ M property, respectively. This eventually reduces our problem to the submanifolds GL_2 × GL_2 and GL_3 × GL_1 of G. In Sections 7 and 8, we show that certain spaces of quasi-invariant tempered generalized functions on GL_2 × GL_2 and GL_3 × GL_1 vanish.
The complete proof of Theorem 1.4 will be given in Section 9. In Section 10, we derive Theorem 1.3 from Theorem 1.4. Finally, in Section 11, we record uniqueness of models occurring in the process of descent. In addition and as further evidence for the relevance of the notion of unipotent χ-incompatibility (for mixed models, as opposed to linear models), we give a quick proof of the uniqueness of the Whittaker models based on this notion.
Generalities
We emphasize that the materials of this section are all known; in particular nothing is due to the authors. If φ : M → M′ is a smooth map of smooth manifolds, then the pushing forward sends compactly supported distributions on M to compactly supported distributions on M′. If furthermore φ is a submersion, then the pushing forward induces a continuous linear map φ_* from compactly supported smooth densities on M to compactly supported smooth densities on M′. We define the pulling back as the transpose of φ_*, which extends the usual pulling back of smooth functions. The pulling back is injective if φ is a surjective submersion.
Remark: Pulling back is not canonically defined for distributions. For this reason, we work with generalized functions instead of distributions.
For k ∈ Z, denote by DO(M)_k the Fréchet space of differential operators on M of order at most k, which by convention is 0 if k < 0. It is well-known that every differential operator D : C^∞(M) → C^∞(M) extends continuously to a linear map on the space of generalized functions. Recall that we have the principal symbol map σ_k : DO(M)_k → Γ^∞(S^k(T(M))), where T(M) is the real tangent bundle of M, S^k stands for the k-th symmetric power, and Γ^∞ stands for smooth sections. The continuous linear map σ_k is specified by the following rule: σ_k(X_1 X_2 · · · X_k)(x) = X_1(x) X_2(x) · · · X_k(x) for all x ∈ M and all (smooth real) vector fields X_1, X_2, · · · , X_k on M, and σ_k(D) = 0 for every D ∈ DO(M)_{k-1}.
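For orientation, the principal symbol has the following familiar description in local coordinates; this restatement (and the coordinate notation) is ours and is only meant to illustrate the definition above.

```latex
% Coordinate description of the principal symbol (a standard fact).
% In local coordinates x_1,\dots,x_n on M, for a differential operator of order k,
\[
  D \;=\; \sum_{|\alpha|\le k} a_\alpha(x)\,\partial^{\alpha}
  \qquad\Longrightarrow\qquad
  \sigma_k(D)(x) \;=\; \sum_{|\alpha|=k} a_\alpha(x)\,
     \Big(\frac{\partial}{\partial x_1}\Big)^{\alpha_1}\!\cdots
     \Big(\frac{\partial}{\partial x_n}\Big)^{\alpha_n}
  \;\in\; S^{k}\big(\mathrm{T}_x(M)\big),
\]
% i.e. only the top-order coefficients survive, viewed as a symmetric tensor;
% equivalently, sigma_k(D)(x) is the homogeneous degree-k polynomial
% \sum_{|\alpha|=k} a_\alpha(x)\,\xi^\alpha on the cotangent space at x.
```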
Let Z be a (locally closed) submanifold of M. Write T(M)|_Z / T(Z) for the normal bundle of Z in M. Denote by σ_{k,Z} the map formed by composing σ_k with the restriction map to Z, followed by the quotient map from sections of S^k(T(M))|_Z to sections of the k-th symmetric power of the normal bundle. Definition 2.1.
(a) A vector field X on M is said to be tangential to Z if X(z) is in the tangent space T z (Z) for all z ∈ Z, and transversal to Z if X(z) / ∈ T z (Z) for all z ∈ Z; more generally (b) a differential operator D is said to be tangential to Z if for every point z ∈ Z there is an open neighborhood U z in M such that D| Uz is a finite sum of differential operators of the form ϕX 1 X 2 · · · X r , where ϕ is a smooth function on U z , r ≥ 0, and X 1 , X 2 , · · · , X r are vector fields on U z which are tangential to U z ∩ Z. For D ∈ DO(M) k , it is said to be transversal to Z if σ k,Z (D) does not vanish at any point of Z.
We introduce some notations. For a locally closed subset Z of M, denote by C^{-∞}(M; Z) the space of generalized functions on U whose supports are contained in Z, where U is any open subset of M containing Z as a closed subset. This definition is independent of U. For any differential operator D on M, denote We record the following lemma, which is due to Shalika (c.f. proof of Proposition 2.10 in [S74]).
Lemma 2.2. Let D_1 be a differential operator on M of order k ≥ 1, which is transversal to a submanifold Z of M. Let D_2 be a differential operator on M which is tangential to Z. Then every generalized function in C^{-∞}(M; Z) annihilated by D_1 + D_2 vanishes. Note that the relative stable condition amounts to saying that H × Z is a union of fibres of the action map ρ_M. We first prove the following two lemmas in a general setting.
and therefore it suffices to show that φ ′ is submersive at every point z ∈ Z ∩ U.
Since φ is submersive, we have that Lemma 2.5. Let ρ : M_1 → M_2 be a surjective submersion of smooth manifolds. Let Z_1 be a submanifold of M_1 which is a union of fibres of ρ. Then Z_2 := ρ(Z_1) is a submanifold of M_2, and the restriction ρ_0 : Z_1 → Z_2 is again a surjective submersion. Take two open embeddings i_1 : R^{n_1} ↪ M_1 and i_2 : R^{n_2} ↪ M_2 such that the diagram 1 is a submanifold of R^{n_1} which is a union of fibres of ρ′. The condition that Z_1 is a union of fibres of ρ implies that . By the local triviality of submersions, it suffices to prove the lemma for ρ′ and Z′_1. The latter is now immediate in view of Lemma 2.4 and the fact that By setting Then the submersion ρ_M is H intertwining as well as H_M intertwining. Therefore the pulling back yields a linear map By the Schwartz Kernel Theorem and the fact that every invariant distribution on a Lie group is a scalar multiple of the Haar measure ([W88, . We shall record this as Lemma 2.7. There is a well-defined map which is called the restriction to M:
2.3. Nash manifolds and tempered generalized functions. We begin with a review of basic concepts and properties of Nash manifolds, in which the notion of tempered generalized functions is defined. Our main reference on Nash manifolds is [S87], and temperedness is discussed in [C91, AG1].
Remark: We will use Fourier transforms implicitly in Section 7, and explicitly in Section 8. Fourier transforms are only defined for tempered generalized functions. This is the main reason that we work with tempered generalized functions instead of arbitrary generalized functions.
Recall that the collection SA n of semialgebraic subsets of R n is the smallest set with the following properties: (a) every element of SA n is a subset of R n ; (b) for every real polynomial function p on R n , we have (c) SA n is closed under the operation of taking intersection, and taking complement in R n . A Nash manifold of dimension n is a manifold M, together with a collection N , whose members are called Nash charts, such that the followings hold: (a) every Nash chart has the form (φ, U, U ′ ), where U is an open semialgebraic subset of R n , U ′ is an open subset of M, and φ : U → U ′ is a diffeomorphism; (b) every two Nash charts (φ 1 , U 1 , U ′ 1 ) and (φ 2 , U 2 , U ′ 2 ) are Nash compatible, i.e., the graph of the diffeomorphism if it is Nash compatible with all Nash charts, then itself is a Nash chart; (d) there are finitely many Nash charts A Nash manifold is either the empty set or a nonempty Nash manifold of dimension n ≥ 0. A submanifold of a Nash manifold which is semialgebraic is called a Nash submanifold, which is automatically a Nash manifold. The product of two Nash manifolds is again a Nash manifold. A smooth map φ : M 1 → M 2 of Nash manifolds is called a Nash map if its graph is semialgebraic in M 1 ×M 2 . (A Nash map always sends a semialgebraic set to a semialgebraic set.) A Nash function on a Nash manifold M is a Nash map from M to C, and a differential operator D on M is called Nash if D(f ) is Nash for every Nash function f on every Nash open submanifold of M.
A Nash group is a group as well as a Nash manifold so that the group operations are Nash maps. A Nash action of a Nash group on a Nash Manifold is defined similarly.
We proceed to our discussion on the notion of tempered generalized functions on a Nash manifold. A smooth function f on a semialgebraic open subset U of R n is called a Schwartz function if D(f ) is bounded for every Nash differential operator D on U. Denote by S(U) the Fréchet space of Schwartz functions on U. Now let M be a Nash manifold of dimension n. Pick a covering of M by Nash charts The Fréchet space of Schwartz functions on M, denoted by S(M), is then defined to be the image of the map This definition is independent of the covering we choose. One may similarly define the Fréchet space of Schwartz densities. Denote by C −ξ (M) its strong dual, whose members are called tempered generalized functions. All tempered generalized functions are generalized functions. Now let H be a Nash group, with a Nash action on a Nash manifold M. For any character χ on H, we set . Let N be a Nash manifold, and let φ : M → N be an H invariant Nash map. We record the following obvious fact as a lemma.
Let M, H_M and χ_M be as in Lemma 2.7. If furthermore M is a Nash submanifold of M, and H_M is a Nash subgroup of H, then the restriction map sends C^{-ξ}
3. Metrical properness and unipotent χ-incompatibility
3.1. Metrical properness. This notion requires that the manifold M is pseudo Riemannian, i.e., the tangent spaces are equipped with a smoothly varying family {⟨ , ⟩_x : x ∈ M} of nondegenerate symmetric bilinear forms.
Note that a Laplacian type differential operator is transversal to any metrically proper submanifold, from its very definition. Therefore the following is a special case of Lemma 2.2.
3.2. Unipotent χ-incompatibility. As in Section 2.2, let H be a Lie group with a character χ on it, acting smoothly on a manifold M. If a locally closed subset Z of
Definition 3.3. An H stable submanifold Z of M is said to be unipotently χ-incompatible if for every z_0 ∈ Z, there is a local H slice Z of Z, containing z_0, and a smooth map φ : Z → H such that the following hold for all z ∈ Z: (a) φ(z)z = z; (b) the linear map induced by φ(z) on the normal space T_z(M)/T_z(Z) is unipotent; and (c) χ(φ(z)) ≠ 1. The following lemma will be important for our later considerations.
Lemma 3.4. Let Z be an H stable submanifold of M which is unipotently χ-incompatible. Then C^{-∞}_χ(M; Z) = 0. By using a well-known result of L. Schwartz on the filtration of the sheaf of generalized functions with supports in a submanifold, Lemma 3.4 is implied by the following Here and as usual, "Γ^{-∞}" stands for the space of generalized sections. (We omit its definition since it is a straightforward generalization of the notion of generalized functions in Section 2.) The meaning of (3.1) will be made clear in the following proof.
Proof. As in the case of generalized functions, define the pulling back of the action map ρ_Z : H × Z → Z, which continuously extends the usual pulling back of smooth sections. Here Ẽ is the pulling back of E via ρ_Z, which is obviously an H equivariant vector bundle; cf. Lemma 2.7. Here we caution the reader due to the fact that we are dealing with generalized (as opposed to smooth) sections. The formula (3.2) is to be understood as an equality in Γ^{-∞}(Ẽ). The right-hand side makes sense since the map of smooth sections extends continuously to a (well-defined) map of generalized sections. Similarly, all the equalities below, which are obvious when f|_Z is a smooth section, make sense and hold true by a continuity argument.
where φ(z) is viewed as a linear automorphism of E_z, and 1_{E_z} is the identity map of E_z; this implies that f = 0.
Recall the notion of a Nash group from Section 2.3. It is said to be unipotent if it is Nash isomorphic to a connected closed subgroup of some U n , where U n is the Nash group of unipotent upper triangular real matrices of size n. An element of a Nash group is said to be (Nash) unipotent if it is contained in a unipotent Nash closed subgroup. We note that the general linear group GL n (K) is Nash and an element of GL n (K) is (Nash) unipotent if and only if it is unipotent in the usual sense, i.e., is a unipotent linear transformation.
If H, M and the action of H on M are all Nash, then an H stable submanifold Z of M is unipotently χ-incompatible if the following holds: for every point z_0 ∈ Z, there is a local H slice Z of Z, containing z_0, and a smooth map φ : Z → H such that, for all z ∈ Z, (a) φ(z)z = z; (b) φ(z) is (Nash) unipotent; and (c) χ(φ(z)) ≠ 1.
The reason for this is that the hypothesis of Nash action ensures that the linear map induced by the action of the unipotent element φ(z) is unipotent. This implies condition (b) in Definition 3.3.
3.3. U χ M property. As before, let H be a Lie group acting smoothly on a manifold M, and let χ be a character on H. We further assume that M is a pseudo Riemannian manifold.
Definition 3.6. We say that an H stable locally closed subset Z of M has U χ M property if there is a finite filtration Z = Z_0 ⊇ Z_1 ⊇ · · · ⊇ Z_l = ∅ of Z by H stable closed subsets of Z such that each difference Z_i \ Z_{i+1} is a submanifold of M which is either metrically proper or unipotently χ-incompatible.
Small submanifolds of GL 6
We return to the group G = GL_6(K). Recall from the Introduction the subgroup S and its character χ_S. From now on, we set H = S × S and χ = χ_S ⊗ χ_S. Let H act on G by (g_1, g_2)x = g_1 x g_2^τ. Our main object of concern is the space C^{-∞}_χ(G). For x ∈ G, define its rank matrix R(x) in terms of the numbers rank_{i×j}(x), where rank_{i×j}(x) is the rank of the lower right i × j block of x.
Let ∆ be the Casimir operator on G, as in the Introduction. The goal of this section is to prove the following Denote by X left the left invariant vector field on G whose tangent vector at x is xx left , and by X right the right invariant vector field on G whose tangent vector at x is x right x.
The key to Proposition 4.1 is the following transversality result. We shall divide it into a number of lemmas (Lemmas 4.3, 4.5, 4.6, 4.7).
Proposition 4.2. Assume that R is not one of the four matrices in (4.4). Then either X left or X right is transversal to the double coset G R .
Lemma 4.3. If the lower right entry of R is zero, then X left is transversal to G R .
Proof. Assume that there is an x ∈ G_R, with 2 × 2 blocks x_{11}, x_{12}, x_{13}, . . ., such that X_left(x) ∈ T_x(G_R). Note that the lower right 2 × 2 block of every element of Lie(P)x + xLie(P^τ) is 0. Therefore x_{32} = 0, which further implies that the lower right 2 × 4 block of every element of Lie(P)x + xLie(P^τ) is 0. Therefore x_{31} = 0. This contradicts the fact that x is invertible.
The following lemma provides a technical simplification.
Lemma 4.4. Let x, y be two matrices in G R such that P xS τ = P yS τ . Then Proof. Write y = pxq, p ∈ P, q ∈ S τ , and assume that X left (x) ∈ T x (G R ), i.e., xx left ∈ Lie(P )x + xLie(P τ ).
Lemma 4.5. If the second row of R is [1 1], then X left is transversal to G R .
Proof. Let R be as in the lemma. Then every matrix in G_R is in the same P-S^τ double coset with a matrix of the form x_{11} x_{12} x_{13} x_{21} x_{22} x_{23} Note that the middle 2 × 2 block of the last two rows of every matrix in Lie(P)x + xLie(P^τ) has the form δ_2 u, u ∈ gl_2(K). This implies that the first row of x_{31} is zero, and consequently, the fifth row of x is zero, which contradicts the fact that x is invertible.
Similarly, we have Lemma 4.6. If the second column of R is 1 1 , then X right is transversal to G R .
Lemma 4.7. If R = 2 2 2 2 , then X left is transversal to G R .
Proof. Every matrix in G R is in the same P -S τ double coset with a matrix of the form Note that the central 2 ×2 block of every matrix in Lie(P )x+ xLie(P τ ) is zero, which implies that x 21 = 0. This contradicts the fact that x is invertible.
The proof of Proposition 4.2 is now finished.
Lemma 4.8. There exists a nonzero number c, an element λ ∈ K × , and a differential operator D left on G, which is tangential to every P -P τ double coset of G, such that Here X left (λ) is the left invariant vector field on G whose tangent vector at x ∈ G is λxx left , and x left is given in (4.5). The same is true if one replaces "left" by "right" everywhere.
Proof. The Lie algebra g of G has a decomposition g = n ⊕ l ⊕ n^τ, where n is the Lie algebra of the unipotent radical N of P, and l is the Lie algebra of the Levi factor GL_2(K) × GL_2(K) × GL_2(K). Recall that g is equipped with the real trace form. Let X_1, X_2, · · · , X_r be a basis of n, and write ∆_1 = X_1 X′_1 + X_2 X′_2 + · · · + X_r X′_r, where X′_1, X′_2, · · · , X′_r is the dual basis of X_1, X_2, · · · , X_r in n^τ. Note that ∆_1 is independent of the choice of basis of n. We identify elements of U(g) with left invariant (real) differential operators on G as usual. It is then easy to see that (4.6) ∆ − 2∆_1 ∈ U(l).
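For the reader's convenience, here is a short verification of (4.6); it is a routine computation with dual bases (our write-up, using the notation of the proof above).

```latex
% Let {Y_a} be a basis of l with dual basis {Y'_a} in l (with respect to the
% real trace form); n pairs with n^tau, and l is orthogonal to n + n^tau, so
\[
  \Delta \;=\; \sum_a Y_a Y'_a \;+\; \sum_{i=1}^{r}\big(X_i X'_i + X'_i X_i\big)
         \;=\; \sum_a Y_a Y'_a \;+\; 2\,\Delta_1 \;-\; \sum_{i=1}^{r}[X_i, X'_i].
\]
% The sum of brackets is independent of the chosen basis of n and, computed
% with a basis of elementary matrices, lies in l.  Hence
\[
  \Delta - 2\Delta_1 \;=\; \sum_a Y_a Y'_a \;-\; \sum_{i=1}^{r}[X_i, X'_i] \;\in\; U(l).
\]
```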
Let dχ S be the differential of the character χ S . Write which defines a character of n τ . Then every generalized function f ∈ C −∞ χ (G) satisfies Xf = −χ n τ (X)f, for all X ∈ n τ . Now choose X 1 to be perpendicular to the kernel of χ n τ . This is unique up to a multiple in R × , and has the form X left (λ) for some λ ∈ K × . This choice of X 1 also implies that χ n τ (X ′ 2 ) = χ n τ (X ′ 3 ) = · · · = χ n τ (X ′ r ) = 0, and χ n τ (X ′ 1 ) is a nonzero number. Therefore . Equations (4.6) and (4.7) will now imply the lemma, in view of the fact that a differential operator in U(l) is tangential to every P -P τ double coset.
open will imply the same for X left (λ). Invoking Lemma 2.2, we see that f i = 0.
A submanifold Z 4 of GL 4 × GL 2
As always, we equip G = GL 6 (K) with the bi-invariant pseudo Riemannian metric whose restriction to T e (G) = gl 6 (K) is the real trace form , R , given in (1.3).
As in the Introduction, write G 4,2 = GL 4 (K) × GL 2 (K), which embeds into G in the usual way. Then G 4,2 is a nondegenerate submanifold of G, with T e (M) = gl 4 (K) × gl 2 (K). Thus G 4,2 is itself a pseudo Riemannian manifold. Denote The purpose of this section is to prove the following proposition. This will take a number of steps. The action of H 4,2 on Z 4 descents to a transitive action on the quotient manifold Therefore to show that the H 4,2 equivariant action map is a surjective submersion, it suffices to show the same for its restriction map Denote by N 4,2 the unipotent radical of S 4,2 . Then and hence it suffices to show that the action map We need to show that gxg ′ ∈ Z 4,1 , provided that gxg ′ ∈ Z 4 . The condition gxg ′ ∈ Z 4 implies that a 0 1 1 0 a ′ = 0 1 1 0 and a 0 0 0 1 a ′ = 0 0 0 1 , which is equivalent to a = α 1 0 t 1 and a ′ = α −1 1 −t 0 1 , for some α ∈ K × and t ∈ K. It is now straightforward to check that gxg ′ ∈ Z 4,1 .
Remark: In the sequel, we will skip the verification when we assert that a submanifold is relatively stable or is a slice with respect to a certain group action. Write which is a closed submanifolds of Z 4 , by Lemma 2.6.
Proof. Let x ∈ Z 4 \ Z 4,1 be as in ( Since x 13 = x 31 , for a suitably chosen t ∈ K. This proves the lemma. Write which is a relatively H 4,2 stable closed submanifold of Z 4,1 . Therefore is a closed submanifold of Z 4,1 .
Note that xx′ = a(e_{11} − e_{33}), which spans a nondegenerate K subspace of T_e(G_{4,2}). This implies that x^{-1} T_x(Z_{4,1}) is contained in a proper nondegenerate subspace of T_e(G_{4,2}). Therefore by invariance of the metric, T_x(Z_{4,1}) is contained in a nondegenerate proper subspace of T_x(G_{4,2}), for any x ∈ Z_{4,1} \ Z_{4,2}.
The lemma follows, as before.
Recall from Section 4 the P − P τ double coset G R indexed by a rank matrix R. Set (6.1) Clearly Z 6 is an H = S × S stable submanifold of G, as with each G R . The purpose of this section is to prove the following proposition. Again it will take a number of steps. Proposition 6.1. As an H submanifold of G, Z 6 has U χ M property.
Denote by Z 6 all matrices in Z 6 of the form which forms an H slice of Z 6 . Write They are both relatively H stable closed submanifolds of Z 6 . Therefore both Z 6,1 = HZ 6,1 and Z 6,2 = HZ 6,2 are closed submanifolds of Z 6 .
Proof. Let x ∈ Z 6 \ Z 6,1 be as in (6.2). Write Then u(x, t)x = xv(x, t), and the lemma follows, as before.
Proof. Every element of Z 6,1 \ Z 6,2 is in the same H-orbit as an element of the form where x ′ = e 35 − e 53 . Now x ′ x = a(e 33 − e 55 ) and we finish the proof, as before.
Proof. This is identical to the proof of Lemma 5.4 in Section 5. We omit the details.
Proof. Every matrix in Z ′ 6,3 \ Z ′ 6,4 is in the same H orbit as a matrix of the form Fix such an x. Then . Now x ′ x = a(e 11 − e 33 ) and we finish the proof, as before.
Lemma 6.7. The submanifold Z 6,5 is metrically proper in G.
In view of the proceeding lemmas, the proof of Proposition 6.1 is complete.
It will be slightly more convenient to work with the following: where the semidirect product is given by the action Denote byχ 2 the character ofH 2 such that χ 2 | GL 2 (K) = 1 andχ 2 (τ ) = −1.
Proof. First we note that the (3, 3) entry of x is invariant under H̃ 3 . Denote by GL 3 (K) ′ the set of matrices in GL 3 (K) whose (3, 3) entry is not 1. Let H̃ 3 act on GL 3 (K) ′ × K × by the same formula as its action on M 3 . Then the map is an H̃ 3 -equivariant Nash diffeomorphism. Therefore, as the action of H̃ 3 on K × is trivial, it suffices to show that C −ξ χ 3 (GL 3 (K) ′ ) = 0. This will be implied by Lemma 2.8 and Proposition 8.2 below.
The rest of this section is devoted to the proof of Proposition 8.2. Let H̃ 3 act on gl 3 (K) by (l, g 1 , g 2 ) · x = l g 1 x g 2 τ l −1 and τ · x = x τ .
Lemma 8.4. The H 3 stable manifold Z 3,1 \ Z 3,2 is unipotently χ 3 -incompatible. We shall employ the Fourier transform to finish the proof of Proposition 8.2. In general, let E be a finite dimensional real vector space, equipped with a nondegenerate symmetric bilinear form ⟨ , ⟩ E . The Fourier transform is a topological linear isomorphism of the space of Schwartz functions, given by integration against a character built from ⟨ , ⟩ E (a standard convention is sketched below), where dy is the Lebesgue measure on E, normalized such that the volume of the cube spanned by v 1 , v 2 , . . . , v r is 1, for any orthogonal basis v 1 , v 2 , . . . , v r of E such that ⟨v i , v i ⟩ E = ±1, i = 1, 2, . . . , r. The Fourier transform extends continuously to a topological linear isomorphism of C −ξ (E), which is still called the Fourier transform.
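For orientation, the following LaTeX sketch records one standard convention for the Fourier transform attached to such a bilinear form; the sign and normalisation below are an assumption, but the property quoted at the end is the one actually used in the proof of Lemma 8.6.

% One standard convention (an assumption, not the authors' stated normalisation)
% for the Fourier transform on (E, <.,.>_E):
\[
  \widehat{f}(x) \;=\; \int_{E} f(y)\, e^{-2\pi i \langle x,\, y\rangle_{E}}\, dy ,
\]
% with dy the Lebesgue measure normalised so that the cube spanned by an
% orthogonal basis v_1, ..., v_r with <v_i, v_i>_E = +-1 has volume 1.
% With any such convention, multiplication by the linear functional
% y -> <v, y>_E corresponds, up to a nonzero constant, to the derivative
% \partial/\partial v on the Fourier-transform side.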
The following lemma is a form of uncertainty principle.
Lemma 8.6. Let f ∈ C −ξ (E). If both f and f̂ are supported in a common nondegenerate proper subspace of E, then f = 0.
Proof. Let v ∈ E be a nondegenerate vector such that both f and f̂ are supported in its perpendicular space. Denote by v * the function ⟨v, · ⟩ E on E. Due to temperedness, f has a finite order and therefore (v * ) k f = 0 for some k ≥ 1.
Consequently (∂/∂v) k f̂ = 0, and we finish the proof by applying Lemma 2.2.
We continue with the proof of Proposition 8.2. Let gl 3 (K) be equipped with the real trace form as in the Introduction and define the Fourier transform accordingly. Given f ∈ C −ξ χ 3 (gl 3 (K)), it is easy to check that its Fourier transform f̂ ∈ C −ξ (gl 3 (K)) satisfies the following: Then, as in Lemma 8.5, we conclude that f̂ is supported in a certain closed subset. Therefore both f and f̂ are supported in a common proper nondegenerate subspace, and Lemma 8.6 then implies that f = 0. The proof of Proposition 8.2 is now complete.
Remark: We may view the Fourier transform argument of this section as a variation of the metrical properness argument of Sections 5 and 6. In view of Lemma 8.4 on unipotent χ 3 -incompatibility, we have in some sense used U χ M property to reduce Proposition 8.1 to the vanishing of (8.2). The latter is closely related to the multiplicity one property of the pair (GL 2 (K), GL 1 (K)).
9. Proof of Theorem 1.4 We will first examine the case where the quaternion algebra D is split, namely G = GL 6 (K). We start with the following Lemma 9.1. Recall the notations of Section 5.
(a) If Z is a unipotently χ 4,2 -incompatible H 4,2 stable submanifold of G 4,2 , then Z = HZ is a unipotently χ-incompatible submanifold of G. (b) If Z is a metrically proper H 4,2 stable submanifold of G 4,2 , then Z = HZ is a metrically proper submanifold of G.
Proof. Part (a) is clear. For Part (b), we note By invariance of the metric, we only need to show that Z is metrically proper at every point z ∈ Z, i.e., the tangent space T z (Z) is contained in a nondegenerate proper subspace of T z (G). First we assume that z is the identity matrix e. Then T e (Z) = T e (Z) + (Lie(U 4,2 ) + Lie(U τ 4,2 )) is metrically proper since T e (Z) is metrically proper in T e (GL 4 (K) × GL 2 (K)), and T e (G) = T e (GL 4 (K) × GL 2 (K)) ⊕ (Lie(U 4,2 ) + Lie(U τ 4,2 )) is an orthogonal decomposition. Now let z ∈ Z. Note that z −1 Z = U 4,2 (z −1 Z)U τ 4,2 , and z −1 Z is metrically proper in GL 4 (K) × GL 2 (K). Therefore the above argument implies that z −1 Z is metrically proper at e. Using the left multiplication by z l z : (G, z −1 Z, e) → (G, Z, z), we conclude that Z is metrically proper at z.
Recall the open submanifold G ′ of G from Section 4. Set G ′ 4,2 = (GL 4 × GL 2 ) ∩ G ′ , which is stable under H 4,2 = S 4,2 × S 4,2 . Define G ′ 2,4 and H 2,4 similarly. Recall also the submanifolds M 2 and M 3 from Sections 7 and 8. Also define the following symmetric counterpart of M 3 : Note that where Z 4 is given in (5.2), and W 4 is given similarly. Proposition 9.2. If f ∈ C −∞ χ (G ′ ) is an eigenvector of ∆ and f vanishes on G ′′ , then f = 0. Proof. It is easy to check that • Both HZ 4 and HW 4 are closed in HZ 4 ∪ HW 4 . By Proposition 6.1, the submanifold Z 6 has U χ M property. By Proposition 5.1 and Lemma 9.1, the submanifold HZ 4 has U χ M property. Similarly, HW 4 also has U χ M property. Therefore the H stable closed subset Z 6 ∪ HZ 4 ∪ HW 4 of G ′ has U χ M property. The assertion follows.
Extend χ to a character χ̃ of H̃ by requiring and extend the action on G of H to H̃ by requiring Proposition 9.3. One has that C −ξ χ̃ (G ′′ ) = 0.
We are now ready to prove Theorem 1.4 for the split case. Let f be as in the theorem. Write f τ (x) = f (x τ ). Then f τ still satisfies (1.4). From Proposition 9.3, we know that f − f τ = 0 on G ′′ . Note that τ commutes with the differential operator ∆ on G. So f τ is an eigenvector of ∆, with the same eigenvalue as that of f . Therefore f − f τ is again an eigenvector of ∆. Proposition 9.2 implies that f − f τ = 0 on G ′ . By Proposition 4.1, we finally conclude that f − f τ = 0 on all of G, which proves Theorem 1.4 in the split case. In the rest of the section, we sketch the proof of Theorem 1.4 for the case D = H (the real quaternion division algebra), which is much simpler than the split case of GL 6 (K). As in the split case, define a parabolic subgroup P H containing S H and the rank matrix R(x) (for x ∈ G H ) in the obvious way. Then R(x) takes 6 possible values, among them $\begin{pmatrix}2&1\\1&1\end{pmatrix} = R_{\rm open}$ and $\begin{pmatrix}2&1\\1&0\end{pmatrix}$, which give rise to 6 P -P τ double cosets {G H,R }. Let f be as in the theorem. If we replace GL 2 (K) by H × , the analog of Proposition 7.1 still holds. This will imply that f − f τ vanishes on G H,R open . As in the split case, we define a left invariant vector field X left on G H using $x_{\rm left} = \begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix} \in \mathfrak{gl}_3(\mathbb{H})$. Then, as in Section 4, one checks that X left is transversal to every double coset G H,R for R ≠ R open . We conclude, as in the split case, that f − f τ = 0.
Remarks:
(a) Theorem 1.4 in fact holds without the temperedness condition on f . But we shall not prove or exploit this fact. (b) We also expect Theorem 1.4 to hold without the assumption that f is an eigenvector of ∆ D .
10. Proof of Theorem 1.3 The argument of this section is standard, and it works for a more general real reductive group G.
By a representation of G, we mean a continuous linear action of G on a complete, locally convex, Hausdorff complex topological vector space. We say that a representation V of G is in the class F H if it is Fréchet, smooth, of moderate growth, admissible and Z(g C ) finite. Here and as usual, Z(g C ) is the center of the universal enveloping algebra U(g C ) of the complexification g C of g. The reader may consult [C89, W92] for more details about representations in the class F H.
Let V 1 and V 2 be two representations of G in the class F H. We say that they are contragredient to each other if there exists a nondegenerate continuous G invariant bilinear form ⟨ , ⟩ on V 1 × V 2 . If V 1 and V 2 are contragredient to each other, then V 1 is irreducible if and only if V 2 is. Let S 1 and S 2 be two closed subgroups of G, with continuous characters (not necessarily unitary) χ S 1 and χ S 2 , respectively. Let τ be a continuous anti-automorphism of G (not necessarily an anti-involution).
The following is a generalization of the usual Gelfand-Kazhdan criterion. See [SZ08] for a detailed proof. Recall that U(g C ) G is identified with the space of biinvariant differential operators on G, as usual.
Proposition 10.1. Assume that for every f ∈ C −ξ (G) which is an eigenvector of U(g C ) G , the conditions Then for any two irreducible representations V 1 and V 2 of G in the class F H which are contragredient to each other, one has that dim Hom S 1 (V 1 , C χ S 1 ) · dim Hom S 2 (V 2 , C χ S 2 ) ≤ 1.
Now we finish the proof of Theorem 1.3. Assume that V 1 = V is an irreducible representation of G in the class F H. Define the irreducible representation V 2 of G in the class F H as follows. The representation V 2 equals V as a topological vector space, and the action ρ 2 of G on V 2 is given, in terms of the anti-automorphism τ , by twisting the action ρ 1 of G on V 1 . Using character theory and the fact that g is always conjugate to g τ , we conclude that V 1 and V 2 are contragredient to each other [AGS07, Theorem 2.4.2]. Now let S 1 = S 2 = S and χ S 1 = χ S 2 = χ S . Theorem 1.4 says that the assumption of Proposition 10.1 is satisfied, and so the product of the two Hom space dimensions is at most 1. Note that by the identification V 1 = V 2 = V as well as the explicit actions, the two Hom spaces in question have the same dimension. Hence dim Hom S (V, C χ S ) ≤ 1, and the proof is complete.
11. Some consequences 11.1. Uniqueness of trilinear forms. The following theorem is proved in [L01] (in an exhaustive approach), and its p-adic analog was proved much earlier in [P90, Theorem 1.1].
As noted near the end of Section 9, if we replace GL 2 (K) by H × , the analog of Proposition 7.1 still holds (again by using Proposition 1.5). Thus the analog of Theorem 11.1 for H × holds. Of course this is well-known and easier. 11.2. Uniqueness of the Jacquet-Shalika model for GL 3 (K). Let L 3 and N 3 be the subgroups of GL 3 (K), as in Section 8. Write S 3 = L 3 N 3 , and Theorem 11.2. Let V be an irreducible representation of GL 3 (K) in the class F H. Then dim Hom S 3 (V, C χ S 3 ) ≤ 1.
The theorem then follows, as in Section 10.
We remark that the p-adic analog of Theorem 11.2 holds true, as the same proof goes through.
Remark: By inducing the character χ S 3 to a Heisenberg group, one may obtain uniqueness of the Fourier-Jacobi model for GL 3 (K). 11.3. Uniqueness of a certain model for GL 4 (K) × GL 2 (K). Recall from the Introduction: and χ S 4,2 = χ S | S 4,2 .
To conclude the above, we further assume that f (x τ ) = −f (x). We need to show that f = 0. Denote By using Proposition 7.1 and Proposition 8.1, we first show that f is supported in C 4,2 . Proposition 5.1 further implies that f can only be supported in Z ′ 4 .
Now set
$x_{4,\mathrm{left}} = \begin{pmatrix}0&I_2&0\\0&0&0\\0&0&0\end{pmatrix} \in \mathfrak{gl}_4(K) \times \mathfrak{gl}_2(K)$ (written in 2 × 2 blocks), and denote by X 4,left the left invariant vector field on GL 4 (K) × GL 2 (K) whose tangent vector at x is x x 4,left . As in Section 4, one checks that X 4,left is transversal to Z ′ 4 . We may then conclude that f = 0, as in Section 9.
11.4. Uniqueness of Whittaker models. Let G be a quasisplit connected reductive algebraic group defined over R. Let B be a Borel subgroup of G, with unipotent radical N. Let χ N : N(R) → C × be a generic unitary character. The meaning of "generic" will be explained later in the proof.
The following theorem is fundamental and well-known. For G = GL n , this is a celebrated result of Shalika [S74]. A proof in general may be found in [CHM00, Theorem 9.2]. We shall give a short proof based on the notion of unipotent χ-incompatibility.
Theorem 11.4. Let V be an irreducible representation of G(R) in the class F H. Then dim Hom N(R) (V, C χ N ) ≤ 1.
Proof. We say that a representation is in the class DH if it is the strong dual of a representation in the class F H. The current theorem can then be reformulated as follows: the space U χ −1 N = {u ∈ U | gu = χ −1 N (g)u for all g ∈ N(R)} is at most one dimensional for every irreducible representation U of G(R) in the class DH.
LetB be a Borel subgroup opposite to B, with unipotent radicalN. Then T = B ∩B is a maximal torus. Let χ T : T(R) → C × be an arbitrary character. Then for all t ∈ T(R),n ∈N(R)} is the distributional version of nonunitary principal series representations. By Casselman's subrepresentation theorem (in the category of representations in the class DH), it suffices to show that (11.1) dim U(χ T ) χ −1 N ≤ 1, for any χ T . Let H G =B(R) × N(R), which acts on G(R) by (b, n)x =bxn −1 . Write which defines a character of H G . Then (11.1) is equivalent to (11.2) dim C −∞ χ G (G(R)) ≤ 1. Let W be the Weyl group of G(R) with respect to T. We have the Bruhat decomposition From this we form a H G stable filtration of G(R) by open subsets, with G 1 =B(R)N(R) and every difference G i \ G i−1 a Bruhat cell G w , for i ≥ 2. | 2009-12-23T12:57:06.000Z | 2009-03-08T00:00:00.000 | {
"year": 2009,
"sha1": "573e736a31252b67f806927df0c3630401bace41",
"oa_license": null,
"oa_url": "https://www.ams.org/tran/2011-363-05/S0002-9947-2010-05285-7/S0002-9947-2010-05285-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "573e736a31252b67f806927df0c3630401bace41",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
245853580 | pes2o/s2orc | v3-fos-license | Why is Parity Restored?
While Left-Right symmetry (space parity) breaking historically appeared as a surprise, we argue that the real wonder is its restoration in long-distance interactions (at least until we find electric dipole moments!).
Introduction
In a picture taken nearby, the catamaran boat appears symmetrical. We notice a "natural" tendency to enhance this symmetry, by positioning the camera, choosing the framing; on the other hand, we tend to ignore the obviously asymmetrical elements (like the clouds), maybe taking them as "accidental".
Historically, this tendency to search for symmetry has been very strong (I don't allude here to the gauge symmetry which is not an "apparent" symmetry, but much more a reflection of the redundancy of a representation). The experimental evidence of symmetry breaking came as a surprise, and efforts were made (for instance: parity doubling) to restore it even at large cost: we will see other examples. Interestingly, CP (or T) violation , which may be more bothersome to the mind, was not such a surprise: after the discovery of P violation, it was actively searched for in Kaon decays (notably at the suggestion of Lev B. Okun).
While we may tend to believe that delving deeper to more fundamental aspects would reveal more symmetry, in a way, daily practice may teach the opposite. For instance, humans and most animals exhibit an external (approximate) symmetry, but the internal organs are strongly asymmetrical (not to mention the DNA molecule). Evolution and natural selection may of course explain that advantages of this symmetry (for locomotion for instance) have imposed the external morphology.
To some extent, this is a metaphor of what happens in (particle) physics: long distance interactions like gravity, electromagnetism and even the strong force act in a symmetrical way, which may have given the above-mentioned intuition of a fundamental character of parity, while the short-distance interactions (weak interactions) are maximally parity-violating.
While some popular lore tends to blame Left-Right symmetry breaking on some "environmental" circumstance (the de facto absence of a light right-handed neutrino), the origin is obviously deeper (as we remind that parity breaking was first encountered in purely hadronic interactions), but could still be cured at higher energies in thus-far elusive L-R symmetrical models (not to mention SO(10) and related approaches).
In an interesting twist, we might a contrario find some residual fairly long-range parity violation through electric dipole moments for instance.
It came as a surprise ...and in the hadronic sector
Although some early indications of polarization of beta rays from radium decays existed, the evidence from the purely hadronic sector indicating possible parity violation came as a shock. Known as the θ-τ puzzle (this τ has nothing to do with the lepton of the same name), the observation seemed to indicate the presence of 2 distinct spin-0 particles produced under the same conditions, of similar mass but decaying differently: into 2 or 3 pions (in s-wave).
θ⁺ → π⁺ π⁰ ; τ⁺ → π⁺ π⁺ π⁻. Since the pion parity had been fixed to (−1) from its interactions with nucleons, this implied either 2 degenerate particles (up to the experimental precision) or a parity violation, as proposed by T. D. Lee and C. N. Yang.
The textbook C.-S. Wu experiment then came as a test and a confirmation of the 2nd interpretation, rather than a discovery per se. The flux of electrons is maximum opposite to the decaying Co spin, which ensures that the average of the pseudoscalar observable ⟨p_e · J_Co⟩ (which should vanish if parity were conserved) is non-zero.
A straightforward interpretation is in the nature of the weak boson W couplings. This can however be formulated in 2 slightly different but fundamentally inequivalent ways. A popular saying was that the W boson couples in an L-R symmetrical way (vector coupling) but that the 2nd process is forbidden by, say, a large Majorana mass for the ν_R. This is a typical example of the approach "fundamental laws have to be symmetrical, but accidental boundary conditions may not be" (this even applies to the expanding universe!). Alternatively, but still in the same line of reasoning, the ν_R might even be absent, although this attitude in fact implies the 2nd approach. Very often it is still maintained (even in teaching) that the ν_R is "beyond the Standard Model" for this reason. If it is true that it was not included in the original papers, neither were the heavy quarks: they were either not known or not needed. It can certainly be argued that having ν_R and Dirac masses for neutrinos present is the simplest way to account for massive neutrinos, and the one most in line with the Standard Model, even though Majorana masses are an elegant possibility.
Don't blame the neutrino
The 2nd approach (which proves to be the correct one) is to assume that, irrespective of the ν_R's presence, the W bosons only couple to the left-handed leptons. By this, we imply that not only the W± but also (by gauge invariance) their SU(2) partner W⁰, which enters the photon and the neutral Z boson, have purely chiral couplings to leptons. This approach was quickly vindicated by the discovery of atomic parity violation, which results from the photon-Z interference (no neutrino appears in this process).
It also opens the way to explaining the θ-τ puzzle, with both particles now identified with the K±, and the chiral couplings of the quarks explaining this purely hadronic parity-violating decay (which would be impossible to account for from the sole absence of the ν_R). In passing, I would like to make a remark about "maximally violated parity symmetry" (which is the case in charged weak interactions), to stress that this is in fact to be expected. Since we are dealing with a non-abelian gauge symmetry, the coupling constant is uniquely fixed (up to trivial Clebsch-Gordan coefficients). There is no way L and R fermions can couple in slightly different ways; it is an all-or-nothing situation (and, of course, L and R spinors are our fundamental building blocks, not the resulting V or A couplings). Any "intermediate" level of parity violation could only stem from the U(1) sector, but this is strongly constrained by anomalies.
So... why is Parity restored at long distances?
We have seen above that Parity violation is (for now) a fundamental characteristic of fundamental interactions, and not an "accidental", "environmental" or "boundary condition" issue (like the expanding Universe probably is).
For instance, unified theories like SU(5) (which would need extra particles to reach grand unification at an acceptable scale) are formulated using only left-handed multiplets (which can include CP conjugate spinors).
Despite this, it is still tempting to advocate a fundamental L-R symmetry which would be broken down to our existing observations. Such is the case of SO(10)-based models, where the 16 representation includes all fermions from one family, including the ν_R, and which restores the symmetry at the Lagrangian level. In this case, the group structure results in another kind of "parity doubling". Instead of doubling the "matter" objects, we are doubling the "interaction vectors", allowing for instance for W_L and W_R bosons, and blaming the observed parity violation not on the Lagrangian, but on a peculiar symmetry breaking solution which yields larger masses for the latter boson. This possibility is certainly very much alive, despite the problems its implementation may pose with domain walls in the evolution of the early universe. Now, let us turn to the question announced in this section: "how come we have been abused for so long into thinking that fundamental interactions were L-R symmetrical (parity invariant)?" The fact is that all observed interactions, until the discovery of radioactivity (and β decays in particular), respected parity. This is certainly the case of gravitation, but also of electromagnetism (here the use of a handedness choice to define the B field is purely conventional, since the same convention comes again in the Lorentz force, thus making sure that observable effects, unlike man-made tools, are parity invariant). Another (almost) macroscopic force, the strong force, which is of relatively long range, also respects parity: all was thus put in place to dupe us into believing that parity invariance was a fundamental requirement of nature, and that the observed asymmetries (DNA, asymmetrical internal organization of human beings, ...) were purely accidental (as is, for instance, the rotation of the Earth).
For the strong force, obeying the gauge symmetry SU(3), a first "explanation" might come from the compensation of anomalies. As long as we only face low representations, the only compensation of the triplet 3 of left-handed colored quarks occurs through a left-handed 3̄ (anti-triplet), which is also the CP conjugate of the right-handed 3: this results in an L-R symmetrical matter content.
Probably more relevant is the fact that the presence of mass for the "matter" particles implies LR symmetry.
Consider indeed the mass term and the gauge transformations, written in a generic way (a sketch of the argument is given below). For abelian theories (for instance, QED, whether it originates from an abelian formulation or is left over after breaking of a larger group), this simply implies the equality of L and R couplings, and thus the restoration of parity. If we try to apply this to non-abelian groups (think SU(3)), since we are in an all-or-nothing situation (the gauge coupling is unique), this means that the L and R partners must belong to identical representations.
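A minimal sketch of the standard argument, in generic notation that is an assumption here rather than the author's own conventions: writing the Dirac mass term in chiral components and imposing invariance under an abelian gauge transformation forces the left and right charges to coincide.

% Sketch in generic notation (assumed conventions):
\[
  m\,\bar\psi\psi \;=\; m\left(\bar\psi_L \psi_R + \bar\psi_R \psi_L\right),
  \qquad
  \psi_L \to e^{i q_L \alpha(x)}\,\psi_L ,\quad
  \psi_R \to e^{i q_R \alpha(x)}\,\psi_R .
\]
% Under the transformation the mass term picks up a phase e^{i(q_R - q_L)\alpha(x)},
% so a nonzero gauge-invariant mass requires q_L = q_R: the abelian gauge field couples
% identically to the two chiralities, i.e. parity is restored in its long-range interaction.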
Of course, this approach is rather pragmatic: all observed electrically charged particles are massive (the issue is more difficult to decide for quarks, since the determination of masses comes mostly from current algebra, but a massless u quark is generally considered as impossible, and anyway anomaly cancelation in the Standard Model would imply the same result). The argument in any case would apply to the effectively observed objects (say, nucleons).
For the more speculative minds, this begs the question: is it possible to have massless charged particles? We know that such theories are difficult to formulate in a consistent way. Longitudinal divergences are a major concern when no low-energy cut-off is present. For instance, in formulating a scattering experiment, care should be taken that an initial "massless electron" would be degenerate with a "massless electron + L-polarized photon". For the time being (but weakly coupled sectors like dark photons might bring surprises) Nature seems to have spared us this chore!
A remaining possibility for long-distance P violation
One possibility still exists to find long-distance (or at least macroscopic) P violation. If indeed the currently searched-for electric dipole moments are found (for the neutron, electron or proton), they would induce parity-violating observations, as exemplified in the "gedanken" experiment below. We leave it to the reader to apply a mirror transformation to the apparatus and compare to the mirror image of the drawing. In the present context of course, we would not consider this as a fundamental interaction, but rather as a modification of the electromagnetic sources originating from a (parity violating, possibly spontaneously broken) theory.
Acknowledgements
This work is supported by IISN (Belgium) and the Brout-Englert-Lemaître Center (Brussels), funded in part by Innoviris. I wish to thank Cesar Gomez (UAM) for discussions and the organizers of the Corfu meetings for the occasion to evoke those issues.
"year": 2022,
"sha1": "fd47af162a6d3200954d23f3092639e4839d3c74",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fd47af162a6d3200954d23f3092639e4839d3c74",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3909246 | pes2o/s2orc | v3-fos-license | Genetic variability among isolates of Coconut lethal yellowing phytoplasmas determined by Heteroduplex Mobility Assay ( HMA )
*Present Address: Embrapa Recursos Genéticos e Biotecnologia, Parque Estação Biológica, 70770-900, Brasília, DF, Brazil. ABSTRACT Heteroduplex mobility assay (HMA) was used to determine genomic diversity among African isolates of coconut lethal yellowing phytoplasmas causing Cape St. Paul wilt disease (CSPD, Ghana), lethal disease (LD, Tanzania), and lethal yellowing (LYM, Mozambique). They were also compared to the Caribbean phytoplasma associated with coconut lethal yellowing (LY). A DNA fragment of 1850 bp covering the 16S rRNA gene and 16/23S intergenic spacer region of each isolate was amplified with primers P1 and P7 and subsequently submitted to HMA analysis for sequence variation. A PCR product amplified from GH5D (CSPD isolate) as a reference was combined with each PCR product and electrophoresed on polyacrylamide gels. Three groups of phytoplasmas associated with various coconut lethal yellowing diseases were identified by HMA. The samples from Mozambique (LYM) and Ghana (CSPD) formed one group, which was different from the second group, LD from Tanzania. These two groups were different from the third group of Caribbean isolates. This grouping was consistent with the genetic diversity described in the coconut yellowing-associated phytoplasmas detected after cloning, sequencing, and phylogenetic analyses. The HMA technique described here has the potential to provide a simple and rapid means to identify and to establish the diversity of isolates within the coconut lethal yellowing disease group.
Phytoplasmas are known to be the causal agents of lethal yellowing diseases (LYD) of coconut (Cocos nucifera), which are thought to have been endemic to the Caribbean since the end of the 19th century and to West Africa since 1930. However, diagnosis of phytoplasma diseases by electron microscopy has only been possible since the 1970s. Polymerase chain reaction (PCR) diagnosis is now routinely used for coconut lethal yellowing (LY) in the Caribbean. Two strategies allow the study of genetic variability in PCR products: restriction fragment length polymorphisms (RFLP) or sequencing, both of which require considerable time and expense. Heteroduplex mobility assay (HMA) is a fast and inexpensive method for determining relatedness between DNA sequences. It was developed by Delwart et al. (1993) to evaluate viral heterogeneity and for genetic typing of human immunodeficiency virus (HIV). Wang & Griffith (1991) studied the effect of a single base deletion on the electrophoresis of heteroduplex DNA in cross-linked gels. The DNAs containing a single base deletion in one strand resulted in a bulge in the other strand and were electrophoretically retarded in comparison to DNAs with no bulges (Arens, 1999). Heteroduplexes are formed when two non-identical but closely related single-stranded DNA fragments anneal. Such molecules will have structural distortions at mismatched base pairs and at unpaired bases where an insertion or a deletion in the nucleotide sequence has occurred (Upchurch et al., 2000). Heteroduplexes migrate more slowly than a homoduplex in polyacrylamide gel electrophoresis. The extent of this retardation has been shown to be proportional to the degree of divergence between the two DNA sequences. The presence of an unpaired base is known to influence the mobility of a heteroduplex more than a mismatched nucleotide (Wang & Griffith, 1991, Upchurch et al., 2000).
The HMA method has been used to characterize the variability of plant virus and phytoplasma diseases. HMA has been used for differentiation of phytoplasmas in the aster yellows group and clover proliferation group (Wang & Hiruki, 2001), determination of genetic variability among isolates of Australian grapevine phytoplasmas (Constable and Symons, 2004), study of the genetic diversity of 62 phytoplasma isolates from North America, Europe and Asia (Wang & Hiruki, 2005), and for phylogenetic relationships among flavescence dorée strains and related phytoplasmas belonging to the elm yellows group (Angelini et al., 2003). We used HMA to investigate the genetic variability of various isolates of African LYD phytoplasmas associated with Cape St. Paul wilt disease (CSPD, Ghana), lethal disease (LD, Tanzania), and lethal yellowing (LYM, Mozambique). They were also compared to the Caribbean phytoplasma associated with LY.
Thirty-six isolates from coconut trees infected with lethal yellowing diseases were used in this study: 15 from Ghana (GH1D - GH15D, Western Region), 14 from Tanzania (Tanz 1 - Tanz 14, Bagamoyo district, Pwani Region), three from Mozambique (LYM 3, LYM12, LYM18, from Zambezia province), one from Mexico (Yucatan), one from Cuba (Cuba 166, Granma State), one from Honduras (Atlantic coast), and one from Jamaica. The DNA from an isolate of the clover phyllody phytoplasma (D62 Dijon, France) was used as the experimental control. Healthy coconut plants were also used as controls. Total DNA was extracted from each sample using a DNeasy plant DNA extraction Kit (Qiagen) according to the manufacturer's instructions.
Phytoplasma infection was investigated by PCR employing the phytoplasma "universal primers" P1 (Deng & Hiruki, 1991) and P7 (Smart et al., 1996), designed to amplify the large DNA fragment comprising the entire 16S rRNA gene and 16/23S spacer region. The PCR reaction was performed in a final volume of 50 µL containing about 200 µM mixed deoxynucleotide triphosphates (dNTPs), 100 ng of each primer, 1.25 units of Taq DNA polymerase (Taq PCR Core Kit, Qiagen), 5X PCR buffer supplied with the enzyme, and 50 ng of template DNA.
Using the universal primer set, PCR was carried out for 35 cycles under the following conditions: denaturation for 30 s (1 min 30 s for the first cycle) at 94°C, annealing for 50 s at 56°C, and extension for 1 min 30 s at 72°C. Reactions were terminated after the 35 cycles with a 10 min extension step at 72°C and cooled to 4°C. After amplification, 8 µL from each sample was subjected to electrophoresis in 1% agarose gel using 1X TBE (0.089 M Tris, 0.089 M borate, 2 mM EDTA) running buffer and visualized by UV light after staining with ethidium bromide.
The 16/23S spacer region and 16S rDNA gene amplified by PCR from different LYD phytoplasmas were analyzed by HMA. A 5 µl aliquot of the PCR product amplified from GH5D (from Ghana) as a reference was combined with 5 µl of each PCR product amplified from the thirty-six isolates from coconut trees infected with the respective phytoplasmas. For each combination, 2 µl of annealing buffer (100 mM Tris-HCl at pH 8.0, 20 mM EDTA, and 1 M NaCl) was added. One drop of mineral oil was overlaid on the reaction mixture. Samples were then denatured at 98°C for 4 min, rapidly cooled to 4°C, and then placed on ice for 20 min. Samples were electrophoresed on 5% non-denaturing polyacrylamide gels (acrylamide:bis 29:1) in 1X TBE buffer at 230 V for 4 h at room temperature. The migration of heteroduplexes was also verified on 2% agarose gel in 1X TBE. DNA bands were stained in ethidium bromide and visualized under a UV transilluminator.
When the primers P1/P7 were used, DNA fragments of approximately 1.8 kb, the expected size, were obtained from all DNA samples from diseased plants, but not from healthy coconuts (data not shown). The PCR product obtained with the isolate GH5D (LYD from Ghana: Cape St. Paul wilt disease - CSPW) was used as a reference and combined with those amplified from each of the other phytoplasma samples. The DNA heteroduplexes were formed by pairwise combination of the PCR-amplified fragment from GH5D and each of the other LY phytoplasmas. PCR products contain many heteroduplexes which are not resolved in agarose gel but may be resolved and appear as distinct bands in polyacrylamide gels.
When the heterologous DNA fragments were analysed by HMA in polyacrylamide gel, some bands migrated more slowly and were considered heteroduplexes between divergent DNA molecules formed during the process of denaturing and reannealing. These heteroduplexes were not observed when the reference PCR products were reannealed in homologous combinations (homoduplexes). A single band of ~1800 bp, corresponding to the homoduplex, was observed when the PCR products of GH5D were combined with products from other CSPW isolates from Ghana or from LYM (Mozambique) isolates. Heteroduplexes were formed between GH5D and LD-Tanzania (East Africa) isolates, LY-Cuba, LY-Mexico, LY-Jamaica (Caribbean region) and the clover phyllody phytoplasma from France (D62-Dijon).
A number of distinct homoduplex and heteroduplex patterns were observed (Figure 1). The isolates from Ghana and Mozambique (lanes 1-5) produced no heteroduplex, signifying that they are very closely related. The isolates from Tanzania (lanes 6-7) produced heteroduplexes with different mobilities, showing a different profile from the Ghana isolate and suggesting that these isolates are different. The Caribbean isolates (lanes: 8-Mexico, 9-Cuba, 10-Jamaica) produced heteroduplexes that differed in size from those formed between the Ghana and Tanzania isolates, showing a different profile and suggesting that these isolates are different from the African isolates. The clover phyllody phytoplasma (lane 11) differed substantially when paired with the PCR product of the Ghana isolate.
Post-amplification methods for rapidly analyzing variation in PCR amplicons include restriction fragment length polymorphism (RFLP) analysis and HMAs. RFLP assays of 16S rDNA PCR products have been described for the classification, identification and differentiation of phytoplasmas (Lee et al., 1993, Tymon et al., 1997; Mpunami et al., 1999; Harrison et al., 2002). A limitation of the method is that the presence of a mutation cannot be detected unless that mutation happens to fall within the recognition sequence of the restriction enzyme being used for digestion of the PCR products (Arens, 1999). Ultimately, HMA could be used for initial screening among a large number of isolates and rapid identification of viruses, phytoplasmas and other organisms.
In this work, we used a combined PCR-HMA method to study genetic variability among isolates of coconut lethal yellowing diseases. Three groups of phytoplasmas associated with various LYD were identified. The samples from Mozambique (LYM) and Ghana (CSPW) formed one group, which was surprisingly different from the second group, LD from Tanzania, a country adjacent to Mozambique. These two groups were different from the third group of Caribbean isolates. This result confirmed, in one experiment, the genetic diversity described in the coconut yellowing-associated phytoplasmas detected after cloning, sequencing, and phylogenetic analyses (Tymon et al., 1998). The difference between the Tanzania and Mozambique isolates is worth further investigation.
In conclusion, the results show that combined 16S rRNA gene PCR-HMA is a powerful tool for the identification and genetic characterization of coconut lethal yellowing phytoplasmas, and the test takes only 24 to 36 h to perform. The technique could also be applied to other genes of coconut lethal yellowing phytoplasmas. This approach could be further developed to facilitate phylogenetic study and diagnosis of many other phytoplasmas and the development of a comprehensive PCR-based classification system.
"year": 2008,
"sha1": "6c4bcc77842f0d6e605e93c31ce352e6b74ef3de",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/tpp/v33n5/v33n5a06.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6c4bcc77842f0d6e605e93c31ce352e6b74ef3de",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
225062588 | pes2o/s2orc | v3-fos-license | Chaos and Ergodicity in Extended Quantum Systems with Noisy Driving
We study the time evolution operator in a family of local quantum circuits with random fields in a fixed direction. We argue that the presence of quantum chaos implies that at large times the time evolution operator becomes effectively a random matrix in the many-body Hilbert space. To quantify this phenomenon we compute analytically the squared magnitude of the trace of the evolution operator -- the generalised spectral form factor -- and compare it with the prediction of Random Matrix Theory (RMT). We show that for the systems under consideration the generalised spectral form factor can be expressed in terms of dynamical correlation functions of local observables in the infinite temperature state, linking chaotic and ergodic properties of the systems. This also provides a connection between the many-body Thouless time $\tau_{\rm th}$ -- the time at which the generalised spectral form factor starts following the random matrix theory prediction -- and the conservation laws of the system. Moreover, we explain different scalings of $\tau_{\rm th}$ with the system size, observed for systems with and without the conservation laws.
The concept of chaos is very natural in classical systems. Its naive formulation in terms of strong sensitivity of the trajectory to the initial conditions, the "butterfly effect", is so simple and powerful that has long become an element of the popular culture. During the second half of the twentieth century this concept has been refined, from both the physical and mathematical points of view, leading to a complete theory of chaos in classical dynamical systems [1-4] that can be regarded as one of the greatest achievements of mathematical physics.
In the quantum realm the situation is much less intuitive due to the absence of well defined trajectories and the linear structure of the unitary evolution. In this context, a key role is played by the spectral correlations of the time-evolution operator. Indeed, as established in a series of seminal works [5][6][7], systems with a well defined chaotic classical limit have a spectrum with correlations that coincide with those of an ensemble of random matrices with the same symmetries. The latter property remains well defined also away from the classical limit and has then been taken as a definition of quantum chaos. However, the problem of connecting the spectral statistics with more intuitive dynamical properties of the system remained open.
Over the last decade the problem of characterising chaos in quantum systems received a renewed interest due to seminal results coming from the study of black holes [8,9] and connecting quantum many-body chaos with the scrambling of quantum information. In turn, this renaissance also produced new discoveries concerning chaos in extended quantum many-body systems on the lattice [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26] and led to the introduction of useful minimal models like local random unitary circuits [11,27] and dual-unitary circuits [28]. For some of these systems it has been possible to compute measures of the spectral statistics [10,[12][13][14]18], proving that they indeed follow the predictions of Random Matrix Theory. Importantly, however, it has been realised that in generic extended systems with local interactions this happens only for energy levels smaller than a certain scale E th , known as the Thouless energy, which bears information on the spatial structure. This energy scale (or the associated Thouless time τ th = ℏ/E th ) is believed to display different scalings with the system size depending on the conservation laws of the system.
In the recent comeback of quantum chaos an important role has been played by driven systems, as they furnish a simpler modelling of many interesting dynamical phenomena [23][24][25][26][27]. For these systems, in the generic instance of aperiodic driving, the spectral statistics is not well defined (their time-evolution operator is time-dependent) and the chaotic regime has been identified by looking at some features of the quantum many-body dynamics, seeking a quantum many-body analogue of the butterfly effect. Some of the most studied features have been the spreading of support of local operators (measured, e.g., by out-of-time-ordered correlators [29][30][31]), the growth of complexity in the classical simulations of the dynamics [32], and the scrambling of quantum information [33]. However, even though all these features are connected to an idea of 'dynamical complexity', they provide different information. It is unclear which minimal set of these features, if any, a system has to display to be considered chaotic.
In this Letter we follow a different route and regard as "chaotic" those driven systems where the time-evolution operator acquires random matrix spectral correlations after a certain initial transient [34,35]. This is a direct generalisation of the traditional definition of quantum chaos, and the transient is naturally interpreted as the Thouless time [35]. We present a family of local quantum circuits with random fields in a fixed direction. In these systems, which have no semi-classical limit, the time-dependent spectral correlations can be characterised exactly. In particular, we compute the squared magnitude of the trace of the evolution operator, which we dub the Generalised Spectral Form Factor (GSFF), and show that at the leading order in time it is fully specified by the two-point dynamical correlation functions of local operators in the infinite temperature state. This provides an unprecedented direct link between spectral properties and local physics. We use this result to show that the regime of quantum chaos coincides with the ergodic and mixing one (where all dynamical correlations decay in time), and to elucidate the connection between conservation laws and scaling of the Thouless time with the system size.
More specifically, we consider a chain of length L with 2L qubits placed at integer and half-integer indexed sites. Thus, the Hilbert space of the system is H = (C 2 ) ⊗2L . The time evolution is governed by a brickwork-like local quantum circuit, consisting of unitary matrices (gates) acting on two neighbouring spins (with periodic boundary conditions). We consider the case where the gates are different at each space-time point and represent the time evolution by the brickwork diagram (1), in which two-site gates are depicted graphically and different colours denote different matrices. Note that we adopt the convention of time running upwards. We remark that this setting is in fact quite general. It can be thought of as generated by a disordered local Hamiltonian, which changes at each half-integer time due to some external driving. This formulation of quantum evolution is widely used, for instance, in the context of quantum simulators [36]. The main quantity of interest for this paper is the GSFF, defined in Eq. (3) as K g (t) = ⟨|tr U(t)| 2 ⟩, where the trace is reduced to a common eigenspace of U(t) and all its commuting symmetries, and ⟨·⟩ denotes an average of some sort (either a moving time-average or an average over an ensemble of similar systems). Such an average is necessary because the distribution of |tr U(t)| 2 in an ensemble of systems does not generically become infinitely peaked even in the limit of infinitely many degrees of freedom [37]. From the definition (3), we see that K g (t) with unrestricted trace can be interpreted as the survival probability (or Loschmidt echo) for a random initial state, which is another chaos indicator [38,39]. As mentioned before, here we regard U(t) as "chaotic" if there exists a scale τ th such that Eq. (4) holds, namely such that for t ≫ τ th the GSFF approaches its value in the circular unitary ensemble (CUE) [40,41]; the asymptotic equality is understood in the leading and possibly subleading order in τ th /t, and the CUE is appropriate since our system is not time-reversal invariant. Eq. (4) allows us to illustrate that (3) is very different from the (conventional) spectral form factor considered in periodically driven systems [40][41][42]. Indeed, here the full time-evolution operator U(t) is expected to behave like a random matrix at large times, while in the periodically driven case U(t) behaves as the t-th power of a random matrix. This means that K g (t) should be compared with the conventional spectral form factor at time t = 1 and that the asymptotic value in (4) has nothing to do with the asymptotic value (exponentially large in the volume) at which the conventional spectral form factor relaxes for times larger than the inverse level spacing (which are, again, exponentially large in the volume). In our setting (1), a natural way to produce ensembles of different systems is to introduce noise in the local gate U x,t . Since we are interested in generic drivings, we look at time-dependent noise and, to avoid any bias, we choose it to be independently distributed in space and time. Specifically, following [43], we take random gates of the form (5), in which a fixed U(4) matrix U is dressed by single-site random-field factors of the type e iφ x,t σ z ; the φ x,t are independent random variables uniformly distributed over [−π, π], and ⊗ denotes the tensor product between two neighbouring sites. From the physical point of view the choice (5) describes a homogeneous spin-1/2 chain where the time evolution is periodic but each spin is subject to white noise produced by a random magnetic field in the z-direction [44].
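As a concrete illustration of the definition just given, the Python sketch below estimates K_g(t) = ⟨|tr U(t)|²⟩ by brute force for a very small chain: it builds a brickwork circuit from one fixed two-site gate, dresses every qubit at every layer with exp(i φ σ^z) for φ drawn uniformly from [−π, π], and averages |tr U(t)|² over noise realisations. The specific gate (Haar-random here), the exact placement of the field factors, the system size and the sample numbers are illustrative assumptions and not the choices made in the paper; the grounded parts are the quantity being estimated, the brickwork geometry with uniform fields, and the CUE benchmark ⟨|tr U|²⟩_CUE = 1 that a chaotic circuit should approach.

import numpy as np

rng = np.random.default_rng(1)
n = 6                     # number of qubits (2L in the paper's notation); illustrative
dim = 2 ** n

def haar_u4(rng):
    # One fixed two-site gate, drawn here from the Haar measure on U(4) (illustrative choice).
    z = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def kron_list(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def shift_permutation(n):
    # Permutation matrix cyclically relabelling the qubits by one site (used for the odd layer).
    P = np.zeros((2 ** n, 2 ** n))
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        shifted = bits[1:] + bits[:1]
        P[sum(b << (n - 1 - k) for k, b in enumerate(shifted)), idx] = 1.0
    return P

P_SHIFT = shift_permutation(n)

def noisy_layer(U2, n, odd, rng):
    # One brickwork layer: the fixed gate on neighbouring pairs, each qubit dressed by
    # exp(i*phi*sigma_z) with phi uniform in [-pi, pi] (field placement is an assumption).
    gates = []
    for _ in range(n // 2):
        phi = rng.uniform(-np.pi, np.pi, size=2)
        f1 = np.diag(np.exp(1j * phi[0] * np.array([1.0, -1.0])))
        f2 = np.diag(np.exp(1j * phi[1] * np.array([1.0, -1.0])))
        gates.append(np.kron(f1, f2) @ U2)
    layer = kron_list(gates)
    if odd:
        layer = P_SHIFT.T @ layer @ P_SHIFT
    return layer

def sample_K(t, U2, rng):
    U = np.eye(dim, dtype=complex)
    for step in range(2 * t):                  # even and odd half-layers per time step
        U = noisy_layer(U2, n, step % 2 == 1, rng) @ U
    return abs(np.trace(U)) ** 2

U2 = haar_u4(rng)
for t in (1, 2, 4, 8):
    K_est = np.mean([sample_K(t, U2, rng) for _ in range(200)])
    print(t, round(K_est, 3))                  # should approach the CUE value 1 at large t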
For gates of the form (5), the average ⟨·⟩ in (3) can be implemented locally by placing U(t) † on top of U(t) in such a way that each gate lies on top of its conjugate (i.e. 'folding' the circuit, see, e.g., Ref. [43]). Specifically, the average projects each folded wire onto the subspace spanned by the diagonal operators (we denote |∘⟩ ≡ |1⟩ and |•⟩ ≡ |σ z ⟩) and allows us to write (3) in the diagrammatic form (6), where top and bottom wires at the same position are connected because of the trace. Above we introduced the non-unitary 'averaged gate' (7), written in the local basis {|∘⟩, |•⟩}, with 9 real parameters in [−1, 1] depending on the choice of U in (5) [45]. Remarkably, for any U , w becomes bistochastic after a Hadamard transformation on the single wires [43]. This means that for the choice (5) of noise, K g (t, L) can be interpreted as the state-averaged return probability in a classical stochastic Markov process built as a brickwork circuit with the gate w.
Let us now evaluate the first two orders in the asymptotic expansion of (6) for large times. Conceptually, this will parallel similar derivations carried out in the periodically driven case, in both single-particle [46,47] and many-body [10,12,13,16,18,35,48] contexts. Indeed, even though K g (t) will generically relax to 1 and not to t (cf. Eq. (4)), in both cases the leading correction is exponential and the relaxation timescale can be interpreted as a Thouless time.
To proceed we now expand the trace (6) in the computational basis {|e m i ⟩}, where m = 0, . . . , 2L denotes the particle number (number of • quasiparticles) and i = 1, . . . , $\binom{2L}{m}$ labels states in a fixed m sector. Assuming that there are no conserved charges we obtain the decomposition (8), where we defined, in 'first quantisation notation', the sector contributions (9), and used that each K g (m) (t) is expressed as the sum of the averaged autocorrelation functions of the extended operators σ z x1 · · · σ z xm (with x 1 < x 2 < · · · < x m , and x j ∈ Z 2L /2) in finite volume L (cf. Ref. [43]).
Let us now focus on a special family of reduced gates (7): those with either no splittings (f = e = 0) or no mergers (b = d = 0) and with non-negative weights. For this family of gates we can invoke the following property (proven in Sec. II of the Supplemental Material (SM) [45]): Property 1. The averaged dynamical correlations ⟨x 1 · · · x m | y 1 · · · y m (t)⟩ L are bounded from above by the expression in Eq. (10), where S m is the permutation group of m elements.
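Schematically, and as an assumption consistent with the role played by the permutation group S_m and with the proof strategy of Sec. II of the SM, a bound of this type reads

\[
  \langle x_1 \cdots x_m \,|\, y_1 \cdots y_m(t) \rangle_L \;\le\;
  C \sum_{\sigma \in S_m} \prod_{j=1}^{m}
  \langle x_j \,|\, y_{\sigma(j)}(t) \rangle_L ,
\]
% where C is a constant depending only on the gate parameters. This schematic form is an
% assumption of the present sketch, but it conveys the mechanism: all higher-point
% contributions are controlled by products of decaying two-point functions.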
Moreover, we also have: Property 2. The two-point functions have the asymptotic expansion in t given in Eq. (11), while C ηx,ηy are constant amplitudes (C 0,0 and C 1,1 are reported in Sec. III of the SM).
An instructive way to obtain the expansion (11) is to note that the correlations in finite volume can be written as a sum over windings of the infinite-volume correlations ⟨x | y(t)⟩ ∞ , which are known exactly from Ref. [43]. This form follows from the observation that for no splittings (mergers) the only contributions to the correlation come from continuous paths (the skeleton diagrams [43]) connecting the endpoints, and wrapping around the cylindrical worldsheet along the space direction an arbitrary number of times. The maximal number of wrappings is restricted by the maximal speed of propagation. Then, Eq. (11) follows directly by plugging in the asymptotic form (14) [where the diffusion constant is given by D = [4π(C 0,0 + C 1,1 ) 2 ] −1 and the fluid velocity ζ̄ is defined in Sec. III of the SM], and turning the sum over wL/t into an integral for large t/L.
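As a sketch, a drift-diffusion Gaussian of the following type is consistent with the quoted diffusion constant and fluid velocity; the overall amplitude and the precise way the decay factor λ^t enters are assumptions here.

% Schematic drift-diffusion form of the infinite-volume two-point function:
\[
  \langle x \,|\, y(t)\rangle_{\infty} \;\simeq\;
  \frac{C_{\eta_x,\eta_y}\;\lambda^{t}}{\sqrt{4\pi D t}}\,
  \exp\!\left[-\frac{\left(x-y-\bar\zeta\, t\right)^{2}}{4 D t}\right],
  \qquad
  D=\frac{1}{4\pi\left(C_{0,0}+C_{1,1}\right)^{2}} .
\]
% Summing such a kernel over the winding images y + wL and replacing the sum by an
% integral at sufficiently large t reproduces a finite-volume expansion of the form (11).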
Using the asymptotic form (11) for two-point correlations and Property 1 we find the bound (15) (see Sec. V of the SM). This leads us to our first main result: for large times and λ · max[1, g/(ε 1 ε 2 + ac)] < 1, the GSFF is fully determined by correlation functions of local observables. In particular, since λ > max(a 2 , c 2 ), we find Eq. (16). Note that in this case τ th is set by the exponent governing the decay of two-point correlations in infinite volume. Note also that there is no L dependence in τ th , in contrast to the log L dependence found in several examples of extended systems, see e.g. [10,12,35].
Eq. (16) shows excellent agreement with the exact numerical evaluation of K g (t), see Fig. 1 for a representative example. Moreover, our numerical observations suggest that the bound (15) is too conservative and Eq. (16) holds whenever λ < 1, namely whenever the averaged two-point correlations decay exponentially.
When some of the gate's parameters (7) are negative, the Gaussian asymptotic form (14) is not valid. We calculate K g (1) (t) ∼ λ t by diagonalising an effective Markov operator, see Sec. IV of the SM (λ can be different from the one in (12)). Moreover, we again bound the other contributions as in (15) (with a minor modification, see Sec. V of the SM).
Let us now consider a special case for which (15) does not provide a useful bound (because λ = 1): namely, the case of averaged gates with a conservation law. This situation has been extensively studied in the recent literature [13,35,49,50] and can be realised in our setting by considering a gate U (and hence U x,τ in Eq. (5)) that conserves the magnetisation in the z direction. This leads to the following averaged gate. Note that the time-evolution operator generated by this gate is integrable: it is an example of a Floquet XXX model at a non-unitary point [51]. Interestingly, a similar Floquet XXX model was obtained in Ref. [13] after averaging a U(1)-symmetric Floquet Haar random circuit. Finally, we remark that a similar reduced gate for driven systems has been studied in Ref. [35]. Since the magnetisation is conserved, the trace in (3) is reduced to a single magnetisation sector. This means that instead of K g (t) in Eq. (8) we should consider a single term K g (m) (t); as shown in Fig. 2, these terms decay to one with the same exponent in every sector. This can be understood directly from the Bethe-Ansatz solution (see, e.g., the supplemental material of Ref. [13]). Indeed, by looking at the finite volume eigenstates one finds that the lowest excitations (those with eigenvalue of the Markov operator closest to one) are one-magnon excitations (as opposed to bound states or scattering states of many magnons). Since the one-magnon states are highest weight states of the representation of SU(2) with S z = L − 1, their descendants (obtained by multiple applications of the lowering operator S − ) appear in all sectors m = 1, ..., 2L − 1. Therefore all sectors have the same Thouless time, which can be deduced from the m = 1 sector.
For large times, the averaged two-point function for m = 1 takes a simple diffusive form, where D = tan 2 (2J)/4 is the diffusion constant and we neglected exponentially small corrections with L-independent exponents, because we expect an L-dependent Thouless time. Using again Eq. (13), extending the summation to ±∞ [52], and utilising the Poisson summation formula, we obtain an explicit expression for the m = 1 contribution (a sketch of the resummation step is given below).
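The Poisson-summation step is standard; assuming the diffusive form of the m = 1 correlation is a periodised Gaussian kernel (an assumption consistent with the quoted diffusion constant), the resummation reads

\[
  \sum_{w\in\mathbb{Z}} \frac{1}{\sqrt{4\pi D t}}\,
  \exp\!\left[-\frac{(x-y+wL)^{2}}{4 D t}\right]
  \;=\;
  \frac{1}{L}\sum_{k\in\mathbb{Z}}
  e^{-4\pi^{2} k^{2} D t / L^{2}}\, e^{2\pi i k (x-y)/L} .
\]
% The k = 0 term gives the constant (equilibrated) contribution, while the slowest
% nonvanishing mode decays as exp(-4 pi^2 D t / L^2); this is the origin of a Thouless
% time proportional to L^2/D, as stated in the next paragraph.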
Another interesting limiting case is when, in addition to no splittings (or merges), at least one of ε 1 and ε 2 vanishes (note that ε 1 = ε 2 = 0 if and only if the gate U is dual-unitary [28,43]). In this case K g (t) = 1 + (a 2t + c 2t )Lδ t mod L + ..., and the GSFF admits a closed-form expression (see Sec. VI of the SM). The model is chaotic when all a, c, g differ from ±1. In contrast, if the above conditions does not hold, K g (t) with unrestricted trace does not decay to the RMT result. This signals new commuting symmetries and possibly non-chaotic behaviour. For instance, for a = c = g = 1 (corresponding to the SWAP gate) and unrestricted trace we find K g (t)| SWAP = 4 gcd(t,L) . Here gcd(t, L) is the greatest common divisor of L and t. This result is manifestly larger than the RMT result.
In the general case, when both mergers and split-tings are allowed, there is a phase transition in the decay exponent of infinite volume correlations [43]. In particular, there is a region in parameter space (see Eq. (41) in Ref. [43]) where the decay of quasiparticles is still governed by λ in Eq. (12), while for parameters out of this region the exponent changes. Moreover, all K (m) g (t) will decay with the same exponent (since the number of particles can change during the time evolution, all K (m) g (t) contain the slow-decaying configurations). However, this means that the decay exponent can again be determined from two-point functions of local operators and that τ th = −1/ log λ max , where λ max = lim t→∞ (max x x | 0 (t) ∞ ) 1/t . This is in agreement with our numerical experiments, as shown Fig. 3 for a representative example.
In conclusion, we studied the GSFF in a class of local quantum circuits with random fields, expressing it in terms of (averaged) dynamical correlations of local observables. By means of this correspondence we showed that in the regime where the correlations decay exponentially in time (known as ergodic and mixing in ergodicity theory) the GSFF approaches the prediction of random matrix theory over the same time-scale. Moreover, we proved that the GSFF approaches the prediction of random matrix theory also in the presence of a conservation law, if the correlations take a diffusive form. In this case the timescale is proportional to the system size squared divided by the diffusion constant. Finally, we showed that when the correlations do not decay, the GSFF does not approach the random matrix theory prediction. The correspondence between quantum chaotic and quantum ergodic and mixing regimes is expected on general grounds [39,[53][54][55] and provides an intuitive understanding of quantum chaos. Our results in a specific setting provide a rigorous proof of such a correspondence, and pave the way for its quantitative understanding in more general settings. Moreover, interpreting the U(1)-noise-averaged GSFF as a state-averaged return probability for a general bistochastic brickwork Markov circuit provides an analogous correspondence in classical stochastic systems.
Supplemental Material for "Chaos and Ergodicity in Extended Quantum Systems with Noisy Driving"
Here we report some useful information complementing the main text. In particular:
- In Section I we show how to derive the representation (6) for the Generalised Spectral Form Factor;
- In Section II we prove Property 1;
- In Section III we compute the asymptotics of the averaged two-point functions of local operators in infinite volume;
- In Section IV we compute K^(1)_g(t) using an effective Markov operator;
- In Section V we establish the bound in Eq. (15);
- In Section VI we compute K_g(t) with unrestricted trace for f = e = ε1 = 0;
- In Section VII we report the parameters of the gates used in our numerical experiments.

In this appendix we show how to write the GSFF in terms of reduced gates, both in the presence (18) and in the absence (6) of conservation laws.
We begin by writing the GSFF K_g(t) = tr U(t) tr U†(t) in the circuit representation. Bending the upper part (blue gates) underneath the lower one (red gates), and using that the average factorises at each space-time point (the positions of the gates in the two copies match), we find an expression in which we introduce the averaged "double gates". Here the tensor product ⊗_c denotes the tensor product between the two copies of the system corresponding to the backward and forward time evolution (respectively the one given by red and blue gates in (sm-1)).
Let us now consider the local gate given by Eq. (5). Noting that each single-site gate with random field can be split into two random gates, e^{iφ_{x,τ}σ_z} = e^{iφ'_{x,τ}σ_z} · e^{iφ''_{x,τ}σ_z} (φ_{x,τ} denotes the random strength of the original field in Eq. (5), and φ'_{x,τ}, φ''_{x,τ} are new random fields), we can write the gate in the form (sm-3), where ⊗_s denotes the tensor product between neighbouring sites. Plugging this into (sm-4), we find (sm-5). We omitted the subscript x,τ in W, as the gate does not depend on the position for our choice of U_{x,τ}; this follows from the fact that we average over φ'_{x,τ}, φ''_{x,τ}, φ'_{x+1/2,τ}, φ''_{x+1/2,τ}. Defining z(φ_{x,τ}) := e^{−iφ_{x,τ}σ_z} ⊗_c e^{iφ_{x,τ}σ_z} and simplifying the tensor products, we get (sm-7). Next we use that the average factorises and that the averaged z(φ) is a projector. In particular, P_z projects the 4-dimensional space of states of a wire onto a 2-dimensional subspace, which we dub the 'reduced local space'. Concretely, in the single-site basis an operator O is written as Σ_{s1,s2} [O]_{s1,s2} |s1⟩ ⊗_c |s2⟩ (sm-11), where {|0⟩, |1⟩} is a basis of the local Hilbert space, and P_z retains only the diagonal components s1 = s2. Therefore, we define a reduced folded gate w, which is W written in this reduced Hilbert space, where "red" denotes an operator that acts on the reduced Hilbert space of diagonal operators. w thus acts on two sites with two degrees of freedom each and is a four-by-four matrix. Next, we use an explicit parametrisation of a general four-by-four unitary matrix for our gate U, resulting in the explicitly parametrised reduced gate w of Eq. (7) of the main text. This is done explicitly in Appendix B of [43] and is not repeated here.
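The statement that averaging over the random single-site phases produces a projector onto the two-dimensional space of diagonal operators can be checked directly. The following sketch is an illustration only, assuming the phases are drawn uniformly from [0, 2π) (the paper's field distribution may differ); it averages z(φ) = e^{−iφσ_z} ⊗ e^{+iφσ_z} numerically and verifies that the result is a rank-2 projector:

```python
import numpy as np

# Pauli-z and the two-copy (folded) phase gate z(phi) = exp(-i phi sz) (x)_c exp(+i phi sz)
sz = np.diag([1.0, -1.0])

def z_gate(phi):
    u = np.diag(np.exp(-1j * phi * np.diag(sz)))   # exp(-i phi sz), diagonal 2x2
    return np.kron(u, u.conj())                    # forward copy (x)_c backward copy

# Average over phases assumed uniform on [0, 2pi)
rng = np.random.default_rng(0)
P = np.mean([z_gate(phi) for phi in rng.uniform(0, 2 * np.pi, 20000)], axis=0)

# The exact average is diag(1, 0, 0, 1): only components with s1 = s2 survive,
# i.e. the projector onto diagonal single-site operators (rank 2).
print(np.round(P.real, 2))
print("idempotent:", np.allclose(P @ P, P, atol=1e-2),
      "rank:", np.linalg.matrix_rank(np.round(P, 1)))
```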
Let us comment on what happens in the presence of a conservation law. Specifically, by demanding that the local gate U conserve magnetization (in the z-direction), we can explicitly parametrize it.

In this appendix we prove Property 1, namely we show that for no splittings, e = f = 0 (or no mergers, b = d = 0), and non-negative weights, the 2m-point correlation functions ⟨x1 ⋯ xm | y1 ⋯ ym(t)⟩_L are bounded by ⟨x1 ⋯ xm | y1 ⋯ ym(t)⟩_L < max(1, …), as stated in Eq. (sm-14), where S_m is the permutation group of m elements. We start by expressing the correlation functions as the sum of contributions from allowed configurations, as discussed in Section V A of [43]. Inserting a resolution of the identity 1 = | ⟩⟨ | + | ⟩⟨ | at each reduced-operator wire, we can explicitly decompose each correlation function into the sum of 2^{4Lt} terms. Configurations are expressed in terms of "tiles", where we connect the particles with solid lines and ignore the vacancies [43]; see (sm-16) for an example. The complete set of allowed tiles (corresponding to non-zero coefficients of the gate (7)), when there are no splittings or mergers, is given in (sm-17). Note that, if there are no splittings (mergers) and we need to end up with the same number of particles, we cannot have any mergers (splittings). We start with m = 2. The four-point correlation function can be expressed as the sum of the weights of configurations C⁴_i, where the configurations C⁴_i have four fixed boundary conditions at x1, x2; y1, y2. In the example shown, x1 = y1 = 3/2 and x2 = y2 = 5/2. Here w(C) is the weight of the configuration, which is the product of the weights of all its tiles. For instance, the weight of the g tile is w(g tile) = g, and w(C⁴_e) = g a c ε1² ε2².
Next, we define a map F, which maps a configuration to a set of configurations, where n is the number of g tiles in C⁴_i. It maps each g tile to either an ac tile or an ε1ε2 tile, resulting in 2^n different configurations (sm-21). Configurations denoted by a tilde have the "new tiles" ac and ε1ε2 and no g tiles.
For this map we can prove the following. Proof: Since x ≠ y, the configurations differ in at least one tile. Moreover, because a g tile is the only tile with two incoming lines, they need to differ in a tile different from g. F does not change tiles different from g; therefore the configurations resulting from different configurations differ in the initially different tile. The configurations from the same set are different by construction.
Next, notice that we can express the sum of the weights of the tilde configurations in terms of the weights of the configurations without the tilde, where we used the factorisation of the weights. To see this, notice that we can factor out of the sum the weights of all tiles different from g in C_i and of the tiles at the same positions in the tilde configurations. We are then left only with a product over g tiles, each of them mapped to (ε1ε2 + ac). For instance, a configuration with two g tiles, w(C⁴_i) = gg (sm-24), maps to Σ_{k=1}^{4} w(C̃⁴_{i,k}) = (ε1ε2 + ac)(ε1ε2 + ac) (sm-25). Each tiling C̃⁴_{i,k} is different and corresponds to a different term in the diagrammatic expansion. Finally, using that the maximal possible number of g tiles is t (for L > 1), we see that ⟨x1 x2 | y1 y2(t)⟩_L ≤ max(1, …), as in (sm-27). Similarly, for 2 < m < 2L we use that the maximal possible number of g tiles is (m − 1)t to find (sm-14).
Asymptotics between integer indexed points
Let us start considering the correlation functions between integer indexed points. First we note that they can be expressed in terms of the ordinary hypergeometric function and the Jacobi polynomials.
where ₂F₁ is the Gaussian or ordinary hypergeometric function. The expression can be rewritten using the Jacobi polynomials P_n, where we introduced z = ε1ε2/(ac). We now proceed to develop the asymptotic expansion of (sm-28). We begin by writing it in terms of the ray variable ζ and z = ε1ε2/(ac). We obtain the asymptotic form by first using Stirling's approximation for P_n. Then we write the Taylor expansion of log P_n in n around the maximum, substitute the sum over n by an integral, and integrate over n. Finally, we write the Taylor expansion of log⟨ζt | 0(t)⟩_∞ in ζ to second order around the maximum. We therefore obtain a Gaussian form of the correlations, which accurately describes the asymptotic behaviour. Stirling's approximation for P_n yields (sm-32). Next we find the maximum n̄ by demanding that the derivative of log P_n(ζ, t) vanishes. We obtain an equation which we solve up to order 1/t, where we took the relevant root of the quadratic equation, for which 0 ≤ v ≤ 1. The expression for n₀ contains the first sub-leading terms from the LHS of Eq. (sm-33) and the leading order from the RHS.
Next, we derive S_n̄ = −∂²_n log P_n evaluated at the maximum. The simplified value at the saddle point, P_n̄(ζ, t), reads (sm-37). Next, we approximate the sum by an integral and obtain ∫ dn P_n̄(ζ, t) e^{−S_n̄(n−n̄)²/2} = δ_{ζ,1} a^{2t} + √(2π/S_n̄) P_n̄(ζ, t) (sm-38). The resulting asymptotic form is rather involved, but it is well approximated by a Gaussian function of ζ. To find it, we first determine the maximum with respect to ζ; setting the first derivative to zero leads to ζ̄, where we introduced ∆ = 4acz + (a − c)². To get the correct normalisation of the final result, we need the O(1/t) term. Computing the second derivative at ζ̄ up to first order, we obtain the final form (sm-44), with v₀ = v(ζ₀). A comparison with the exact evaluation is shown in Fig. sm-1.
Asymptotics between integer and half-integer indexed points
The calculation of the asymptotics for correlations with different endpoints follows the same lines. The final results read as where x ∈ Z L , C 1,1 = Sζ 2π − C 0,0 , and we do not report an explicit form of C 1,0 and C 0,1 as it is not needed in the main text.
where we introduced the following 2 × 2 matrices (rows separated by semicolons): A = (ε1ε2, cε2; aε1, ε1ε2), B = (c², 0; cε1, 0), C = (0, aε2; 0, a²). (sm-50) The eigenvectors of M can then be written in terms of the two eigenvectors v±_m of the 2 × 2 matrix in (sm-52). The eigenvalues are given by (sm-53). In particular, the eigenvalue with the largest absolute magnitude is one of the four choices listed in (sm-54); λ⁺₀ is the relevant solution when all parameters are positive, and coincides with λ in Eq. (12) of the main text. In this case we immediately obtain the asymptotic form for the effective gate, where the parameters of the gate are substituted by their absolute values. In particular, we have λ_eff = (1/4)[|a| + |c| + √((|a| − |c|)² + 4|ε1||ε2|)]².
VI. K_g(t) FOR f = e = ε1 = 0

Here it is more convenient to look at the representation of K_g(t) in Eq. (sm-65) (drawn for L = 4, t = 2). Since we do not allow splittings (e = f = 0), mergers will not appear in the evaluation of (sm-65); otherwise there would be a different number of operators at the bottom and the top. Furthermore, since ε1 = 0, we cannot convert a left mover into a right mover, and since the number of right movers at the bottom and at the top is the same, ε2 tiles cannot contribute either. Therefore, the only allowed tiles are those with weights 1, a, c and g (see the discussion around (sm-16) for the introduction of tiles), as listed in (sm-66). We start by considering a state on a single site and follow its evolution on a single wire until the line closes on itself, which we call an orbit; see (sm-67) for an example, where we used periodic boundary conditions in space and time (from the trace). The state on a given wire travels straight along the light-cone. When it reaches the top, the trace acts as a periodic boundary condition in time, so it continues to travel until it reaches the starting point. By then it has travelled through 2ℓ gates, with ℓ = lcm(t, L) (lcm stands for least common multiple). There are o = tL/ℓ distinct left- (right-) moving orbits. Each orbit can either be empty or occupied by the non-trivial reduced operator, with weights 1 or c² (a² for right-moving orbits), respectively.
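The orbit counting above is simple number theory. The sketch below (illustrative only) checks that the number of distinct left- (or right-) moving orbits, o = tL/lcm(t, L), equals gcd(t, L) and tabulates it for a few system sizes; o is largest exactly when t is a multiple of L, which is when orbit-built quantities such as the unrestricted-trace GSFF revive rather than decay:

```python
from math import gcd

for L in (3, 4, 6, 8):
    orbit_counts = []
    for t in range(1, 13):
        ell = t * L // gcd(t, L)      # orbit period: least common multiple of t and L
        o = t * L // ell              # number of distinct left- (or right-) moving orbits
        assert o == gcd(t, L)         # tL / lcm(t, L) = gcd(t, L)
        orbit_counts.append(o)
    print(f"L={L}:", orbit_counts)
```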
Consider first the case g = 0. Here occupied orbits can be only left-moving or only right-moving; otherwise occupied left- and right-moving orbits would cross, and each crossing would contribute a factor of 0 (since g = 0). The GSFF then follows by summing the weights of these orbit fillings.

Direct numerical evaluations have been performed by exactly computing K(t). We achieved this by exactly contracting the diagram (sm-65), using some basic functionalities of the ITensor Library [56]. The parameters of the gates used in the numerical experiments are given in Table sm | 2020-10-26T01:00:15.184Z | 2020-10-23T00:00:00.000 |
"year": 2020,
"sha1": "7f9fa1e4ea1df4571d016d79cbfcadd8c8d67592",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.126.190601",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "e07fd9db5ae161063d77c5d899d69b2dccf91149",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Medicine"
]
} |
254246959 | pes2o/s2orc | v3-fos-license | Applications of human activity recognition in industrial processes -- Synergy of human and technology
Human-technology collaboration relies on verbal and non-verbal communication. Machines must be able to detect and understand the movements of humans to facilitate non-verbal communication. In this article, we introduce ongoing research on human activity recognition in intralogistics, and show how it can be applied in industrial settings. We show how semantic attributes can be used to describe human activities flexibly and how context information increases the performance of classifiers in recognising them automatically. Beyond that, we present a concept based on a cyber-physical twin that can reduce the effort and time necessary to create a training dataset for human activity recognition. In the future, it will be possible to train a classifier solely with realistic simulation data, while maintaining or even increasing the classification performance.
INCREASING THE EFFICIENCY OF MANUAL PROCESSES USING AUTOMATIC HUMAN ACTIVITY RECOGNITION
Increasing automation in production and logistics, with simultaneous increases in the complexity of manual processes, is leading to more and more interaction between humans and machines. Synergetic collaboration relies on communication, including verbal and non-verbal interactions. In material handling systems, the human factor is still a crucial variable, which is wrongly assumed to be deterministic in planning and simulation models. Time data on manual activities are necessary to implement a data-driven simulation that considers the non-deterministic motion behaviour of humans. Machines must be able to detect and understand the movement of humans to facilitate non-verbal communication. One way to do this is through sensor-based human activity recognition (HAR).
HAR assigns sequences of human movements, recorded by sensors in a machine-readable format, to predefined activities. The main advantages of this approach are its automation and scalability. In contrast to traditional methods, where movements are only recognised manually, movements can be recognised automatically, and not only simple activities but also complex ones can be detected.
HAR has already found its way into our everyday life. For example, smart watches or fitness trackers use human activity recognition to count steps, recognise types of sport, or analyse our sleep patterns. They record movements using an inertial measurement unit (IMU) and evaluate them in real-time. In addition to everyday use, HAR can be found in other domains of science and industry [1]: In health care, for example, nursing staff are automatically informed about falls of patients [2]. Another field of application is the detection of hand movements. This not only facilitates human-machine interaction [3] but also communication in the context of recognising sign language. Of the many other fields of application, the industry deserves special mention. Research is focused on production [4] and services such as intralogistics [6], [7]. While fitness trackers use HAR to distinguish between step and no step, the movements to be analysed in the industry are more complex. As movements become more complex and more detailed, automatic recognition of activities will become less accurate. The methods can be extended to include additional data streams -the so-called context [5]. A context includes information about human entities, objects and tools that do not directly involve human movements [6]. This information may relate to their condition, identity or location [7]. For example, if an employee is standing next to a shelf (object) with a scanner in hand, this data can be used to improve activity recognition [8].
In order for HAR methods to recognise movements from sensor data, they first need to be trained. Using annotated data in which the movements are already assigned to defined activities (labels), HAR methods learn to recognise different patterns in movement and context data [9], [10]. Unknown movements can then be assigned to the defined activities based on these patterns. The required data are collected in the Innovationlab of the Chair of Materials Handling and Warehousing (FLW) and in cooperation with industrial partners such as MotionMiners© GmbH. The aim of recordings in such a laboratory is to examine intralogistic systems in the planning phase and to record data. HAR methods are trained using these data before a system is put into operation.
INNOVATIONLAB FOR MOVEMENT ANALYSIS AND DATA ACQUISITION
The Innovationlab Hybrid Services in Logistics at FLW is a laboratory for testing innovative technologies in real-life conditions. Here, new forms of interaction between human and technology can be tested within the framework of modern logistics systems. It is the preferred environment for many research approaches due to the fact that logistics systems are physically observable in a laboratory.
First of all, even scenarios that have potentially safety-critical elements for humans in the development phase can be reproduced in the Innovationlab without danger. For example, the interaction of employees with transport drones, which are supposed to perceive them autonomously via various sensor interfaces, may involve a risk of collision. In a real laboratory, the necessary safety precautions can be taken, and additional sensor systems can be used for monitoring that would not be available in the real industrial system, such as a warehouse. Using reference sensors of the laboratory environment, solutions can be improved in a targeted manner toward practical suitability. The Innovationlab approach is fundamentally not dependent on the existence of a real, operational logistics system. As a result, realistic testing of technologically driven changes is already possible during planning. For example, activity detection sensors and classifiers can be developed even for logistics systems that were not yet operational at the time of the laboratory experiments. Human activity recognition solutions that are tailored, in terms of time and quantity, to the manual activities and processes in a newly designed or redesigned logistics system are then available from day one. In addition, the experiments and recordings in the laboratory environment already allow conclusions to be drawn about potential improvements of the real system.
A laboratory environment provides the opportunity to assess various sensor technologies according to their applicability in specific applications of activity recognition. IMUs, video cameras and an optical motion capturing (MoCap) system are used in the Innovationlab to capture human movements. The MoCap system uses infrared cameras to capture markers worn on the body or attached to objects. Because of its high accuracy, the system serves as a reference against which less precise but more industrially suitable sensors are assessed. Technologies for determining the position of employees and objects include, for example, radio frequency identification, ultra-wideband, Bluetooth low energy, WLAN or indoor global positioning systems. With the MoCap reference, the optimal attachment of the sensors can be determined for each technology, which leads to the most meaningful context data.
ONGOING RESEARCH WORK IN THE INNOVATIONLAB
The Innovationlab is successfully used in various research projects on human activity recognition. Following is an overview that outlines their respective visions and points of contact for industrial transfer projects.
Attribute-based representation of classes
According to previous activity recognition research, activities such as locomotion and handling are automatically recognised by a classifier as specific activity classes. Modern material flow systems, following the vision of Industry 4.0, adapt intelligently to changing circumstances. Thus, the assumption that all activity classes are known while designing a HAR solution, and that their number, delimitation and definition remain the same at all times, is increasingly far from reality. Introducing a new class for each variant of a new activity complicates annotation, i.e., the labelling of the recorded data, and reduces the number of examples per class. A rigid association between a sensor pattern and a specific activity class label does not accurately reflect the diversity of human movements.
Semantic approaches offer a way out of the dilemma between the need for the greatest detail in the activity definition and the resulting decrease in transferability between different industrial applications on the other hand. This approach comes from image recognition [11]. Researchers assigned labels to animal images that included a semantic description of the animals, e.g., whether it was a white or black animal, and what habitat it lived in. With these attributes, a classifier is able to recognise unknown concepts or classes (in this example animals) based on their semantic description. Transferred to human activities, the idea is to represent activity classes by semantic descriptions. These descriptions are in a figurative sense letters from which words, i.e., activity classes can be flexibly formed. In [12] attributes such as standing, step, walking, handling upwards, handling centered, handling downwards, left hand, right hand as well as poses for objects such as a cart or bulky and handy unit are distinguished. Their combination allows the unambiguous description of arbitrarily defined classes [13]- [17].
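To make the attribute idea concrete, here is a minimal sketch (illustrative only; the attribute names follow the list above, but the binary vectors and the nearest-neighbour decision rule are assumptions, not the exact procedure of [12]):

```python
import numpy as np

# Attribute vocabulary taken from the list above
ATTRIBUTES = ["standing", "step", "walking", "handling_up", "handling_centred",
              "handling_down", "left_hand", "right_hand", "cart", "bulky_unit", "handy_unit"]

# Activity classes described as binary attribute vectors (hypothetical definitions)
CLASSES = {
    "pick_from_lower_shelf": [1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1],
    "push_cart":             [0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0],
    "place_on_upper_shelf":  [1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0],
}

def classify(attribute_probs):
    """Map a vector of per-attribute probabilities (e.g. from a CNN) to the
    semantically closest activity class; a new class only needs a new row."""
    probs = np.asarray(attribute_probs)
    dists = {name: np.linalg.norm(probs - np.asarray(vec)) for name, vec in CLASSES.items()}
    return min(dists, key=dists.get)

print(classify([0.9, 0.1, 0.0, 0.1, 0.0, 0.8, 0.1, 0.9, 0.0, 0.1, 0.7]))
# -> 'pick_from_lower_shelf'
```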
This approach has already been successfully validated with MotionMiners©. Movement data recorded in the Innovationlab were used to train a convolutional neural network that achieved comparable activity recognition performance to a classifier trained on real systems data. In the real system, the data recording could have been used immediately for analysis. The heterogeneous activity definitions in the various systems and the laboratory data could be bridged by semantic attributes.
Context-aware activity recognition
As described in the previous sub-chapter, words (activity classes) can be formed from semantic descriptions (attributes). But words can have different meanings depending on how they are used in a sentence. Consequently, the activity classes need to be put into context so that the information and its implications can be understood. Through further data streams, the recognised attributes can be enriched with, among other things, the identity and position of people, tools and objects, and beyond that with process knowledge and state and transition logic, and can then be assigned to activity classes by a context-aware classifier.
The approach of an attribute-based representation of classes was extended by adding a random forest, which uses context information such as the position of objects (see Figure 1). The purpose of context information is to improve the performance of the convolutional neural network and to understand the captured process comprehensively. Through context information, we are able to expand the recognised activities and identify when, where and why the activity is carried out. This also allows us to deduce mistakes, such as picking the wrong item. [8] The process knowledge is used to derive the process (picking) and the sub-process (delivery confirmation). The execution time of the respective sub-processes can be collected from the data. The status and transition logic can even be used to determine previous sub-processes. Before delivery confirmation, the picker should have put at least one or more items of an order into the cart. In this step, if the result deviates from the previously determined sub-processes, the activity class can also be adjusted. The sub-process following delivery confirmation can be roughly estimated due to limitations in the transition logic. Next, two sub-processes come into question: Either the employee picks other items in his vicinity or they move forward with the picking cart, e.g. to consolidation.
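A minimal sketch of such a context-aware second stage could look as follows. This is illustrative only: the feature names, the use of scikit-learn and the specific model are assumptions rather than the authors' implementation, and the attribute probabilities are imagined to come from the convolutional neural network described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n, n_attributes = 500, 11
attribute_probs = rng.random((n, n_attributes))           # per-window CNN attribute outputs
context = np.column_stack([
    rng.random(n) * 5.0,                                   # distance to nearest shelf [m]
    rng.integers(0, 2, n),                                 # scanner currently in hand (0/1)
    rng.integers(0, 4, n),                                 # process state from transition logic
])
X = np.hstack([attribute_probs, context])                  # concatenate attributes + context
y = rng.integers(0, 3, n)                                  # activity class labels (dummy)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))                                  # context-aware class predictions
```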
The performance of activity recognition can be significantly enhanced by using context data from the Innovationlab. Furthermore, individual context features were analysed for their relevance to activity recognition [18]. By using this method, data from a warehouse can be collected in a cost- and effort-efficient manner. But context data can be used for more than activity recognition: process analysis and the cyber-physical twin can also be based on the activity classes, attributes and context data.
Generation of human movement data - cyber-physical twin
The creation of a training dataset is associated with high effort. Data have to be recorded, annotated and then manually revised. In addition, the necessary data volume increases with the complexity of the movements, as is the case, for example, in production or intralogistics. As a result, HAR methods are mainly tested on simple everyday situations that involve the least effort in the preparation and execution of recordings. Simulations (see Figure 2), which are based on human movements, can be generated in order to reduce the necessary effort for collecting data from complex environments, such as intralogistics. Human movements are modified using sensor data from logistics and other domains, resulting in synthetic and thus new movements. [19] With the simulation environment, movements can be exported as different sensor outputs (e.g., acceleration of body regions as in an IMU, or a point cloud as in a MoCap system). Depending on the data basis, the modification can even take into account age, gender, physical limitations or signs of fatigue. Based on these variations, person- and situation-specific simulations can be performed. The data generation approach makes it possible to transfer planned systems to a simulation environment and to generate artificial movement data. Without the expensive and time-consuming implementation of the system in the real environment or in the laboratory, training data can be generated and HAR methods can be trained.
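As a rough illustration of how simulated motion can be turned into IMU-like training signals, the sketch below derives a synthetic acceleration trace from an artificial wrist trajectory by double differentiation and perturbs its timing to mimic person-specific variation. This is a simplified stand-in for the simulation environment described above, not its actual pipeline:

```python
import numpy as np

fs = 100.0                                  # virtual sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)

# Artificial wrist trajectory for a reach-and-place motion (placeholder for simulation/MoCap output)
position = np.stack([0.3 * np.sin(2 * np.pi * 0.5 * t),
                     0.1 * np.sin(2 * np.pi * 1.0 * t),
                     0.2 * (1 - np.cos(2 * np.pi * 0.5 * t))], axis=1)

def synthetic_imu(pos, speed_factor=1.0, noise_std=0.05, rng=np.random.default_rng(0)):
    """Double-differentiate a position trace to an acceleration trace, with simple
    person-specific variation (execution speed) and sensor noise."""
    n = int(len(pos) / speed_factor)
    idx = np.linspace(0, len(pos) - 1, n)
    resampled = np.stack([np.interp(idx, np.arange(len(pos)), pos[:, d]) for d in range(3)], axis=1)
    accel = np.gradient(np.gradient(resampled, 1 / fs, axis=0), 1 / fs, axis=0)
    return accel + rng.normal(0, noise_std, accel.shape)

fast_worker = synthetic_imu(position, speed_factor=1.3)   # faster execution
tired_worker = synthetic_imu(position, speed_factor=0.8)  # slower execution
print(fast_worker.shape, tired_worker.shape)
```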
THE INNOVATIONLAB AS AN INTERFACE BETWEEN INDUSTRY AND RESEARCH
The conclusion of scientific research work is its empirical validation within the scope of its application. The Innovationlab, with its high-precision measuring instruments, serves as an ideal reference environment. Methods, technologies and software are analysed for their suitability for industrial application without changing ongoing operations within the company. The emphasis is on the interplay between new methods and technologies, as well as the consequences for the overall system. A variety of sensor technologies, such as the MoCap system and IMUs, were linked to simulation software and HAR methods. The research work presented in this article has been or is being validated in cooperation with various industrial partners in real warehouses. The positive experiences have confirmed the Innovationlab as an interface between industry and research. | 2022-12-06T06:43:09.436Z | 2022-12-05T00:00:00.000 |
"year": 2022,
"sha1": "6f91d2d043bd62f1d4ec67a93efa2fcbf02e8f83",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6f91d2d043bd62f1d4ec67a93efa2fcbf02e8f83",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
51628273 | pes2o/s2orc | v3-fos-license | Geospatial Clustering of Opioid-Related Emergency Medical Services Runs for Public Deployment of Naloxone
Introduction The epidemic of opioid use disorder and opioid overdose carries extensive morbidity and mortality and necessitates a multi-pronged, community-level response. Bystander administration of the opioid overdose antidote naloxone is effective, but it is not universally available and requires consistent effort on the part of citizens to proactively carry naloxone. An alternate approach would be to position naloxone kits where they are most needed in a community, in a manner analogous to automated external defibrillators. We hypothesized that opioid overdoses would show geospatial clustering within a community, leading to potential target sites for such publicly deployed naloxone (PDN). Methods We performed a retrospective chart review of 700 emergency medical service (EMS) runs that involved opioid overdose or naloxone administration in Cambridge, Massachusetts, between October 16, 2016 and May 10, 2017. We used geospatial analysis to examine for clustering in general, and to identify specific clusters amenable to PDN sites. Results Opioid-related emergency medical services (EMS) runs in Cambridge, Massachusetts (MA), exhibit significant geospatial clustering, and we identified three clusters of opioid-related EMS runs in Cambridge, MA, with distinct characteristics. Models of PDN sites at these clusters show that approximately 40% of all opioid-related EMS runs in Cambridge, MA, would be accessible within 200 meters of PDN sites placed at cluster centroids. Conclusion Identifying clusters of opioid-related EMS runs within a community may help to improve community coverage of naloxone, and strongly suggests that PDN could be a useful adjunct to bystander-administered naloxone in stemming the tide of opioid-related death.
INTRODUCTION
Opioid-associated overdose and death continues at epidemic levels throughout the United States (U.S.), with mortality from opioid use the leading cause of accidental death in the U.S. 1,2 In Massachusetts (MA) there were 1,990 confirmed opioid-related deaths in 2016, an all-time high. Naloxone, administered by the non-medically trained lay public, has been shown to reverse opioid overdoses and save lives; however, it requires an individual carrying naloxone at the same place and time as an overdose occurs. 2,6 Efforts to improve community prevalence of naloxone have focused on increasing prescribing and improving availability in pharmacies, and naloxone is now available in many areas either as an over-the-counter substance or under a standing order. 7 However, barriers to obtaining and carrying bystander naloxone still exist, and bystander naloxone is not currently available everywhere it is needed. [8][9][10] Unlike bystander-carried naloxone, the public deployment of automated external defibrillators (AEDs) in pre-determined, easy-to-access locations for use by bystanders in cases of witnessed arrest requires no single individual in particular to obtain or carry the life-saving device and shifts the burden of providing potentially life-saving equipment from individuals to the community. 11,12 Traditionally deployed in settings of high traffic and mass gatherings such as airports, casinos, or sports stadiums, distribution of AEDs has recently been guided by geospatial analyses of cardiac arrest data and pedestrian traffic with encouraging results. 13,14 Publicly deploying naloxone in AED-like kits may improve naloxone availability to overdose victims and overcome barriers associated with current bystander-carry methods. However, as with AEDs and cardiac arrests, determining where to place potential PDN kits requires understanding where opioid overdoses occur. Recent work by our team and others has shown spatial clustering of opioid-related emergency department visits, opioid-related deaths, and self-reported bystander naloxone use, suggesting that opioid overdoses may also show spatial clustering amenable to PDN placement. [15][16][17] We performed a geospatial analysis of emergency medical services (EMS) runs involving suspected or confirmed opioid overdose in the community of Cambridge, MA. We hypothesized that opioid overdoses do not occur randomly but instead show spatial clustering, and that identifying these clusters would both support the concept of publicly deployed naloxone and help identify locations where naloxone could be stationed for maximum potential effect.
Study Design and Selection of Participants
This was a retrospective analysis of EMS runs that occurred in Cambridge, MA, between October 16, 2016 and May 10, 2017. Cambridge, MA, is a community of approximately 110,000 citizens spread across approximately 17 Km 2 ; EMS calls in Cambridge, MA, are served by a public-private partnership using the public fire service and a single, private EMS company, ProEMS. 18,19 As part of their standard operating procedures, EMS providers record pertinent information in an electronic medical record maintained by the EMS service. All runs for which overdose is part of the dispatch information or provider impression, or in which naloxone was administered by bystanders, first responders or EMS, are submitted to the Cambridge Department of Public Health. All cases, including the EMS patient care record and narrative were reviewed by an independent epidemiologist blinded to our study hypothesis. We included in this analysis any case for which the Cambridge Department of Public Health determined a suspected or probable opioid overdose. Data were manually reviewed for duplicate entries. We excluded any runs originating outside of Cambridge, MA. All data manipulation and statistical analysis was performed using the R programming language. 20 This study was approved by the institutional review board at Partners Healthcare Boston, MA.
Geocoding EMS Runs
Geocoding is the process of determining the exact spatial location of an address: During this process, a humanreadable address such as 795 Massachusetts Ave, Cambridge, MA 02139 (Cambridge City Hall) is transformed to spatial coordinates (e.g., X: -71.106026, Y: 42.36681), which are amenable to mapping and statistical analysis. We performed first-pass geocoding of addresses of EMS runs using
the U.S. Census Geocoder and address-batch geography lookups matched to U.S. Census 2010 data vintage, which provided coordinates in the North American Datum 1983 (NAD83). 21 Addresses not successfully geocoded by the U.S. census were geocoded using Google Maps, which provided coordinates in the World Geodetic System (WGS84). 22 NAD83 and WGS84 systems are equivalent to within approximately two meters over the small areas involved in this study, so the two systems were treated interchangeably for all geospatial analyses reported here. 23-25 Cambridge, MA, city boundaries were defined by the Geographic Information System of the City of Cambridge, MA. 26 Projections between latitude/longitude (degrees) and Cartesian coordinates (meters [m]) were performed using the "sp" package in R. 27 Maps of EMS runs in Cambridge, MA, were produced using either the "spatstat" package in R, or QGIS software with base maps provided by OpenStreetMaps. [28][29][30]

Geospatial Analysis

Global spatial clustering of EMS runs, asking the question of whether EMS runs cluster at all in Cambridge, MA, was examined through calculations of Ripley's K-function [K(r)]. We performed calculations of Ripley's K function, as well as Monte Carlo estimates (MCE) of expected envelopes of K(r), using the "spatstat" package in R with Ripley's isotropic corrections at window borders. 28 Briefly, K(r) tests for clustering in a pattern of spatial points by examining observed vs. expected distributions of points around an index point within circles of various areas; in the setting of complete spatial randomness, the density of points is uniform, so the expected number of points scales with the area of the test circle and K(r) grows in proportion to the circle area (πr²). Compared to the global analysis of clustering provided by Ripley's K function, local analysis of clustering, addressing the question of where exactly within Cambridge, MA, clusters might occur, was performed using density-based clustering. We used an unsupervised, spatial density-based clustering algorithm, the density-based spatial clustering of applications with noise (DBSCAN) method, via the "dbscan" package in R, after projecting coordinate data to the European Petroleum Survey Group (EPSG) Projection 26986. 31,32 The epsilon neighborhood parameter for the DBSCAN algorithm (EPS) was estimated at 200 meters using visual inspection of k-nearest-neighbor (KNN) plots (Supplemental Figure 1), with minimum KNN cluster sizes set to three members as described in the DBSCAN vignette. 31 To maximize the potential utility of identified clusters, we only considered clusters of opioid-related EMS runs with at least 69 runs (10% of the total number of successfully geocoded runs in Cambridge, MA). We calculated distributions of distances between cluster points and cluster centroids in the EPSG 26986 projection using the "raster" package in R. 33
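The authors performed this analysis in R ("dbscan", "spatstat", "raster"); as a rough illustration of the clustering step only, an equivalent sketch in Python using scikit-learn might look as follows (the coordinates below are placeholders, and eps/min_samples mirror the 200 m and three-member settings described above):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder projected coordinates in meters (e.g. EPSG:26986); real data would be
# the geocoded EMS run locations after projection.
rng = np.random.default_rng(1)
runs_xy = np.vstack([
    rng.normal([500, 500], 80, (60, 2)),      # a dense cluster of runs
    rng.normal([3000, 1200], 120, (40, 2)),   # a second cluster
    rng.uniform(0, 5000, (50, 2)),            # scattered background runs
])

labels = DBSCAN(eps=200, min_samples=3).fit_predict(runs_xy)   # 200 m neighbourhood, >= 3 points

for k in sorted(set(labels) - {-1}):
    members = runs_xy[labels == k]
    centroid = members.mean(axis=0)
    print(f"cluster {k}: {len(members)} runs, centroid at {centroid.round(1)} m")
```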
Characteristics of EMS Runs
Between October 16, 2016, and May 10, 2017, we identified 700 opioid-related EMS runs in the ProEMS database, spread among 359 unique addresses in Cambridge, MA. Of these addresses, 353 (98.3%) were successfully geocoded to 349 unique physical locations; the majority (327, 92.6%) were geocoded using the U.S. census, and an additional 26 addresses (7.4%) were geocoded using Google Maps. The discrepancy between addresses and physical locations reflects the fact that multiple distinct addresses can occur at the same coordinates, such as with a multi-unit apartment building. For the remainder of our analyses, we used a location-based, as opposed to an address-based, approach. Collectively, these 349 locations accounted for 693 (99.0%) of the initially identified 700 runs. Figure 1 shows a map of the locations of EMS runs in Cambridge, MA, during the study period. Of note during mapping, three locations (each with one run) were found to lie outside the official spatial boundary of Cambridge, MA, and were removed from further analyses, resulting in a final dataset of 690 geocoded runs. Of these 690 runs, we recorded information on patient gender for 683 runs (99.0%), and patient date of birth for 677 runs (98.1%); patients ranged from less than one year of age to 107 years old at the date of EMS service, with a median age of 36 years (interquartile range [IQR] 29-49 years), and the majority were male (422, 61.8%).
Geospatial Clustering
To test the hypothesis that opioid-related EMS runs in Cambridge, MA, show spatial clustering, we estimated Ripley's K-function for the set of 690 EMS runs that were geocoded within Cambridge, MA. Figure 2 shows an estimate of the K(r) function for the observed EMS runs, as well as a theoretically expected envelope generated by a MCE with 999 simulations of completely random spatial distributions of EMS runs within Cambridge, MA. As the observed estimate of K(r) deviates substantially from the MCE-generated expected envelope at multiple radii, there is statistically significant evidence of EMS runs clustering with an MCE approximate p-value of p ~ 0.001.
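For readers unfamiliar with the statistic, the following sketch (illustrative only; it uses a naive estimator with no edge correction, unlike the Ripley isotropic correction applied by the authors in spatstat) shows the basic K(r) estimate and a Monte Carlo envelope under complete spatial randomness:

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction): pair counts within distance r,
    scaled by area / (n * (n - 1))."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d <= r).sum() - n                      # exclude self-pairs
    return area * counts / (n * (n - 1))

rng = np.random.default_rng(3)
obs = rng.normal([500, 500], 100, (200, 2))          # placeholder clustered pattern
area, r = 1000.0 * 1000.0, 200.0

k_obs = ripley_k(obs, r, area)
k_csr = [ripley_k(rng.uniform(0, 1000, (200, 2)), r, area) for _ in range(99)]
print(f"K_obs(200) = {k_obs:.0f} vs CSR envelope ({min(k_csr):.0f}, {max(k_csr):.0f}); "
      f"CSR expectation ~ {np.pi * r**2:.0f}")
```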
While computing K(r) shows evidence that opioidrelated EMS runs do cluster in general across the study area, understanding where to optimally place PDN sites would require more granular knowledge on locations of individual clusters within the study area. To begin to look for these clusters, we first searched for evidence of clusters of opioid-related EMS runs occurring at an individual location. Of the 346 unique locations in Cambridge, MA, 242 locations (69.9%) had a single run each; 103 locations (29.8%) had between two and 16 runs each, collectively accounting for 372 runs (53.9%); finally, a single outlier location had 76 EMS runs, individually accounting for 11.0% of all runs during the study period. This outlier location is a community-based service organization that provides recovery services and emergency shelter to homeless individuals, including those struggling with drug and alcohol addiction. 34 Compared to runs originating at other addresses, EMS runs originating at this emergency shelter involved patients who were older, with a median age of 43 years (IQR 36.5-58.5 years) for patients coming from the emergency shelter compared to 35 years (IQR 28-48 years) for patients coming from elsewhere in Cambridge, MA. No significant differences were observed in patient gender between patients coming from the service organization or from elsewhere in Cambridge, MA.
After identifying this single-location cluster, we next considered clusters of EMS runs that spanned multiple, distinct locations, using an unsupervised, density-based clustering approach. Figure 3 shows the three distinct clusters of opioid-related EMS runs identified, named clusters "A," "B," and "C." Collectively, 362 EMS runs (52.5%) were located in one of the three clusters. Cluster A includes 86 EMS runs (12.5%) from 42 separate locations covering a roughly circular area of approximately 116,948m 2 (0.05miles 2 ) centered on the Harvard Square area, a busy, mixed commercial-residential area containing a public transportation hub and parts of Harvard University. Cluster B was the largest cluster, involving 191 EMS runs (27.7%) from 81 separate locations spread over a linear / ellipsoid area covering approximately 319,630m 2 (0.12miles 2 ) along Massachusetts Avenue at the Central Square area, another large, mixed commercial-residential area containing a public transport hub. Finally, Cluster C included 85 EMS runs (12.3%) from only eight separate locations, one of which is the single-location cluster previously identified, which accounted for 76 (89.4%) of the EMS runs in Cluster C. The table summarizes geospatial and run-related details about these three clusters.
Modeling PDN Sites
For clusters A and B, which involved EMS runs spread widely across multiple locations, we modeled the potential impact of sites located at cluster centroids. For the purposes of these models, we assumed the PDN sites to be accessible within 200 meters (m) in any direction of the cluster centroid. This number was chosen to match the epsilon neighborhood parameter of the density-based scan statistic, but it is an assumption about the distance a bystander would be willing to travel to access a PDN site. Figure 4 shows maps of these models.
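A simple way to quantify such a model is to compute the share of runs that fall within the assumed access radius of each candidate site; the sketch below (illustrative, with placeholder coordinates) does this for a single hypothetical site at a cluster centroid:

```python
import numpy as np

def coverage_fraction(runs_xy, site_xy, radius_m=200.0):
    """Fraction of runs within radius_m of a candidate PDN site (projected coordinates in meters)."""
    d = np.linalg.norm(np.asarray(runs_xy) - np.asarray(site_xy), axis=1)
    return float(np.mean(d <= radius_m))

# Placeholder data: run locations and a candidate site at their centroid
runs = np.random.default_rng(2).normal([1000, 1000], 150, (100, 2))
site = runs.mean(axis=0)
print(f"{coverage_fraction(runs, site):.0%} of runs within 200 m of the candidate site")
```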
DISCUSSION
We found that EMS runs involving opioid overdose exhibit geospatial clustering in Cambridge, MA, and identified three distinct geospatial clusters as potential targets for publicly deploying naloxone. To our knowledge, this is the first work to examine spatial clustering of opioid overdoses at the level of spatial granularity required to pinpoint potential sites of naloxone deployment. Our findings show two distinct types of spatial clusters, which may require different methods of naloxone deployment: clusters "A" and "B" are both centered at highly trafficked public areas in Cambridge, MA, while cluster "C" represents a spike of opioid-overdoses occurring at a single location.
The optimum strategy for delivering naloxone to Cluster C would likely be locating naloxone kits at or inside the identified emergency shelter. By comparison, there might be multiple strategies for PDN sites within Clusters A and B, the simplest of which would be to position PDN sites at cluster centroids we modeled here. Positioning PDN sites at the cluster centroid is an inherently naïve solution that does not account for geographic realities such as vehicle and pedestrian access to various locations within a cluster, public visibility, and accessibility at off hours. Further work would be needed to understand how to optimize PDN placement within a cluster accounting for these geographic realities, and different clusters likely have different optimal solutions. Still, using the simple models of naloxone deployment at cluster centroids, our results show that approximately 40% of the opioid-overdoses in this dataset would have occurred within 200m of a potential PDN site, suggesting that deploying naloxone at these sites would have a large impact on improving the availability of naloxone where it is needed most. Beyond simply providing targeting information for stationing naloxone kits, understanding local clustering patterns in opioid-related EMS runs could provide crucial information for a broader, multidisciplinary approach to a community's response to the opioid epidemic. Knowledge of cluster location and EMS transport patterns could be used to identify potential community partners, for example, large academic centers such as a large university located in cluster A, a second large university located close to Clusters B and C, coalitions of business owners such as those in Clusters A and B, or specific hospitals that handle large portions of EMS transport from particular clusters. Knowledge of cluster locations could also inform other efforts to respond to the opioid epidemic including potentially where to deploy ambulances or where to focus efforts on training first responders and the lay public on bystander naloxone delivery.
While these clusters represent effective potential PDN sites, future work combining these maps with spatial information about public naloxone use, deaths from opioids, or overdoses involving synthetic opioids such as fentanyl or carfentanil could further optimize PDN placement within Cambridge, MA. Similarly, it might be useful to consider other sites of public access to emergency equipment that already exist and compare clusters of opioid-related EMS runs to the locations of AEDs already deployed in Cambridge, MA. Future work is also needed to consider the details of how PDN sites would physically be constructed, how the naloxone would be stored, and how they could be made most easily accessible to the public. In general, geospatial analysis of a particular subset of EMS runs, such as opioid-related runs, could be a useful tool for focusing community engagement, education, and intervention.
LIMITATIONS
This analysis of geospatial clustering of opioid-related EMS runs is limited to the underlying data captured by Cambridge's EMS services, and therefore might not include all opioid overdoses in Cambridge, MA. While the total number
of opioid overdoses occurring in Cambridge, MA, during the study period is therefore likely greater than the 700 EMS runs we consider here, it is not possible to determine where nonrecorded overdoses occur geospatially. While inclusion of EMS runs into our data was determined by a trained epidemiologist independent to our study who examined all EMS data, we did not have access to outcome data including toxicological testing and hospital records. Thus, it was not possible to confirm overdose in each case with certainty. Additionally, the raw data for each run were not available so it was not possible to independently verify the epidemiologist's assessment. However, these cases do likely represent the patients who would receive naloxone in a PDN program. Collectively, these facts might introduce error into our clustering, which is inherently only as good as the data it is built on. A minor limitation is the inability to independently verify the age of the one patient transported by EMS with a reported age of 107; it is not possible from available data to determine if this patient actually was 107 years old or had a default date of birth of 01/01/1910 entered. Within each cluster, the percentage of EMS runs that we label as "potentially modifiable" is dependent on our assumption of 200m as a travel distance to a PDN site. As discussed above, optimal placement of PDN sites requires further study, and bystanders might be willing to travel more or less than this distance depending on factors such as the built environment, weather, and time of day. Additionally, the analysis we performed here is limited to a single city served by a single EMS service, and more work would be needed to extend the modeling solution developed here to other cities including cities served by multiple EMS services each with partial data. Specifically, larger cities or cities with unique geographic features such as rivers or geographic boundaries that partition the city would require more robust spatial analysis. Each city considering implementation of PDN sites would need to analyze city-specific overdose data to optimize PDN positioning.
Finally, it is not yet known if placing PDN sites would improve outcomes for cases of opioid overdose or would actually offer a quicker delivery of naloxone over EMS administration when studied in real life, and significant future work would be needed to investigate if this is the case. We believe that this analysis offers the theoretical and geospatial grounding for performing an "in vivo" PDN study and determining its utility as a response to the opioid epidemic.
CONCLUSION
Opioid overdoses show spatial clustering in this geospatial analysis of EMS runs in Cambridge, MA, with three distinct clusters of opioid overdoses identified. In general, public deployment of naloxone in areas of high opioid overdose could be a useful and important adjunct to other methods of naloxone delivery including bystander naloxone and first-responder naloxone. Identifying clusters of opioid-related EMS runs within a community is a key first step. | 2018-08-01T20:18:05.953Z | 2018-05-15T00:00:00.000 | {
"year": 2018,
"sha1": "d924ec6d76bfc0a1cc4ef6738adb0f2b1137d70b",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt86s056g0/qt86s056g0.pdf?t=pbal8m",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d924ec6d76bfc0a1cc4ef6738adb0f2b1137d70b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
158436273 | pes2o/s2orc | v3-fos-license | The Validity of Theoretical Model of Rhetorical Sensitivity Based on Communication Mentality Structure in Multiethnic Context : Study for Certified Teachers in District of Simalungun , North Sumatera
Rhetorical sensitivity provides a systematic explanation for the appearance of differences in observed communication behavior between individuals of a given cultural orientation (Ting-Toomey, 1988; Knutson et al, 2003). When activating rhetorical sensitivity, a communicator is perceived as having the capacity to adapt to three interests (self, other, social interaction) that arise simultaneously. These interests are taken into consideration in selecting and deciding which kinds of messages are considered effective for persuading certain people and appropriate for a certain context. It is plausible that, when operating his or her rhetorical sensitivity, an individual can modify his or her communication behavior apart from the expectations of cultural tendencies. A review of the literature suggests five groups of rhetorical sensitivity theories. The first group is called social demography-based rhetorical sensitivity theory. It is characterized by explaining the relationship between rhetorical sensitivity and social-demographic factors, such as age, education, sex, socio-economic status, ethnic identification, religious identification, party identification and academic competitiveness (Hart et al, 1975, 1980), organizational factors (Reynolds, 2009) and gender roles, such as androgyny (Ann et al, 1998). The second group is communication competence-based rhetorical sensitivity theory. This theory conceptualizes rhetorical sensitivity as an individual's competence or a trait-like quality of communication; rhetorical sensitivity exists as a cognitive competence (Palanca, 1982; Olson, 1985; Stacey, 1995; Knutson et al, 2003, 2006). Additionally, this theory concludes that the notion of rhetorical sensitivity diverges from other constructs, such as interaction involvement, interaction management, behavioral flexibility or social style (Reardon, 1987). The third group is cross-cultural based rhetorical sensitivity theory, which provides a systematic explanation of the relationship between cultural tendencies and rhetorical sensitivity (Ting-Toomey, 1988; Knutson et al, 2003, 2006). The fourth group is the theory of cultural influences on communication (Gudykunst & Kim, 2003).
This study isolated several notions related to the rhetorical sensitivity theories mentioned above. Firstly, these theories do not contribute to each other in extending, accumulating and organizing the body of knowledge on the rhetorical sensitivity concept; there is no theoretical growth by extension. Secondly, no theory intends to verify and elaborate another; there is no theoretical growth by intention. Meanwhile, from the communication competence-based rhetorical sensitivity theory it is identified that rhetorical sensitivity is an individual's cognitive competence or a trait-like quality of communication; it is an individual's internal (mental) attribute. Cross-cultural based rhetorical sensitivity theory promotes the idea that there is a relationship between cultural tendencies and rhetorical sensitivity.
Although the latter recognizes the relationship between cultural tendencies and rhetorical sensitivity, it does not cover how cultural tendencies work to influence rhetorical sensitivity as a mental process. Moreover, the scope of explanation of this group of theories does not address which other mental instruments interplay with cultural tendencies (individualism-collectivism) to simultaneously influence and create the variance in communication behavior. If such instruments exist, how does this other internal attribute operate when cultural tendencies operate to influence rhetorical sensitivity? All these questions are not answered by this theory.
The answers come from Gudykunst et al (1988, 2002, 2003), who explain how cultural tendency (individualism-collectivism) influences communication observed-behavior. These theories state that cultural tendency (individualism-collectivism) no longer influences communication alone; rather, it works (directly or indirectly) together with other constructs, such as importance of ingroup, allocentrism-idiocentrism, individual values and self-construal. However, the explanations of these theories of Gudykunst do not impose or state the term rhetorical sensitivity explicitly.
Realizing that there is a gap between the two theories, this study intends to integrate the cross-cultural based rhetorical sensitivity theory proposed by Ting-Toomey (1988) and Knutson et al (2003, 2006) with the theory of the influence of individualism-collectivism on communication initiated and developed by Gudykunst et al (1988, 2002, 2003). This research calls this integrated theory the rhetorical sensitivity theory based on communication mentality structure. This theory is useful for explaining the variance in individuals' consciously observed communication behavior as influenced by certain cultural tendencies over time.
This study focuses on certified teachers in the District of Simalungun in the Province of Sumatera Utara. As directed by Law No 14/2005, all teachers in Indonesia are expected to be professional and to conduct various professionalism values. They have to present four competencies in conducting their professional activities, such as teaching in the classroom, preparing course outlines, and evaluating. Certification is the way an individual is legitimated as a professional teacher. One of the competencies that is inserted into, has to be performed by, and lived with as a new standard for certified teachers is the ability to communicate empathetically, effectively and appropriately with colleagues, parents and students. This Teachers and Lecturers Law also contains a number of principles of the professional teacher that function as facilitators or antecedents of the communication competence pursued by the law. In summary, certified teachers are highly controlled in implementing their professional activities, in contrast to other professionals, such as doctors or lawyers, who are less controlled in their professional activities. To perform as a competent communicator as demanded by the Law will not be easy for certified teachers, since at the same time he or she is dictated to or tempted by ethnic expectations (as a consequence of being a member of a certain ethnic group) that may differ from those suggested by the Law.
This study aims to test how well the theoretical model of rhetorical sensitivity based on communication mentality structure fits the certified teachers in the District of Simalungun, Province of North Sumatera, who are highly controlled in executing their professional activities by demands that come from the Law and from ethnic expectations.
Theoretical Framework
The communication mentality structure based rhetorical sensitivity theory proposed by this study is generated from separate notions. Competence-based rhetorical sensitivity theory contributes to the proposed theory through the idea that rhetorical sensitivity is an individual's cognitive competence, a trait-like quality of communication, or an individual's internal (mental) attribute (Palanca, 1982; Stacey, 1995; Knutson et al, 2003, 2006). Cross-cultural based rhetorical sensitivity theory enriches the proposed theory by providing the empirical explanation that there is a relationship between cultural tendencies (orientations) and rhetorical sensitivity: individuals from individualist and collectivist cultures tend to have different scores on the degree of rhetorical sensitivity (Ting-Toomey, 1988; Knutson et al, 2003, 2006). Gudykunst, Ting-Toomey & Chua (1988), Gudykunst & Lee (2002) and Gudykunst & Lee (2003) provided systematic and comprehensive explanations of how cultural tendencies, cultural norms/rules and individual characteristics influence communication observed-variables, such as child-rearing practices, avoiding hurting the other's feelings, clarity in conversation, intimacy of communication, synchronization of communication or difficulty of communication. They called this theory the influence of individualism-collectivism on communication (Gudykunst & Kim, 2003: 63). Through careful examination, this study clarifies, extends and builds on the theories of Gudykunst et al. This study identified cultural tendencies as individualism-collectivism, and cultural norms/rules as importance of ingroup. Meanwhile, individual characteristics are represented by allocentrism-idiocentrism (as a representation of personality orientations), individual values and self-construals. There are two pathways by which individualism-collectivism influences communication.
Individualism-collectivism has the capacity to influence observed communication behavior directly. However, it can also influence observed communication behavior indirectly, mediated by importance of in-group, allocentrism-idiocentrism, individual values and self-construals.
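A hedged sketch of these two pathways, written as structural equations with illustrative coefficient symbols (the gammas, betas and zetas are assumptions, not estimates taken from the study), is:

\begin{align}
  M_j &= \beta_j\,\mathrm{IC} + \zeta_j, \qquad M_j \in \{\mathrm{IG},\ \mathrm{AI},\ \mathrm{IV},\ \mathrm{SC}\},\\
  \mathrm{RS} &= \gamma_0\,\mathrm{IC} + \textstyle\sum_j \gamma_j M_j + \zeta_{\mathrm{RS}},
\end{align}

where IC is individualism-collectivism, IG importance of in-group, AI allocentrism-idiocentrism, IV individual values, SC self-construal and RS rhetorical sensitivity; the coefficient gamma_0 carries the direct path, while the products beta_j * gamma_j carry the indirect, mediated paths.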
Method
This research follows the positivist (classical) paradigm, commonly called the quantitative approach. In intercultural study, this paradigm is associated with the etic approach, which is characterized by (1) viewing a given culture from outside its system, (2) a research structure organized by the researcher, and (3) research criteria assumed to be absolute and universal (Gudykunst, 2003: 66). All concepts building the theoretical model are measured at the interval level and operationalized in a self-report (verbal evaluation) format on a 7-point Likert-type scale. The rhetorical sensitivity scale applied in this research consists of selected items taken from THAIRRHETSEN 2 and comprises 9 indicators. Individualism-collectivism is measured by 8 indicators selected from the scale developed by Singelis, Triandis, Bhawuk, & Gelfand (1995). This research developed its own importance of in-group scale, reflected by 6 indicators. Individual values are reflected by 8 indicators taken selectively from Schwartz's Motivational Types of Values (1992). Allocentrism-idiocentrism is measured by 8 indicators carefully adapted from the scale of Kirschner (2009), called FAIS (The Family Allocentrism-Idiocentrism Scale). Self-construal consists of 8 indicators selected from Gudykunst et al. (1996). All indicators were translated from English into Indonesian.
Both the unit and the level of analysis are the individual. The target population of this study is certified teachers listed up to 2013 in the District of Simalungun, Province of Sumatera Utara. The sample size is 339 people, drawn by systematic random sampling with a margin of error of 4 %; a sampling frame was available. The research design is a cross-sectional survey in which data were collected through face-to-face interviews guided by a structured questionnaire. Data were analyzed by the structural equation modeling (SEM) method with LISREL 8.5 for Windows.
Research Findings and Discussion
The measurement fit model concerns the validity and reliability of the scales that build the research model, while the structural fit model concerns how well the hypothesized causal relationships fit the obtained data. Individualism-collectivism, allocentrism-idiocentrism, individual values and importance of in-group showed clear evidence of influencing rhetorical sensitivity directly, but self-construal did not. Eight of the ten hypothesized patterns of indirect influence of individualism-collectivism, allocentrism-idiocentrism, individual values, importance of in-group and self-construal on rhetorical sensitivity were supported; the two rejected hypotheses both involved self-construal. As a dependent variable, self-construal showed no clear evidence of being influenced by individual values; as an independent variable, self-construal did not influence allocentrism-idiocentrism. The structural fit results are summarized in the table below.
The overall fit model judges, in general terms, how well the two kinds of fit described above hold; it thus describes the degree of residual between the identified research model and the data obtained by the survey. This research finds that 4 of the 6 absolute fit measures are classified as good fit: the Goodness-of-Fit Index (GFI), Expected Cross-Validation Index (ECVI), Root Mean Square Residual (RMR) and Non-Centrality Parameter; the remaining ones, the chi-square statistic and the Root Mean Square Error of Approximation (RMSEA), are classified as poor fit. Meanwhile, all incremental fit measures, such as the Tucker-Lewis Index or Non-Normed Fit Index (TLI or NNFI), Normed Fit Index (NFI), Adjusted Goodness of Fit Index (AGFI), Relative Fit Index (RFI), Incremental Fit Index (IFI) and Comparative Fit Index (CFI), are classified as good fit. Another kind of overall fit measure is the parsimonious fit measures, which encompass the Parsimonious Goodness of Fit (PGFI), Normed Chi-square, Parsimonious Normed Fit Index (PNFI), Akaike Information Criterion (AIC) and Consistent Akaike Information Criterion (CAIC); this study revealed that all these measures are classified as good fit.
Discussion
The rhetorical sensitivity model based on communication mentality structure, the model tested in this research, is constructed from six concepts: rhetorical sensitivity, individualism-collectivism, allocentrism-idiocentrism, individual values, importance of in-group and self-construal, measured by 47 indicators in total. The indicators (scales) of individualism-collectivism, allocentrism-idiocentrism, individual values and self-construal are taken from universal, standard measures widely applied in influential research, mostly in Western cultures. The indicators of importance of in-group were developed by this research itself, but their conceptualization and operationalization also originate from a Western perspective.
Accordingly, one might expect these measures to work poorly in an Eastern culture. Nevertheless, the findings of this research indicate the contrary: all indicators of these concepts were valid and showed high internal consistency. Several arguments can explain this evidence. Under the demands of Law No. 14/2005, certified teachers have to enact professional values that may conflict with one another. They are urged to pursue a high degree of academic qualification on their own (a reflection of individualism), but at the same time they have to improve their competencies over time with the support of others (a reflection of collectivism). Certified teachers are obliged to communicate empathically, effectively and appropriately with colleagues, parents and students; at the same time, they are highly regulated in conducting their professional activities, such as classroom teaching, preparing course outlines and evaluating learning. Hence, they have to be able to reconcile their autonomous selves and their group-related selves (a representation of rhetorical sensitivity).
In their professional practice, certified teachers are expected to maintain a distinctive professional status (as a teacher) that sets them apart from others (a reflection of independent self-construal); nonetheless, they are also compelled to demonstrate a collegial status (a representation of interdependent self-construal). Likewise, certified teachers are required to cooperate with others in small group discussions at school or through membership in professional organizations (a reflection of importance of in-group). Professional values that certified teachers must also embody include personality competencies (a reflection of allocentrism-idiocentrism), as expected and stated explicitly in Regulation Number 16 of the Minister of National Education. Furthermore, certified teachers are driven to integrate their professional status with the way they conduct their professional activities; in other words, they have to apply their individual values effectively and appropriately. In summary, the validity of the measurement scales for certified teachers is supported by a context that makes adapting to professional values and principles the normal condition.
Another argument relates to multiethnic background. Certified teachers in the District of Simalungun, Province of North Sumatera, come from various ethnic groups, such as Batak Simalungun, Batak Toba, Batak Mandailing-Angkola, Batak Karo and the Javanese majority, and they therefore habitually interact in multiethnic situations. Each ethnic group maintains its own kinship system, clan functions, language, religion and customs. For individuals from certain groups, such as Batak Simalungun and Javanese, it is not appropriate to say something directly, whereas directness is acceptable for Batak Toba. For individuals from Batak Toba, Batak Mandailing-Angkola and Batak Karo, clans and homelands are the key identities that determine whether someone is perceived as in-group or out-group, whereas for Batak Simalungun, social identity is determined by how consistently customs are observed over time. Individuals from Batak Simalungun therefore adapt to other cultures more easily than the other Batak groups, although it is also plausible that this group excludes members who do not practice its original customs consistently and properly. The intercultural communication encounters routinely faced by certified teachers in the District of Simalungun, Province of North Sumatera, supported their capacity to respond to the measurement scales presented to them, even though those scales were adopted from Western values.
Rhetorical sensitivity mediates the influence of cultural tendency, norms/rules and individual characteristics on observed communication variables. Cultural tendency is represented by individualism-collectivism; norms/rules are reflected by importance of in-group; individual characteristics are captured by allocentrism-idiocentrism, individual values and self-construal. Individualism-collectivism, importance of in-group, allocentrism-idiocentrism and individual values had a significant influence on rhetorical sensitivity. This evidence demonstrates two things. First, these variables are strong enough to create variance in rhetorical sensitivity, and this in turn will create variance in observed communication variables; in practical terms, the professional values directed by Law No. 14/2005 can be adopted and can produce the communication competence the law demands. Second, the fact that self-construal showed no clear influence on rhetorical sensitivity suggests that it is hard for certified teachers to separate themselves from others or to act as unique individuals; there is a tendency for certified teachers to see themselves as an integral part of the group. The pressures arising from individualism-collectivism, importance of in-group, allocentrism-idiocentrism and individual values are not enough for self-construal to influence rhetorical sensitivity.
Conclusions
Returning to the research objective stated previously, the conclusions of this research generally confirm that the rhetorical sensitivity model based on communication mentality structure is valid for certified teachers in the District of Simalungun, Province of North Sumatera. By replicating and modifying scales that have been used and tested universally in influential studies, this research concludes that the measurement scales of rhetorical sensitivity, individualism-collectivism, allocentrism-idiocentrism, individual values, importance of in-group and self-construal that build the model are valid when tested on certified teachers, who are obliged to communicate empathically, effectively and appropriately with colleagues, parents and students while being highly regulated in their professional activities, such as classroom teaching, preparing course outlines and evaluating learning. Individualism-collectivism, allocentrism-idiocentrism, individual values and importance of in-group have been verified as variables that shape the degree of rhetorical sensitivity. The influence of rhetorical sensitivity on observed communication variables, however, still needs further clarification.
Since Hart and Burks introduced it in 1972, the notion of rhetorical sensitivity has gained extensive attention from intercultural and interpersonal researchers.
In total, 45 of the 47 items (indicators) representing the rhetorical sensitivity model based on communication mentality structure (this research model) were valid, as indicated by standardized factor loadings > 0.7 and t-values > 1.96. All 9 indicators measuring the rhetorical sensitivity concept were valid, with factor loadings (Lambda Y) in the interval 0.70-0.78 and t-values > 1.96. The Lambda X scores of the eight indicators representing individualism-collectivism ranged from 0.74 to 0.81, with t-values > 1.96. As a personality orientation, allocentrism-idiocentrism was reflected by 8 indicators, all of which were valid, with Lambda Y scores of 0.70-0.78 and t-values > 1.96. Similar results held for the other concepts, such as individual values and importance of in-group, with factor loadings between 0.72 and 0.81 and t-values > 1.96. However, only 2 of the 6 indicators measuring self-construal were identified as valid. This study also finds that all concepts building the measurement model showed a high degree of reliability, with composite reliability scores of 0.72-0.81 and variance extracted scores between 0.55 and 0.78.
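Composite reliability and variance extracted figures of this kind can be computed from standardized factor loadings with the usual Fornell-Larcker formulas. The short Python sketch below illustrates the calculation; the loadings used are hypothetical values in the reported 0.70-0.78 range, not the study's actual estimates.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    # with error variance 1 - loading^2 for standardized indicators
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Hypothetical loadings for a nine-indicator construct
loadings = [0.70, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78]
print(composite_reliability(loadings))       # about 0.92 for these inputs
print(average_variance_extracted(loadings))  # about 0.55 for these inputs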
Table 4. Absolute fit measures
Table 5. Incremental fit measures | 2019-01-29T17:52:54.821Z | 2018-01-03T00:00:00.000 | {
"year": 2018,
"sha1": "50d64763382aaf403c9a26a599d858148d22c6dd",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.7454/jki.v6i1.8909",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "50d64763382aaf403c9a26a599d858148d22c6dd",
"s2fieldsofstudy": [
"Education",
"Linguistics",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
51827450 | pes2o/s2orc | v3-fos-license | Characterization of Model-Based Uncertainties in Incompressible Turbulent Flows by Machine Learning
This work determines the inaccuracy of using Reynolds-averaged Navier-Stokes (RANS) turbulence models in transitional-to-turbulent flow regimes by predicting the model-based discrepancies between RANS and large eddy simulation (LES) models, and it then uses machine learning algorithms to characterize these discrepancies as a function of the mean flow properties of the RANS simulations. First, three-dimensional CFD simulations using the k-omega Shear Stress Transport (SST) and dynamic one-equation subgrid-scale models are conducted in a wall-bounded channel containing a cylinder for RANS and LES, respectively, to identify the turbulent kinetic energy discrepancy. Second, several flow features, such as the viscosity ratio, a wall-distance based Reynolds number, and vortex stretching, are calculated from the mean flow properties of RANS. These flow features are then regressed onto the discrepancy using a Random Forests regression algorithm. Finally, the discrepancy of the test flow is predicted using the trained algorithm. The results reveal that a significant discrepancy exists between the RANS and LES simulations and that the ML algorithm successfully predicts the increased model uncertainties caused by the use of the k-omega SST turbulence model for transitional fluid flows.
INTRODUCTION
Turbulent flows constitute most of the fluid flows encountered in engineering processes. Turbulence is characterized by velocity fluctuations in all directions and effectively has an infinite number of degrees of freedom. Flows can be classified as laminar, transitional, or fully developed turbulent. The flow in an empty bounded channel remains laminar until the bulk Reynolds number reaches 1860 [1]. Once the flow speed exceeds that level, transition to turbulence occurs, and beyond a critical Reynolds number, which depends on the flow type, the flow becomes fully developed turbulence. With the presence of an obstruction, for example a cylinder, the fluid separates from both sides of the cylinder and two shear layers develop; the formation of these layers gives rise to the von Karman vortex street. Along the vortex street, the transition from laminar to turbulent flow occurs at low Reynolds numbers due to the increase in flow instabilities [2]. Numerical simulation of such flows in a channel with a high blockage ratio is very challenging, since the interaction of the vortices generated by both the cylinder and the walls adds further complexity. Resolving all scales of eddies in this type of flow with the direct numerical simulation (DNS) method is infeasible because of the significant computational resources required. Although RANS turbulence models provide somewhat better predictions for the transitional-to-turbulent vortex region, they give misleading results due to the adverse pressure gradients, flow separation, and laminar behavior upstream and further downstream of the cylinder.
The recent development of machine learning (ML) based techniques offers a promising way to improve the capabilities of computational fluid dynamics (CFD) by integrating data-driven approaches into numerical prediction. In this context, studies investigating turbulent structures and improving current turbulence models with ML techniques began to appear more than a decade ago. Milano and Koumoutsakos [3] developed a neural network methodology to reconstruct the near-wall field in a turbulent flow and reported that nonlinear neural networks provided better predictions of the near-wall velocity fields. Marusic et al. [4] carried out research on real-time feature extraction of coherent spatiotemporal structures and successfully extended existing pattern discovery algorithms to establish relationships among higher-order clusters. These studies initiated a new research area; however, due to limitations in computational power and a lack of data analytics expertise, relatively few studies followed for some time.
Recently, the incorporation of ML techniques in fluid mechanics has gained momentum and has been applied to turbulence modeling in several different contexts. The generic steps of an ML algorithm are to fit a model on training observations and then make predictions on unseen testing observations using the fitted model. Yarlanki et al. [5] used an artificial neural network based ML method to optimize the model constants of the k-ε turbulence model, taking experimental temperature distributions in a data center as the ground-truth training data. They managed to lower the RMS error by 25% and the absolute average error by 35% compared with the errors obtained using the default k-ε model constants. Gorle et al. [6] proposed an approach for uncertainty quantification of turbulence mixing models: the range of perturbations was first obtained as a function of flow features from LES simulations, and the prediction algorithms were then applied to RANS results to assess whether the Boussinesq hypothesis is appropriate.
Tracey et al. [7] developed an ML algorithm to quantify the uncertainties of low-fidelity models by using information from related high-fidelity data sets; even with limited data, their method provided upper and lower bounds on RANS errors. Duraisamy et al. [8] inferred the functional form of deficiencies in known closure models by applying inverse problems to experimental data and developed an ML model to obtain more robust and accurate closures. Ling and Templeton [9] proposed a classification-type ML algorithm to predict the regions in a flow where high RANS uncertainty may occur; they demonstrated that their method enables the evaluation of RANS uncertainty with a data-driven approach and is capable of generalizing the markers to flows substantially different from those on which it was trained. Wang et al. [10] proposed a data-driven approach to predict Reynolds stress discrepancies in RANS using a regression-type ML technique based on Random Forests (RF) and showed that the ML algorithm provides noticeable improvements over the baseline RANS simulations at almost no additional computational cost. In this study, the discrepancy of the turbulent kinetic energy (TKE) between the k-ω SST RANS and dynamic one-equation LES turbulence models is determined by conducting three-dimensional CFD simulations in a channel containing a circular cylinder with a high blockage ratio. The simulations are carried out at Reynolds numbers between 500 and 1250 for training and testing of the RF. The learning algorithm is trained on the flow fields at all Reynolds numbers except the one used for testing, and the trained RF is then used to predict the discrepancy of the unseen testing flow. To illustrate the predictive capability of ML, contour plots and profiles obtained at different locations are presented. The results suggest that the ML algorithm successfully characterizes the model-based uncertainties in three-dimensional incompressible turbulent flows as a function of features derived from the mean flow properties of RANS.
MODEL DESCRIPTION
Governing Equations
In this study, steady and unsteady incompressible turbulent flow is considered for the RANS and LES simulations, respectively. The equations governing the flow field are the continuity equation (Eq. 1), conservation of momentum for steady flow (Eq. 2), and conservation of momentum for unsteady flow (Eq. 3). Herein, ρ is the density, p is the pressure, ν is the kinematic viscosity, ν_t is the turbulent kinematic viscosity, u is the velocity, and t is time.
In Eq. (3), ū is the filtered velocity obtained by filtering the Navier-Stokes equations and τ_ij is the subgrid-scale stress tensor.
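A hedged sketch of Eqs. (1)-(3), written in the standard incompressible form consistent with the variable definitions above (the exact notation used by the authors is an assumption), is:

\begin{align}
  \frac{\partial u_i}{\partial x_i} &= 0, \\
  \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j} &=
    -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
    + \frac{\partial}{\partial x_j}\!\left[(\nu + \nu_t)
      \left(\frac{\partial \bar{u}_i}{\partial x_j}
          + \frac{\partial \bar{u}_j}{\partial x_i}\right)\right], \\
  \frac{\partial \bar{u}_i}{\partial t}
    + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j} &=
    -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
    + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
    - \frac{\partial \tau_{ij}}{\partial x_j},
\end{align}

where the overbar denotes Reynolds averaging in the steady RANS momentum equation and spatial filtering in the unsteady LES momentum equation.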
To simulate the steady-state turbulent flow, the RANS equations are solved together with the two-equation eddy viscosity model proposed by Menter [11]. In Menter's k-ω SST model, transport equations are solved for k and ω (Eqs. 4-5). Here β*, σ_k, γ, σ_ω, and σ_ω2 are closure coefficients and F_1 is the blending function, which takes different values near the wall and in the bulk.
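A hedged sketch of Eqs. (4)-(5) in the standard Menter SST form, assumed here from the closure coefficients named in the text (the production term P_k and the coefficient β are additional assumed symbols), is:

\begin{align}
  \frac{\partial k}{\partial t} + u_j \frac{\partial k}{\partial x_j} &=
    P_k - \beta^{*} k \omega
    + \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma_k \nu_t)
      \frac{\partial k}{\partial x_j}\right], \\
  \frac{\partial \omega}{\partial t} + u_j \frac{\partial \omega}{\partial x_j} &=
    \frac{\gamma}{\nu_t} P_k - \beta \omega^{2}
    + \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma_\omega \nu_t)
      \frac{\partial \omega}{\partial x_j}\right]
    + 2\,(1 - F_1)\,\frac{\sigma_{\omega 2}}{\omega}
      \frac{\partial k}{\partial x_j}\frac{\partial \omega}{\partial x_j}.
\end{align}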
To obtain high-fidelity results, LES simulations of the unsteady flow field are conducted using the dynamic one-equation subgrid-scale (SGS) model presented by Kim and Menon [12]. The dynamic model improves on the limitation of the Smagorinsky model by adjusting the proportionality coefficient c_v in the subgrid eddy viscosity locally during the computation, instead of prescribing a global constant a priori. A transport equation is solved for the subgrid-scale k (Eq. 6). Herein, τ_ij is the SGS stress, which is defined as a function of the turbulent kinematic viscosity, ν_t = c_v k^0.5 ∆, where c_v is the model coefficient and ∆ is the grid-scale filter width. The three terms on the right-hand side of Eq. (6) represent the production rate, the dissipation rate, and the transport rate of k, respectively.
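A hedged sketch of Eq. (6) as a Kim-Menon-type subgrid kinetic-energy transport equation, with the production, dissipation, and transport terms named above (the dissipation coefficient C_ε is an assumed symbol), is:

\begin{equation}
  \frac{\partial k_{sgs}}{\partial t}
  + \frac{\partial (\bar{u}_j k_{sgs})}{\partial x_j}
  = -\tau_{ij}\frac{\partial \bar{u}_i}{\partial x_j}
    - C_{\varepsilon}\frac{k_{sgs}^{3/2}}{\Delta}
    + \frac{\partial}{\partial x_j}\!\left(\nu_t
      \frac{\partial k_{sgs}}{\partial x_j}\right).
\end{equation}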
Numerical Model
In the current study, a three-dimensional CFD model of the flow in a channel containing a circular cylinder was developed for steady and unsteady turbulent flow. A schematic of the flow domain, with the flow direction indicated, is shown in Figure 1. Spatially, the x, y, and z directions represent the normalized length, l = 20d, height, h = 2d, and width, w = 8d, of the channel, respectively. The normalization factor is the diameter of the cylinder, d, and the blockage ratio of the channel is 0.5. The Reynolds number, Re = U_a d/ν, is defined based on the averaged inlet velocity U_a and the cylinder diameter d.
Regarding the boundary conditions, no-slip and no-penetration conditions are applied to the top, bottom, and cylinder walls. To better capture three-dimensional flow effects, translational periodicity is imposed in the z direction. The TKE intensity is set to zero at the inlet, since there is no flow disturbance upstream of the cylinder that would cause a transition from laminar to fully developed turbulent flow. The dimensionless wall distance y+ is kept smaller than unity, so no wall functions are used along the walls for TKE.
For the discretization and solution of the governing equations (Eqs. 1-6), OpenFOAM-v1706, an open-source finite volume method (FVM) solver, is employed. Specifically, the simpleFoam and pimpleFoam algorithms are used for the RANS and LES simulations, respectively. To calculate and compare the discrepancies, time-averaged LES results are used.
Random Forests
Machine learning explores algorithms that can learn from data by fitting a model on training observations and then making predictions for unseen testing observations using the fitted model. A typical data matrix for building a learning algorithm consists of input and output features. The features can be characterized as either numerical or categorical. Examples of numerical features include the discrepancy of turbulent kinetic energy between RANS and LES simulations, turbulence intensity, and streamline curvature. Examples of categorical features include a point's uncertainty level in RANS results (low or high) and the violation of certain assumptions (yes or no). Problems with a categorical output feature are referred to as classification problems, whereas problems with a numerical output feature are referred to as regression problems. In this study, since our goal is to predict a numerical feature, we focus on the regression problem.
Over the past few decades, a variety of regression techniques have been proposed, such as k-nearest neighbors [13], ridge regression [14], the lasso [15], artificial neural networks [16], tree-based methods (e.g., regression trees, random forests, boosting) [17], and support vector regression [18]. Among these, we employ random forests [19] in our study, since they do not suffer from the curse of dimensionality and provide good predictions with physical interpretations, such as the importance of the features.
RF is an ensemble of decision trees. Decision trees (DT) divide the input feature space into K distinct and non-overlapping boxes, R_1, R_2, ..., R_K, with the goal of minimizing the variance within regions, i.e., minimizing $\sum_{k=1}^{K}\sum_{i \in R_k} (y_i - \hat{y}_{R_k})^2$, where K is the total number of regions, y_i is the output feature value of training observation i, and $\hat{y}_{R_k}$ is the mean value of the output feature of the training observations in region k. As for prediction, when a new observation enters the system, DT uses the mean value of the training observations in the region into which the new observation falls. Figure 2 shows a schematic representation of the partition and the corresponding decision tree. Decision trees are easy to interpret and implement and are computationally inexpensive. However, they are not robust to changes in the training data: small changes in the training data can lead to large differences in the fitted model and the corresponding predictions. Ensemble methods form a "strong learner" from a group of "weak learners." To this end, RF produces multiple decision trees to address the issue of high variance and then combines them to yield a prediction. First, the training data is split into subsets of observations; the observations and features are chosen randomly at each split. Then a separate decision tree is fitted to each subset. Randomness enhances tree diversity, avoids trees that are very similar to each other, and diminishes the tendency of the model to overfit. A given new observation is run down all the trees, and the average of the predictions of all trees is taken as a single consensus prediction for the new observation. Figure 3 shows a graphical illustration of random forests.
For the training and testing of the RF algorithm, RANS and LES simulations are carried out using OpenFOAM-v1706 with different flow rates in a three-dimensional transitional-to-turbulent regime, where the data obtained from the LES simulations are averaged over time. The mean flow features proposed in [9, 10] are obtained from the RANS simulations and used as inputs, and the discrepancy of turbulent kinetic energy is used as the output of the RF algorithm. We use scikit-learn, an open-source Python library for ML, for training and prediction with the RF algorithm.
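A minimal sketch of this training-and-prediction step with scikit-learn is given below; the file names, array shapes, and feature layout are illustrative assumptions rather than the authors' actual pipeline, while the 100 trees and the log-discrepancy target follow the description in the text.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative assumption: each row is one point of the RANS mesh and each of
# the 10 columns is one mean-flow feature (viscosity ratio, wall-distance based
# Reynolds number, vortex stretching, ...); y is the log TKE discrepancy.
X_train = np.load("features_train.npy")   # hypothetical file, shape (n_points, 10)
y_train = np.load("dlogk_train.npy")      # hypothetical file, shape (n_points,)
X_test = np.load("features_test.npy")     # features of the unseen test flow

# Normalize every input feature to [-1, 1] using the training-data range.
lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def scale(X):
    # min-max scaling to [-1, 1] based on the training data
    return 2.0 * (X - lo) / (hi - lo) - 1.0

# Random forest regression with 100 trees, as stated in the paper.
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(scale(X_train), y_train)

# Predicted log TKE discrepancy; adding it to log(k_RANS) gives the corrected field.
dlogk_pred = rf.predict(scale(X_test))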
RESULTS AND DISCUSSION
Within the range of Reynolds numbers considered in this study, the flow remains laminar in an empty channel. However, a submerged cylinder interrupting the oncoming stream induces momentum mixing through flow separation and vortex shedding, which produces flow instabilities leading to turbulent flow. Figure 4 illustrates an instantaneous contour and iso-surface of TKE for Re = 1250. As imposed by the inlet condition, the TKE intensity remains nearly zero in the upstream flow, and the transition from laminar to turbulent flow occurs downstream. The increased turbulent activity is transported several diameters away from the cylinder and decays towards the outlet. This behavior confirms that the flow tends to remain laminar unless it is perturbed. Additionally, the flow in the wake region becomes three-dimensional with strong secondary flows and oscillates in the manner described by the von Karman phenomenon, as illustrated in Figure 4b. An accurate numerical solution of this flow field therefore requires a turbulence model that can capture the physics both upstream and downstream of the cylinder, where different flow regimes are observed.
RF Prediction of TKE discrepancies at different Reynolds numbers
The RANS and LES simulations are carried out at four different Reynolds numbers, 500, 750, 1000, and 1250, to obtain the training and test flows. The mean flow features are calculated for each flow from the raw flow properties, such as mean pressure, velocity, turbulent kinematic viscosity, and wall distance, and these features are used as inputs to the learning algorithm. The input features are normalized to the range [−1, 1]. The log discrepancy of TKE, ∆log(k) = log(k)_LES − log(k)_RANS, is obtained for each flow as the output. The log discrepancy is used as the output of the learning algorithm in order to clearly illustrate the TKE discrepancy when the deviations between the RANS and LES simulations are large. Each flow has 2.8 million data points and 10 input features. For the training and testing of the RF, we created four different scenarios, which are shown in Table 1. For each scenario, we train and test our model on 8.4 and 2.8 million data points, respectively. We use B = 100 decision trees in the ensemble for each scenario; the number of trees was selected by a 10-fold cross-validation approach [17].
FIGURE 4. Instantaneous contour (a) and iso-surface (b) of turbulent kinetic energy obtained by LES at Re = 1250. The contour is rendered at the mid y-plane and the iso-surface level is set to 0.02.
Figure 6 shows contours of log TKE obtained by the k-ω SST RANS, the RF ML prediction, and the dynamic one-equation LES at Re = 500 and 750. The contours are depicted at the mid z-plane to illustrate the TKE distribution in the stream-wise and cross-wise directions. It is observed that RANS fails to capture the elevated TKE inside the boundary layers developing over the walls and the cylinder. The near-wall behavior of TKE is important for accurate predictions of the forces exerted on the cylinder or of the temperature distribution along the wall, as encountered in many applications. Moreover, the RANS simulations tend to under-predict and over-predict the TKE upstream and downstream of the cylinder, respectively, regardless of the flow speed. In particular, in the recirculation region right behind the cylinder, the severity of the over-prediction becomes more pronounced. It is apparent that the LES simulations offer a better solution near the wall and the cylinder in the presence of adverse pressure gradients and flow separation. Regarding the TKE intensity further downstream of the cylinder, the dissipation of TKE, as suggested by the time-averaged LES, shows a tendency of the unstable flow to transition back to laminar.
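A short sketch of how the four leave-one-Reynolds-number-out scenarios of Table 1 could be assembled is given below; the per-flow arrays and file names are hypothetical containers used for illustration, not the authors' data structures.

import numpy as np

# Hypothetical container: one (features, target) pair per Reynolds number,
# each with roughly 2.8 million points and 10 mean-flow features.
flows = {re: (np.load(f"X_Re{re}.npy"), np.load(f"y_Re{re}.npy"))
         for re in (500, 750, 1000, 1250)}

scenarios = []
for test_re in flows:                                   # one scenario per test flow
    train_re = [re for re in flows if re != test_re]
    X_train = np.vstack([flows[re][0] for re in train_re])        # ~8.4 M points
    y_train = np.concatenate([flows[re][1] for re in train_re])
    X_test, y_test = flows[test_re]                                # ~2.8 M points
    scenarios.append((test_re, X_train, y_train, X_test, y_test))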
The RF ML algorithm can address the discrepancies mentioned above and provides satisfactory predictions, as illustrated in Figure 6. It clearly differentiates the over-prediction in the wake region from the under-prediction near the walls and uses this to improve the results obtained by the low-fidelity RANS simulations. In particular, the prediction of the TKE distribution at Re = 500 is adequate but deviates more than the one at Re = 750, since the training is performed with flow speeds that are all greater than Re = 500. This indicates the need for training data that are representative enough to reveal the flow physics. Figure 7 illustrates contours of log TKE at Re = 1000 and 1250. Each subfigure presents the results at the mid z-plane for RANS, the RF predictions, and LES. As expected, the TKE intensity increases with increasing flow speed. However, as discussed for Figure 6, RANS unacceptably over-predicts the intensity downstream of the cylinder, particularly within the recirculating region. The discrepancy between the k-ω SST RANS and the dynamic one-equation LES model reaches as much as an order of magnitude. As with the shortcoming of RANS discussed for Figure 6, the under-prediction of the increase in TKE intensity in the boundary layer persists even at the higher Reynolds numbers presented in Figure 7.
The ML algorithm characterizes the discrepancies and improves on the RANS solutions at both flow speeds, as shown in Figure 7. Consistent with the issue of training data variability discussed for Figure 6, a similar deviation is observed in the prediction at Re = 1250; however, the deterioration of the prediction is more severe at Re = 500. To better assess the prediction performance at Re = 500 and Re = 1250, the minimum and maximum Reynolds numbers considered in this study, Figure 5 presents the distributions of the log discrepancy between the predicted and true TKE. Figure 5 quantitatively confirms the inference made above: the algorithm predicts Re = 1250 better than Re = 500. This result is counterintuitive, since the prediction in both cases is made by an algorithm trained on the remaining three flow speeds. It can therefore be inferred that the TKE intensity is not as significant at the lowest Reynolds number for the flow geometry considered in this study. In other words, the flow characteristics at Re = 500 differ from those of the other three flow speeds, and some characteristic features of the lowest flow speed are not learned well because that information is not carried by the training cases at Re = 750, 1000, and 1250. This is a clear indication of a change in the flow phenomena with respect to flow speed somewhere between Re = 500 and Re = 750. Beyond predicting the discrepancies, the ML algorithm thus also works as a descriptive analysis tool and reveals crucial information about the correlation between TKE intensity and flow speed in transitional flow regimes. Figure 8 illustrates the profiles of log TKE given by RANS, LES, and the RF predictions at all Reynolds numbers considered in the present study. The locations of the profiles in the stream-wise direction are shown in a geometry schematic at the bottom of Figure 8. Note that the profiles appear on the left side of the y-axis, since all TKE values are smaller than unity and therefore become negative once the logarithm is taken. As discussed for Figures 6 and 7, RANS gives a lower level of TKE upstream of the cylinder and a higher level downstream. The TKE dissipates towards the walls right around and behind the cylinder. In the LES, by contrast, the TKE intensity increases in the vicinity of the walls and interacts with the vortex shedding in the wake region, resulting in increased turbulent activity near the wall. This variation between the RANS and LES predictions shows that the two methods behave considerably differently in the transition from the laminar to the turbulent flow regime.
The profiles taken at different locations for each flow speed confirm that the RF algorithm accurately characterizes the discrepancies and corrects the low-fidelity solutions obtained by RANS. In particular, the learning algorithm differentiates the near-wall and bulk regions, so that it captures the phenomena spatially throughout the flow domain. Such precise predictions by the machine learning algorithm also confirm that the features vary enough to provide sufficient information about the flow dynamics.
CONCLUSION
This paper determines the turbulent kinetic energy discrepancies between RANS and LES and characterizes them, by means of machine learning algorithms, as a function of features obtained from the mean flow properties of RANS. To accomplish this, three-dimensional CFD simulations in a channel containing a cylinder are conducted for various Reynolds numbers chosen to keep the flow in transition from the laminar to the turbulent regime. The learning algorithm is then trained with different scenarios to predict the TKE discrepancy at different flow speeds.
It is found that (1) the RANS and LES predictions deviate significantly, especially in the vicinity of the walls and downstream of the cylinder; (2) ML successfully predicts the model-based uncertainties in the transition to the turbulent flow regime; (3) the prediction performance of the ML algorithm is slightly lower at the lowest Reynolds number, because the flow characteristics change at around Re = 500; and (4) the ML approach demonstrates the capability of revealing the correlation between TKE intensity and flow speed in transitional flow regimes.
Overall, the proposed study illustrates how turbulence models can benefit from ML algorithms and shows that the model-based uncertainties of low-fidelity approaches can be predicted without requiring a high-fidelity simulation of the flow of interest.
ACKNOWLEDGMENT
This work used the Extreme Science and Engineering Discovery Environment (XSEDE) for computational need, which is supported by National Science Foundation grant number TG-CTS170051. Specifically, it used the Bridges system at the Pittsburgh Supercomputing Center (PSC). The authors also would like to thank Dr. Alparslan Oztekin for assistance with technical discussion. | 2018-07-25T02:39:39.150Z | 2018-07-15T00:00:00.000 | {
"year": 2018,
"sha1": "7688453d773f1c25e8b759897e72eca913408393",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1807.05605",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7688453d773f1c25e8b759897e72eca913408393",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
235155852 | pes2o/s2orc | v3-fos-license | Equine fetal gender determination in mid- and advancedgestation by transabdominal approach – comparative study using 2D B-Mode ultrasound, Doppler sonography, 3D B-Mode and following tomographic ultrasound imaging
Gender determination of the equine fetus is of big interest for the owner of a mare, particularly when planning the breeding purposes or due to economic reasons. This study aims to evaluate the feasibility of transabdominal 3D tomographic ultrasound imaging (TUI) as an additional diagnostic tool for gender determination. Special reference should be given to the hands-on experience of the examiner in the non-invasive transabdominal approach. Pregnancy checks were performed on 669 mares on various Thoroughbred stud farms in the mid-west of Germany in 2015 and 2016. Fetal sex was determined by 2D B-Mode ultrasound, 2D Doppler sonography and 3D imaging. Fetal gender was determined in a serial examination; time for each mare was limited to a maximum of 3 minutes. Predicted gender in 2015 and 2016 was compared to the gender at birth to determine accuracy of the methods. Transabdominal sonography was performed on 386 pregnant mares in 2015 and 283 mares in 2016. The gender of the fetus could be determined in 297 (~77 %, year 2015) and 184 cases (~65 %, year 2016) respectively, within the three-minute examination time frame. 3D imaging was realized in 118 (~40 %, year 2015) and 94 cases (~51 % year 2016) respectively. Combined transabdominal examination with B-Mode, Doppler and 3D TUI analysis led to high accuracies of correct gender diagnosis (~94 % (2015) and ~96 % (2016)). 3D TUI imaging allowed a gender diagnosis in 18 cases where B-Mode and Doppler sonography showed doubtful results (2015). 3D TUI of the fetal gonads was shown to be useful to increase the accuracy of gender determination in mares during mid- and advanced-gestation
Introduction
Gender diagnosis in the equine fetus has become more relevant in recent years, mainly for economic reasons (sale of a pregnant mare carrying a fetus of the preferred sex, sex preferences in certain equestrian sport disciplines, and the future breeding purpose of a broodmare) (Aurich & Schneider 2014; Bucca 2011).
Fetal sexing in veterinary practice is mainly performed by ultrasound of the fetus, either transrectally at approximately 2 months of gestation or transabdominally as early as Day 90-100, with an optimal diagnostic window between Day 120 and 210 (Curran & Ginther 1989; Renaudin et al. 1997; Bucca 2005; Tönissen et al. 2015). The transrectal approach aims to identify the genital tubercle in early gestation, which is the forerunner of the penis and clitoris. Previously published studies indicate that the genital tubercle is best visualized between Day 59 and 68 of gestation. Major disadvantages of transrectal gender determination are the need for an experienced examiner and the restricted time frame for successful gender determination (Renaudin et al. 1997). Kotoyori et al. (2010; 2012) showed that 3D ultrasound allows identification of the genital tubercle between Day 63 and 76 as well as imaging of the external genital organs between Day 90 and 150 of gestation. The authors hypothesized that, in the near future, 3D transrectal ultrasound would be more objective than conventional 2D ultrasound for diagnosing fetal gender.
Sex determination in mid- and advanced gestation by identifying the fetal primary sex organs is the preferable technique for gender diagnosis because of its larger diagnostic window (Bucca 2005). The fetus is staged in posterior presentation, and a combined transrectal and transabdominal examination may be preferable to increase the accuracy of gender determination between Day 90 and 150. Reaching the fetal hindquarters transrectally is almost impossible after 150 days of gestation, so that a transabdominal approach to gender determination becomes mandatory. Transcutaneous transabdominal imaging is possible as early as Day 90-100, with an optimal diagnostic window of Day 120-210, and transabdominal ultrasound can be performed up to the end of pregnancy (Bucca 2005; Schönbom et al. 2015). The ultrasound beam is orientated towards the caudal fetal abdomen. The gonads can be visualized within the fetal caudal abdomen close to the kidney and the bladder. The structure of the gonads is evaluated: male gonads appear uniformly echodense, whereas a differentiation of cortex and medulla (echodense core, hyperechogenic ring) is possible in female gonads (Bucca 2011). In male gonads, homogeneous echogenicity and a typical hyperechogenic line (representing the mediastinum testis) along the longitudinal axis are characteristic (Renaudin et al. 1997). Typical B-Mode features can be visualized even better by using color Doppler ultrasonography (Tönissen et al. 2016). Doppler sonography shows an intense blood flow signal along the central line in the male fetus (representing blood flow in the mediastinum testis), while a strong circular signal in the outer layer is typical for females (Resende et al. 2014). In some male gonads, a blood flow signal on the lateral contour represents the pampiniform plexus. The equine fetal genitalia do not change in form from the fifth month of gestation onwards, although they change in size (Bucca 2011). Gender determination in advanced gestation can also be performed by identification of the external genitalia (Holder 2011; Bucca 2011).
Keywords: mare, fetal gender determination, transabdominal sonography, fetal gonads, 3D ultrasound
Citation: Pricking S., Spilker K., Martinsson G., Rau J., Tönißen A., Bollwein H., Sieme H. (2019) Equine fetal gender determination in mid- and advanced-gestation by transabdominal approach - comparative study using 2D B-Mode ultrasound, Doppler sonography, 3D B-Mode and following tomographic ultrasound imaging. Pferdeheilkunde 35, 11-19; DOI 10.21836/PEM0Pricking
Correspondence: Prof. Harald Sieme, Clinic for Horses, Unit for Reproductive Medicine, University of Veterinary Medicine Hannover, Bünteweg 15, 30559 Hannover, Germany; harald.sieme@tiho-hannover.de
Various applications for 3D ultrasound exist in human medicine. It has been used for biopsy and staging of rectal cancer, breast examination, and the diagnosis of fetal and gynecological anomalies (Hünerbein 2003; Correa et al. 2006; Gemmeke & Ruiter 2007). In veterinary medicine, 3D ultrasound has recently been established for pregnancy monitoring in dogs and cats, evaluation of gastric disorders in dogs, diagnosis of bladder disease, and kidney ultrasound (Hildebrandt et al. 2009; Pal et al. 2015; Dinesh et al. 2015; Dehmiwal et al. 2016). Tomographic ultrasound imaging (TUI) is the sonographic equivalent of computed tomography scanning. Static or dynamic 3D volume datasets are acquired with a 3D ultrasound device, providing an essentially unlimited number of 2D planes of a volume. TUI allows division of a 3D volume into an adjustable number of slices of a particular slice thickness. The process of division is similar to that of 3D computed tomography (CT) and magnetic resonance imaging (MRI) and allows simultaneous visualization of parallel planes on a single screen. TUI studies in human medicine have been performed, among others, for evaluation of the normal fetal heart as well as of fetal cardiac defects (Viñals et al. 2003; Jeanty et al. 2007; Ahmed 2014). Color Doppler can be used in TUI to visualize blood flow in several slices. Screening of TUI images can be performed offline on a workstation using external software programs (Nelson & Pretorius 1998).
Previously published studies of equine fetal gender determination by experienced veterinarians report varying accuracies (65 %, >90 %, 100 %), depending on the approach used (Mari et al. 2002; Resende et al. 2014; Tönissen 2016). This study aims to determine the accuracy of combined sonographic techniques, B-Mode, Doppler, and 3D TUI, for determining fetal sex in mares during mid- to advanced gestation by transabdominal scanning.
Animals
Transabdominal pregnancy diagnosis and gender determination were performed in fall 2015 and 2016 on 669 mares on various Thoroughbred stud farms in the mid-west of Germany during the annual late-pregnancy check-ups of the German Thoroughbred Owners' and Breeders' Association. All mares were Thoroughbreds and their age varied between 3 and 24 years (2015: 11.2 ± 4.1 years; 2016: 9.9 ± 4.6 years). Pregnancy had progressed to between 85 and 231 days of gestation (2015: 160 ± 33 days; 2016: 165 ± 31 days), calculated from breeding records.
Equipment, surroundings and preparation of the mare
Examination gloves, a disinfectant containing at least 70 % alcohol, a sponge for moistening the ventral abdomen, and a nose twitch were used as basic equipment. The mares were examined in their boxes while being held by an assistant using a halter. If a mare was uncooperative and did not tolerate the examination, a nose twitch was used for restraint. The examiner's right shoulder was positioned close to the left flank and the ultrasound probe was held in the right hand. The examiner faced towards the mare's head, allowing the veterinarian to remain in a comfortable position during the examination. The ultrasound device was held by an assistant, close enough to be operated but far enough away to be out of the mare's reach. Dark surroundings were ensured to provide optimal diagnostic conditions. The mares were examined in a routine screening program for the German Thoroughbred Association; therefore, the time for examination and pregnancy check/gender diagnosis was limited to a maximum of 3 minutes per mare. Sedation was not necessary in any case.
Examinations were carried out with a portable 3D ultrasound device. A Voluson I® (GE Healthcare®, Wauwatosa, WI) was used in this study. As probes, the RealTime-4D-convex-transducers RAB4-8-RS® or RAB2-8 (GE Healthcare®, Wauwatosa, WI) with 2-8 MHz and a penetration depth of 10-30 cm were used. Alcohol containing disinfectant was applied with a sponge to the ventral abdomen between udder and xiphoid, as well as to the probe. Clipping was not performed because examinations were carried out in late summer or fall, even though clipping may be necessary in winter to allow accurate imaging.
Scanning technique
Scanning started in front of the udder and continued along the ventral abdomen. If fetal parts were identified, the orientation of the fetus was established. Profound knowledge of fetal anatomy, as well as of its appearance on ultrasound, was needed.
Criteria for gender determination in advanced gestation
Gender determination in our study was based on fetal gonadal structure, the specific pattern of blood flow in the gonad, and the external genitalia. As an additional feature, 3D imaging of the fetal gonad was performed and analyzed with software. Gender diagnoses performed in 2015 and 2016 were compared with the gender at birth in 2016 and 2017, respectively, and the accuracy of correct gender determination was calculated.
A fetus was diagnosed as male if the gonads were located close to the bladder and appeared longitudinally oval in shape and homogeneously echodense in B-Mode scanning. In some cases a hyperechogenic line (representing the mediastinum testis) along the longitudinal axis could be displayed. Doppler sonography showed intense blood flow along the central line, representing the blood flow in the mediastinum testis. In some male gonads a blood flow signal on the lateral contour represented the pampiniform plexus (figure 2). A fetus was diagnosed as female if the gonads were located close to the kidneys and if a differentiation between cortex and medulla (echodense core, hyperechogenic ring) was visible in B-Mode ultrasound. A strong circular Doppler signal was visible in the outer layer, representing blood flow in the cortex (figure 4). In some female gonads a strong blood flow signal on the edge of the gonad represented vascularization by the ovarian artery. B-Mode videos of the fetus and Doppler sonography of the gonads were recorded to allow further analysis. A 3D picture was taken in 3D static mode and volume data were obtained. To this end, the fetal gonads were defined as regions of interest and the pictures were stored to the internal storage. Obtaining good-quality pictures for further analysis was sometimes time consuming, and the patience of the mare was necessary: the mare had to stand still for a few seconds to guarantee usable pictures, and fetal or maternal movements caused blurred images that were not suitable for further analysis. Several pictures were stored for each gonad to allow later analysis. Evaluation of the pictures was made either with the on-board software or with the 4D-View® software (GE Medical Systems Kretztechnik, Zipf, Austria) on an external workstation. The 3D image volumes were analyzed using Tomographic Ultrasound Imaging (TUI) (4D View® Version 10.x, GE Healthcare, Austria). The gonads were cut into slices of defined thickness, and the slices were evaluated with regard to their anatomical structure.
Transabdominal gender diagnosis was performed on 386 pregnant mares in September 2015. The gender of the fetus could be determined in 297 cases (~77 %) within the three-minute examination time frame (second + third month of gestation: 51 %; fourth month of gestation: 71 %; fifth month of gestation: 88 %; sixth month of gestation: 80 %; seventh month of gestation: 67 %). 3D imaging was possible in 118 cases (~40 %). The 2D B-Mode and Doppler videos were analyzed according to the criteria stated in the Materials and Methods. The obtained 3D volume images were analyzed with TUI, and the equine fetal gonads were evaluated for their structure, form, and location in the fetal abdominal cavity. Male fetal gonads showed a homogeneous echostructure and a longitudinally oval form, and in some cases the pampiniform plexus could be visualized in 3D TUI (figures 7 + 8). A hypoechogenic line representing the testicular vein could be seen centrally in some cases, if the ultrasound beam hit the longitudinal plane (figure 7).
Female gonads were kidney-shaped in transverse section, and the longitudinal section showed a longitudinally oval form (figures 5 + 6). The echostructure of the female gonads showed a bizoned echogenicity, representing the cortex and medulla of the fetal gonads. In some cases the ovarian artery could be visualized (figure 5).
TUI (tomographic ultrasound imaging) cross section of a female gonad at 183 days of gestation. The 3D volume is cut into 15 slices of 1.7 mm thickness. The process of division is displayed in the sonographic picture at the top left. Five sections are displayed exemplarily (-7, -6, -3, -2, 2). The ovarian artery can be seen in slices -7 and -6 as hypoechogenic branches (red arrows). The gonad is kidney-shaped and shows a bizoned echogenicity.
Gender predictions were compared with foaling data obtained in summer 2016 to determine the accuracy of gender determination. The postnatal gender of the foal was unknown in 28 cases due to missing feedback from the mares' owners or abortion prior to full gestation, leaving 269 evaluable cases. The gender of the foal was diagnosed correctly in 254 cases (~94 %) in 2015 (second + third month of gestation: 93 %; fourth month of gestation: 96 %; fifth month of gestation: 93 %; sixth month of gestation: 97 %; seventh month of gestation: 100 %).
The transabdominal approach was performed on 283 mares in autumn 2016. Gender determination was possible in 184 cases (~65 %); no gender could be determined in 99 cases. Gender determination succeeded at the highest rates in the fifth month of gestation (84 %), followed by the sixth (75 %) and seventh months (64 %). The lowest rates were found in the second and third months of gestation (11 %) and in the fourth month of gestation (50 %). 3D imaging was possible in 94 cases (~51 %).
Gender predictions of the examination year 2016 were also compared to foaling data obtained in summer 2017. The postnatal gender of the foal was unknown in 28 cases due to missing feedback from mares' owners or abortion prior to full gestation, which resulted in 156 evaluable cases. The gender of the foal was diagnosed correctly in 149 cases (~96 %) in 2016 (second + third month of gestation: 66 %, fourth month of gestation: 96 %; fifth month of gestation: 98 %; sixth month of gestation: 95 %; seventh month of gestation: 100 %).
[Figure legend: TUI cross section of a male gonad; four example sections are displayed (-4, -2, 1, 3). The gonad is of homogeneous echotexture; in slice 3 a hypoechogenic area visible in the center of the gonad represents a vessel in cross section (red arrow).]
[Figure legend: TUI (tomographic ultrasound imaging) longitudinal section of a female gonad at 163 days of gestation. The 3D volume is cut into 17 slices of 1.9 mm thickness; five example sections are displayed. The gonad shows a bizoned echogenicity, representing cortex and medulla; the cortex of the gonad is marked with red dots in slice 3.]
Capturing 3D image volumes of the fetal gonads takes a few seconds. Thus, repeated recordings of 3D volumes may be required to obtain images that are suitable for further analysis. Movements of either the mare or the fetus may complicate image acquisition. The duration needed for examination decreased with the experience of the examiner in handling the 3D ultrasound device, as well as with increasing experience in identifying fetal structures. Reviewing 3D data and performing tomographic ultrasound analysis can take up to 5 minutes for each obtained 3D picture. Reviewing includes identifying the best planes for analysis of the fetal gonad as well as modifying image parameters to obtain high quality images. Figures 1 and 3 show image sections derived from 2D B-Mode ultrasound videos. Figures 2 and 4 show Doppler ultrasound performed on the equine fetal gonad.
Discussion
Gender determination of the equine fetus has become of greater interest for the owners of a mare, particularly in regard to breeding purposes. Transabdominal pregnancy diagnosis and gender determination are less invasive than the transrectal approach and can be performed in a substantially larger time frame (Bucca 2011, Schönbom et al. 2015, Tönissen et al. 2015). The well-being of the fetus can be monitored almost throughout the complete advanced gestation by transabdominal ultrasound. Factors indicating fetal well-being include heart rate, movement, aortic diameter and fetal size. Fetal fluids can be visualized and evaluated for their consistency and echogenicity, and the combined thickness of uterus and placenta (CTUP) can be measured (Reef et al. 1995). Our study indicates that transabdominal pregnancy diagnosis can be performed as early as around day 80 of gestation and can then be performed until term. Ultrasound devices and probes that guarantee large penetration depth may be helpful in mid gestation due to fetal positioning in the pelvic cavity. Transabdominal gender determination has recently been the focus of studies for practicability and repeatability (Tönissen et al. 2016). Videotapes of transabdominal pregnancy check-ups were shown to various vets … (2016). The highest rates of gender determination were also shown in the fourth to sixth month of gestation in our study (50-88 %). Lower rates were shown in the second to third as well as in the seventh month of gestation (11-67 %). Reasons for the lower rate in the second to third month include the small size and the intrapelvic position of the fetus. Scanning the fetus, positioned high in the abdominal cavity close to the pelvis, is often challenging. Ultrasound scanning of mares in early advanced pregnancy includes positioning the probe in the region of the knee fold or close to the udder. Mares tend to show defensive movement when being scanned in those regions, complicating suitable image acquisition for further analysis.
[Figure legend: Fig. 8 TUI (tomographic ultrasound imaging) cross section of a male gonad at 193 days of gestation. The 3D volume is cut into 15 slices of 2.0 mm thickness; four example sections are displayed (-2, 1, 2, 3). The gonad is of homogeneous echostructure and of oval shape. The pampiniform plexus can be seen in all slices dorsal to the gonad as hypoechogenic branches; branches of the pampiniform plexus are marked with red dots in slice 3.]
Reasons for the low rates of gender determination in advanced pregnancy (seventh month) include fetal positioning and acoustic shadows that may impair visualization. Fetal reproductive organs do not change in shape from 150 days to term, though their size changes and the visibility of the structures varies. The determination of the external genitalia becomes more difficult in advanced gestation because of fetal growth and acoustic shadows caused by bones or the umbilical cord. With increasing size of the gonads, their visualization becomes easier and the characteristic vascularization can be shown in Doppler sonography.
For the 2015 examination period, 3D TUI confirmed the gender diagnosis made by 2D B-Mode ultrasound and Doppler sonography in 100 cases. 3D TUI imaging allowed a gender diagnosis in 18 cases in which B-Mode and Doppler measurement showed no clear results. TUI of the gonads helps to analyze the internal structures of the gonad and to identify anatomical features. Female gonads were easier to identify due to their bizoned echogenicity. One always has to consider that the data were obtained in a large screening and the time for examination was limited to three minutes. To avoid motion artifacts in generated 3D volumes because of fetal or maternal movement, repeated recordings of 3D images are required. This is a clear disadvantage of 3D imaging: it requires capturing a perfectly focused fetal structure in a single view to complete the examination in a short interval (Kotoyori et al. 2010; Kotoyori et al. 2012). In … et al. (1999), the time for examination varied between 4.3 ± 1.2 and 5.5 ± 3.0 minutes for the transrectal approach of gender determination between day 50 and 90 of gestation. Curran and Ginther (1991) showed that the time needed for accurate gender determination varied between 16 seconds and 3 minutes 55 seconds for transrectal ultrasound identification of the genital tubercle between 50 and 99 days of gestation. Livini (2010) showed that the average time for gender determination decreases with increasing experience of the examiner; with increasing experience the time for examination dropped from around one minute to 30-45 seconds for the transrectal approach (Livini 2010). Examinations in our study were limited to a maximum of 3 minutes. Within these 3 minutes, we performed 2D B-Mode ultrasound to identify fetal structures and Doppler velocity measurements to show blood flow within the gonad, and obtained 3D image volumes of the fetal reproductive organs. We did not measure the time needed for each mare, but the time for examination decreased with increasing experience over the two years. As shown in our study, accurate gender determination can be performed within a short time frame, even though additional analysis on external workstations can take additional time.
The large time frame for transabdominal gender determination allows repeated examinations if the fetal gonads cannot be displayed at the first attempt. Storing videos to the internal storage helps to review them and may help to increase the accuracy of gender determination. If gender determination cannot be accomplished because of high fetal activity or fetal position, examinations can be postponed thanks to the large examination window.
Transabdominal examination shows high acceptance in mares, with lower stress levels (Schönbom et al. 2015). It bears no risk of rectal perforation, and sedation is not necessary in most cases because mares tend to accept the examination after a couple of minutes. Movement of mare and fetus can easily be compensated, and positioning the veterinarian at the left flank guarantees a safe working position. The transabdominal approach allows gender diagnosis even in very small horses, where the transrectal approach to gender determination is limited by the size of the animal.
Conclusion
In conclusion, to our knowledge, this is the first transabdominal 3D TUI analysis for equine fetal gender determination. 3D TUI in the transabdominal approach was shown to be a useful additional criterion for detailed structural analysis of the fetal gonad during advanced gestation. TUI allows the examiner to evaluate the gonad's inner structure similarly to computed tomography or magnetic resonance imaging. High rates of correct gender determination can be achieved with an experienced examiner and a combined analysis of B-Mode videos, Doppler sonography and 3D images. 3D TUI may allow gender determination even in cases where B-Mode and Doppler imaging show no clear results, which may furthermore increase the correctness of diagnosis. The highest rates of possible gender determination were shown in the fourth to sixth month of gestation, indicating the best time frame for transabdominal ultrasound imaging. We assume that obtaining 3D volumes may be challenging in mares that do not tolerate the examination and if fetal activity and movements impair image acquisition. Nevertheless, examinations can be postponed to a later date to perform 3D volume scanning again. | 2019-05-07T13:29:01.645Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "5da2605f4fde89dc7de0d42d4172dd9c78e1bb6e",
"oa_license": null,
"oa_url": "https://www.hippiatrika.com/download.htm?id=20190102",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f459c7a6ad19e3b77123afa1df78c135f145dcaf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52298107 | pes2o/s2orc | v3-fos-license | Nursing students: A vulnerable health-care worker for needlesticks injuries in teaching hospitals
Background: Occupational exposure to bloodborne pathogens is a significant risk to health-care workers. In any teaching hospital, apart from regular health-care workers and employees, there is a significant population of students and trainees. It is important to identify the health-care workers in the hospital who have the greatest chance of exposure to these pathogens. The aim of this study is to determine the job group most susceptible to needlestick injury (NSI) reported in a newly established teaching medical institute in the Western part of Rajasthan, India. Methods: This is a retrospective analysis of data on NSIs that occurred during September 2014 to January 2017. Results: Sixty-three NSIs were reported during the study. Nursing students were the most vulnerable group, reporting the maximum number of NSIs. Among the nursing students, 72% were completely vaccinated against hepatitis B virus. Conclusions: Nursing students are at utmost risk for NSIs, the prevention of which requires regular training and education.
Introduction
Percutaneous injuries among health-care workers (HCWs) caused by needles and sharps pose a significant risk of occupational transmission of bloodborne pathogens such as human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV). According to the Centers for Disease Control and Prevention, approximately 384,000 percutaneous injuries occur annually in US hospitals, with about 236,000 of these resulting from needlesticks involving hollow-bore needles. [1] It has been estimated that every year the proportion of HCWs exposed to bloodborne pathogens is 2.6% for HCV, 5.9% for HBV, and 0.5% for HIV, corresponding to about 16,000 HCV infections and 66,000 HBV infections in HCWs worldwide. [2] In developing regions, 40%-65% of HBV and HCV infections in HCWs were attributable to percutaneous occupational exposure. [2] Nurses are an important bridge between doctors and patients. They sustain most sharp-related injuries in any hospital, as observed in a multicentric study carried out in India. [3] Apart from HCWs, there are many trainees who receive their training in teaching institutes and medical colleges. In India, the number of nursing schools and colleges has increased by more than 300% in the last 15 years, resulting in the annual capacity to train more than 216,000 nurses. [4] Nursing institutes offering the Bachelor of Science in Nursing have increased to more than 1500 in our country. More than 76,000 nursing students pass out from these institutes every year. Apart from this, more than 100,000 students pass out from Auxiliary Nursing and Midwifery and General Nursing and Midwifery courses in India annually. [4,5] The students are trained to provide care to people of all ages. The broad fundamental principles of nursing care are based on sound knowledge and satisfactory levels of skill, which include standard precautions, hospital infection control (HIC) practices, safe injection practices, and others. [6] In a questionnaire-based study carried out in Brazil, it was seen that 18.1% of nursing students had suffered a sharp-related injury. [7] Needlestick injury (NSI) among this group is associated not only with physical issues but also with psychological issues. [8] The other important group that is susceptible to sharp injuries is housekeeping staff. Although they are not involved in handling sharps directly, they come in direct contact with patients. However, they mostly suffer injuries due to failure to dispose of sharps properly. They also bear a significant burden of sharp-related blood and body fluid exposures, as was observed in a prospective study carried out in India. [9] This study was undertaken with an aim to determine the job group most susceptible to NSIs reported in a newly established teaching medical institute in the Western part of Rajasthan, India.
Materials and Methods
This is a retrospective study covering the period September 2014 to January 2017. Our institute is a newly established 250-bed tertiary care hospital and medical institute in Western Rajasthan, where 75 students are admitted to the nursing college and 100 students take admission into the MBBS course annually.
As a routine protocol of the Health Informatics Center unit of our institute, all relevant information for an NSI is collected in a pro forma, which has details of (i) the source, including his/her diagnosis and hepatitis B surface antigen (HBsAg), HIV, and HCV antibody status, (ii) the health worker's designation and work experience, previous history of NSIs or blood transfusions, vaccination status for hepatitis-B including anti-HBsAg titer if done, and HIV, HBsAg, and anti-HCV antibody status, and (iii) the time of reporting, duration since injury, time and place of the incident, mode of injury, description of the injury (mucosal exposure, spill on preexisting cut, superficial percutaneous, or deep percutaneous), type of first aid given, and whether universal precautions were followed by the HCW. All HCWs are managed as per current National AIDS Control Organization (NACO) guidelines. [10] All the NSI cases which were reported to the department within the study period and had complete NSI records were included in the study. Follow-up cases and cases with incomplete records were excluded from the study. The group with the highest number of NSIs was studied further regarding the type of injury and hepatitis-B vaccination status.
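The pro forma described above is essentially a structured record with three groups of fields covering the source, the exposed health-care worker, and the incident. The sketch below is only a hypothetical illustration of such a record; the field names mirror the items listed in the text but are assumptions, not the institute's actual form.

```python
# Hypothetical sketch of the NSI pro forma as a structured record.
# Field names are illustrative only; they follow the groups of information
# described in the text, not the institute's actual form.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SourceDetails:
    diagnosis: Optional[str] = None
    hbsag_positive: Optional[bool] = None
    hiv_positive: Optional[bool] = None
    hcv_antibody_positive: Optional[bool] = None

@dataclass
class WorkerDetails:
    designation: str = ""
    work_experience_months: Optional[int] = None
    previous_nsi_or_transfusion: Optional[bool] = None
    hepatitis_b_vaccination_complete: Optional[bool] = None
    anti_hbs_titer_miu_ml: Optional[float] = None

@dataclass
class IncidentDetails:
    time_of_reporting: str = ""
    hours_since_injury: Optional[float] = None
    time_and_place: str = ""
    mode_of_injury: str = ""
    description: str = ""  # e.g. mucosal exposure, superficial or deep percutaneous
    first_aid_given: str = ""
    universal_precautions_followed: Optional[bool] = None

@dataclass
class NsiRecord:
    source: SourceDetails = field(default_factory=SourceDetails)
    worker: WorkerDetails = field(default_factory=WorkerDetails)
    incident: IncidentDetails = field(default_factory=IncidentDetails)
```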
Collected data were entered and analyzed using Microsoft Excel, and the results were expressed as percentages in tabular form.
Results
A total of 63 cases of needlestick injury (NSI) were reported to the HIC unit during the study. Nursing students accounted for almost one-third of the total reported cases, followed by nursing staff, housekeeping staff, senior residents, junior residents, and others [Table 1]. The most common type of injury among nursing students was deep percutaneous, followed by superficial percutaneous [Table 2], and most of the injuries were sustained in the hospital during duty hours. Regarding the time between exposure and seeking postexposure prophylaxis, more than 50% of nursing students reported within the first 2 h of injury [Table 3]. Of the exposed nursing students, 18 (72%) had a history of complete hepatitis B vaccination at the time of exposure, and one student was completely unvaccinated at the time of exposure [Table 4]. Of the total 25 events of NSI, the source was known in 23 cases, whereas in 2 cases the source of injury was not traceable as the injury occurred from abandoned sharps. Among the cases with a known source, none were found to be suffering from HIV, hepatitis-B, or hepatitis-C on testing.
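As a quick check of the reported proportion, 18 of the 25 nursing-student exposures correspond to the 72% hepatitis B vaccination coverage stated above; the one-line calculation below is purely illustrative.

```python
# Quick check of the reported vaccination coverage among exposed nursing students.
vaccinated, exposed = 18, 25
print(f"Complete hepatitis B vaccination: {vaccinated / exposed:.0%}")  # 72%
```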
Discussion
Every year, many NSIs are reported in hospitals worldwide and are associated with exposure to bloodborne pathogens such as HIV, HCV, and HBV.
In the present study, nursing students were the most common HCWs who sustained NSIs, followed by housekeeping staff and nursing staff. In an Australian questionnaire-based study carried out in 2005, it was observed that 38 nursing students (13.9%) reported a needlestick or sharps injury over the 12-month study period, and 39.5% of NSIs were not reported. [11] In another questionnaire-based study done among nursing students in India, Prasuna et al. reported that 39.76% of nursing students had an NSI during their training program; moreover, most NSIs (57.6%) occurred during the first year of the course. [12] The high rate of injury in nursing students may be due to the large number of exposures during the procedures they conduct as part of their learning, combined with their inexperience. In a retrospective study done at a tertiary care medical institute in India, it was seen that HCWs with work experience of less than 1 year accounted for about 50% of reported injuries. [13] Clarke et al. in their study found that the probability of ever having an NSI was inversely related to the years of experience. [14] The lower number of injuries among senior doctors in the present study is surprising; however, it could be due to underreporting of NSIs, as in most surgical cases the patient's HIV, HBsAg, and HCV tests are done before the surgical procedure. Elder and Paterson in their review concluded that the degree of underreporting of sharps injuries may be as much as 10-fold when recorded through standard reporting systems. [15] In a questionnaire-based study done in Egypt, it was seen that 74.7% of HCWs did not report the injury to employee health services, and physicians were less likely to report an NSI as compared to other health-care professionals. [16] In the present study, the most common type of NSI in the nursing population was deep percutaneous, followed by superficial percutaneous. Deep percutaneous injuries were mostly due to wide hollow-bore needles, usually at the time of collecting blood by venipuncture, and were categorized as severe exposures.
In case of an HIV-positive source, such an incident would necessitate starting a three-drug regimen as per the NACO guidelines. [10] Deep injury is one of the important factors that increase the likelihood of transmission of HIV after percutaneous injury because of the high viral load in blood and direct access to a vein or artery. [13,17] It has been seen in animal studies that the effectiveness of postexposure prophylaxis following NSI is time dependent. [18,19] Most guidelines recommend that postexposure prophylaxis be started within the first 72 h, as it is effective only if given during this period. [20] In our study, all the nursing students reported within the first 24 h of exposure and more than half of them reported within the first 2 h. This shows considerable awareness among this group of HCWs. In addition, there is a need to further reinforce this awareness among all HCWs.
As far as the vaccination status of nursing students is concerned, 72% were found to be completely vaccinated for hepatitis B. In a study carried out during 2015 in Agartala, it was seen that 80% of nursing students were vaccinated against HBV. [21] Similarly, in a multicentric study carried out in Turkey, 85% of students were completely vaccinated against hepatitis-B, although the results varied among students of various nursing schools. [22] Vaccination plays an indispensable role in preventing hepatitis-B infection through the formation of anti-HBsAg antibodies. However, even after complete vaccination, seroconversion will not occur in almost 5% of vaccinated individuals (nonresponders), rendering them susceptible to HBV transmission following any NSI. [23] A titer of anti-HBsAg antibody >10 mIU/ml is considered protective in vaccinated individuals if they sustain an injury. All HCWs who are at risk for occupational blood or body fluid exposure should undergo compulsory vaccination against hepatitis-B and get their anti-HBsAg antibody level tested 1-2 months after receipt of the last dose of the vaccine series. In case of injury among nonresponders (anti-HBs <10 mIU/ml), additional booster doses are required and measures should be taken as per recent guidelines. [24] Previous studies on HCWs published from various parts of the world have reported 12%-21% nonresponders among total HBV vaccine recipients. [25] India is considered to have an intermediate level of endemicity with regard to HBV. The point prevalence of HBV is 3.7%, which includes over 40 million HBV carriers. HBV is the second most common cause of acute viral hepatitis after HEV in India. [26] Unvaccinated and incompletely immunized students are at higher risk of getting hepatitis-B infection in case of NSI. [26] In addition, the causes of underreporting of NSIs require further evaluation, which is the need of the hour.
Conclusion
HCWs are always at high risk of sustaining NSIs, and nursing students are the most vulnerable group among all, requiring extra attention. There should be regular training and education of nursing students regarding the prevention and treatment of NSIs, and it should be ensured that proper standard precautions are followed at all levels. In a tertiary teaching hospital, mandatory provisions for complete vaccination against hepatitis-B should be made for all medical/nursing students and all HCWs including senior faculty and residents, followed by detection of anti-HBsAg antibody titers. Since many NSI cases go unreported, regular counseling and teaching should be carried out so that early and prompt postexposure prophylaxis measures can be undertaken in NSI cases, wherever required.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-09-24T14:22:43.432Z | 2018-07-01T00:00:00.000 | {
"year": 2018,
"sha1": "92ffc201000b9e34c2c4511aa5c490992c782d45",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_265_17",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "92ffc201000b9e34c2c4511aa5c490992c782d45",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235076930 | pes2o/s2orc | v3-fos-license | Management practices in community-based HIV prevention organizations in Nigeria
Background Nigeria has one of the largest Human Immunodeficiency Virus (HIV) epidemics in the world. Addressing the epidemic of HIV in such a high-burden country has necessitated responses of a multidimensional nature. Historically, community-based organizations (CBOs) have played an essential role in targeting key populations (e.g., men who have sex with men, sex workers) that are particularly burdened by HIV. CBOs are an essential part of the provision of health services in sub-Saharan Africa, but very little is known about the management practices of CBOs that provide HIV prevention interventions. Methods We interviewed 31 CBO staff members and other key stakeholders in January 2017 about management practices in CBOs. Management was conceptualized under the classical management process perspective; these four management phases—planning, organizing, leading, and evaluating—guided the interview process and code development. Data analysis was conducted thematically using Atlas.ti software. The protocol was approved by the ethics committees of the National Institute of Public Health of Mexico (INSP), the National Agency for the Control of AIDS in Nigeria (NACA), and the Nigerian Institute for Medical Research (NIMR). Results We found that CBOs implement variable management practices that can either hinder or facilitate the efficient provision of HIV prevention services. Long-standing CBOs had relatively strong organizational infrastructure and capacity that positively influenced service planning. In contrast, fledgling CBOs were deficient in organizational infrastructure and lacked program planning capacity. The delivery of HIV services can become more efficient if management practices are taken into account. Conclusions The delivery of HIV services by CBOs in Nigeria was largely influenced by inherent issues related to skills, organizational structure, talent retention, and sanction application. These, in turn, affected management practices such as planning, organizing, leading, and evaluating. This study shows that KP-led CBOs are evolving and have strong potential and capacity for growth, and can become more efficient and effective if attention is paid to issues such as hierarchy, staff recruitment, and talent retention. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-021-06494-1.
Background
Nigeria continues to experience one of the largest HIV epidemics in the world. As of 2019, 1.8 million people were living with the virus nationally [1]. Key populations (KPs), such as female sex workers (FSWs), are at particular risk of infection. In 2016, an estimated 14.4% of FSWs in Nigeria were living with HIV [2]. This population faces a myriad of challenges related to receiving care, including stigma, discrimination, and socioeconomic stressors [3][4][5]. Although HIV prevention and treatment services are largely subsidized in Nigeria, the structure for delivering service to KP (public/CBO clinics) and social norms that discriminate against vulnerable groups like FSW can inhibit these KPs from accessing them [6,7]. Therefore, addressing this burden of disease and creating access to care necessitates engaging CBOs in the care delivery chain. CBOs take different forms and their functions vary by context. A CBO has been defined as "a public or private non-profit organization that represents a community or a specific part of a larger community, and targets meeting a specific need in that community" [8,9]. In the context of this study, a CBO is a group of individuals with common interest in populations most at risk for HIV; their flexible organizational structure and rapport with vulnerable groups are key to implementing prevention services.
Evidence suggests that CBO involvement in the delivery of HIV services can significantly contribute to health services access and the reduction of HIV spread through stigma reduction and effective community mobilization and engagement [10][11][12][13][14]. The invaluable services that CBOs provide to communities are completely donor-driven and donor-funded. However, recent evidence shows that funds from international donors for HIV-related services in low- and middle-income countries have been shrinking [10,15]. The significant decline in the funding of HIV programs has caused concern about sustainability of the funding model [1]. Despite scarcity, Nigeria has historically been one of the highest recipients of Global Fund financing [16]. Since the burden of HIV in Nigeria is high, the country will continue to play a significant role in the global efforts to eradicate HIV. Consequently, donors have emphasized the importance of optimizing the efficiency of existing program funding in order to make the most of limited resources [13,[16][17][18].
Efficiency can be achieved either by minimizing the number of inputs used to provide a health service or by maximizing the number of outputs. Optimization faces barriers in low-income health markets due to the rigidity of the labor market, scarce competition, and lack of skills and training [19][20][21][22]. Under such constraints, it is imperative to find levers that can increase efficiency. Management practices are tools or sets of general practices used by firms to achieve better outcomes [23]. The effective management of human, material, and financial resources is one potential way to optimize under constraints, and has been linked to increased productivity in firms and to efficiency in hospitals and local health facilities [20,[24][25][26][27]. Effective management practices can increase employees' abilities to meet organizational goals [28], and align organizational objectives [29], which in turn increases overall productivity, and efficiency. Due to the importance of management, there have been concerted efforts to identify and measure management practices in health facilities [19,24,26,29].
However, little evidence has been documented about the management practices of CBOs in Nigeria, other than the role they play in economic development [30]. Findings from other contexts, such as the United States, may provide some insights into the management capacity of CBOs implementing HIV programs. Two studies which focused on CBOs were conducted in the United States in 2008 and 2015. The former assessed the capacity of 20 CBOs implementing an HIV/AIDS program in 9 multicultural rural and urban communities, and the latter examined facilitators and barriers to effective project implementation among 72 CBOs working with men who have sex with men. Findings from these two studies showed that CBO management capacity to plan, implement, and evaluate success was very weak [31]. Additionally, CBO managers were not always mindful of the characteristics and skills required for effective leadership in program implementation, for example employing personnel who lacked the capacity for program implementation [17]. Irrespective of human resource capacity, organizational infrastructure is also crucial for efficient service delivery [32]. While these findings may be relevant to CBOs providing HIV services in Nigeria, the different contexts make direct comparison challenging. This study aims to fill the gaps in knowledge about management practices among CBOs providing HIV services to KPs in Nigeria. Nigerian CBOs are unique because of the crucial role they play in reaching KPs with HIV preventive services, and for that reason are particularly ripe for management intervention.
Methods
Using an ethnographic and content analysis framework, we conducted a qualitative investigation of management practices among CBOs in Nigeria [33,34]. This approach also allowed us to engage deeply with study participants and comprehend the many structural and behavioral issues that could influence management practices among CBO managers. In the analysis, we segmented management practices into four interconnected phases, namely planning, organizing, leading, and evaluating, comprising 22 indicators of management practices (Supplementary Table 2). We defined the phases as follows: planning as identifying the goals and objectives of the organization and outlining activities, resources, and the timeline to achieve those objectives; organizing as establishing and maintaining the necessary relationships between human, material, and financial resources through resource allocation; leading as setting clear direction for individuals, groups, and the organization; and evaluating as measuring whether organizational structures are working properly and identifying and addressing barriers as needed [35][36][37].
Sample population
This study focused on managers working in CBOs that have received sub-award grants from an implementing partner. The implementing partner was a large nonprofit organization funded through the SHiPS for MARPs (Strengthening HIV Prevention Services for Most-at-Risk Populations) and the Global Fund Programs. Both funding programs focused on the provision of HIV testing and counselling (HTC), sexually transmitted infection treatment (STIT) and HIV education (HIVE) services to FSWs and other KPs. The goal of the intervention was to expand access to more high-risk populations to ultimately lower the HIV prevalence rates in these groups. The sampling frame was 31 CBOs spread across the north and south geopolitical zones, from which 7 were purposively selected based on their involvement in the implementation of HIV prevention projects for FSWs. Given the scale of insecurity in Nigeria, which made some states in the south and the north inaccessible, sampling of CBOs was limited to Abuja and Nasarawa (in the north) and Lagos (in the south). For the purposes of this study, we focused on services offered by CBOs to FSWs only.
Data collection
In-depth interviews at the 7 CBOs were conducted by the research team in January of 2017. Team members brought varied backgrounds that informed data collection. All questions, contained in an interview guide, were developed by authors based on study themes (Supplementary File 1). The method adopted for data collection enabled a direct interaction with: eight program officers, who were responsible for coordinating all daily activities and managing human resources within CBOs; seven monitoring and evaluation officers, who were tasked with the responsibility of ensuring that activities are carried out in line with the project design and process; two executive directors who provided overall leadership to the CBOs; and five volunteers who were non-permanent staff of CBOs recruited on ad-hoc basis to conduct HIVE, HTC, and STIT outreach activities. We also had face-to-face interactions with nine personnel from the implementing partners. Participants represented a variety of genders and educational backgrounds. The mean length of experience was 24.6 months; minimum length was 1 month and maximum length was 84 months. All interviews were conducted privately in the office spaces of the CBOs or implementing partners, were standalone, and lasted between 45 and 75 min in length. Interviews were digitally recorded, transcribed, and anonymized. Before commencing data analysis, preliminary findings were shared among CBO managers and implementing partners for feedback.
Data analysis
Interviews were transcribed and reviewed for correctness and completeness. We generated a list of codes in line with the research themes. The majority of codes (n = 13; 59.1%) were defined a priori, though some codes (n = 9; 40.9%) emerged during analysis. To ensure inter-coder reliability, the codes were defined and mutually agreed upon by members of the research team. In order to ensure mutual understanding of the coding process between the research assistant and data manager, 10 (33%) of the transcripts were jointly coded, while the remaining transcripts were coded by a research assistant under the supervision of the data manager. Organization and generation of output were conducted primarily by the data manager and data were analyzed thematically using Atlas.ti software, v6.0 [38]. Data were analyzed in two stages. In the first stage, members of the research team categorized the analysis into identified themes. Results were shared among other members for collective review. In the second stage, members of the research team reviewed and refined findings by connecting themes and drawing further insight from the data. Our reporting is aligned with the Consolidated Criteria for Reporting Qualitative Research (COREQ) [39].
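The split between a priori and emergent codes reported above corresponds to simple proportions of the 22-code list; the check below is only an illustration of that arithmetic.

```python
# Illustrative check of the reported code proportions.
a_priori, emergent = 13, 9
total_codes = a_priori + emergent  # 22 codes in total
print(f"A priori codes: {a_priori / total_codes:.1%}")  # 59.1%
print(f"Emergent codes: {emergent / total_codes:.1%}")  # 40.9%
```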
Ethical approval and informed consent
The protocol for this study was approved by the Ethics Committee of the National Institute of Public Health, Mexico (CI-1403), the Health Research Ethics Committee of the National Agency for the Control of AIDS (NACA, FHREC/2016/01/58/08-08-16), and by the Nigerian Institute for Medical Research (NIMR, IRB/17/ 024). Written consent was obtained from all participants through the use of an informed consent form. Consent was also obtained from all participants to digitally audio record the interviews.
Models of CBOs
We found that all CBOs operated heterogeneously around the three health prevention services (HIVE, HTC, and STIT), while still maintaining the procedure imposed by implementing partners. Within this context, we classified CBOs into two principal models, the fledgling and long-standing (Fig. 1). These models differed by corporate status, degree of autonomy, formality, decision-making, sustainability, and ownership. Fledgling CBOs tended to be newer and were created by a cohort of friends who had long-standing relationships leading to a horizontal management structure among them. In contrast, long-standing CBOs tended to be older, comprised of individuals brought together by formal goals rather than by friendship.
The management process, divided into four mutually inclusive stages of planning, organizing, leading, and evaluating, was significantly influenced by the characteristics found among CBOs.
Planning
In the planning phases, three key management issues influencing the provision of health services emerged, namely: organizational infrastructure, planning autonomy, and staff recruitment.
Organizational infrastructure
Organizational infrastructure constitutes CBOs' policies and procedures on the assigned roles and responsibilities of personnel. Results show that long-standing CBOs possessed some degree of organizational infrastructure and implementation capacity that significantly aided the planning phase of service delivery. These attributes also enabled them to integrate and coordinate available human resources around multiple activities. In contrast, fledgling CBOs lacked organizational infrastructure, had low organizational capacity, and relied largely on the external support of the implementing partner to plan activities and to deploy human and material resources. This meant that planning autonomy was largely impaired because planning and operational guidelines were imposed on CBOs by funding requirements, a situation which created a parallel planning structure for long-standing CBOs because they operated multiple projects. It also created a complex procedure for managing human resources, especially because few personnel were in charge of running multiple projects, which often meant multi-tasking between projects. Since most fledgling CBOs had a single project to implement, they were not as encumbered with planning challenges as long-standing CBOs. Lack of autonomy among all CBOs meant that the implementing partner often interfered with the operational procedures, recruitment of personnel and volunteers, and planning of budgets for CBOs. However, the degree of interference varied, and fledgling CBOs experienced greater interference than long-standing CBOs.
[Fig. 1: Divergence and convergence of characteristics among CBOs]
Staff recruitment
This was conducted relying on the social capital of older staff or other related individuals among fledgling CBOs, while long-standing CBOs mostly adopted formalized procedures for staff recruitment. These two methods gave rise to a mix of personnel with varied skills and competences. For example, among fledgling CBOs, skillsets among personnel were rather poor and personnel were prone to organizational challenges for which external support was required. A monitoring and evaluation officer at one of the fledgling CBOs recounted his recruitment procedure using informal networks:

I was once a volunteer and I know a bit about it and I have the passion to work for the community … I met this program officer here because we use to have meeting together and we live in the same community. So, when he saw me he asked me what I was doing I told him "nothing for now", then he said he could involve me in a project. That was how I came in to the organization.

Monitoring and Evaluation Officer, Female, 12 months' work experience, CBO #29

The recruitment procedure allowed CBOs to recruit a staff member who was "doing nothing for now" but knew "a bit" of what was done, and was "passionate" about what was being done. This procedure often led to loss of momentum in project management and implementation, as new staff with very low skills would have to rely on old staff or external partners to help them learn on the job. The loss of momentum resulting from this recruitment procedure would significantly impact on the capacity of CBOs to organize resources and personnel and retain talent.
Organizing
Some of the indicators which measure how CBOs were organized include how their personnel utilize available human and material resources for achieving organizational goals. Findings show that this component of the provision of services was significantly influenced by staff skillset, staff turnover, and financial autonomy.
Staff skillset and staff turnover
We found among fledgling CBOs that personnel were significantly deficient in expertise and skills for project implementation. Often, though, once a substantial level of skill had been acquired, they would move to, as one interviewee called it, "greener pastures", i.e. greater remuneration and/or a higher position at a larger organization. This created high rates of staff turnover which, in turn, led to depletion of personnel. While this was mentioned in several interviews, one monitoring and evaluation officer, referring to the way in which human resources were organized by CBO personnel, summarized the problem succinctly:

I look at what is happening generally I look at the personnel, people that are providing the services, our human resource persons... could it be that is as a result of their poor performance? Could it be that is as a result of their incompetence? Could it be the community members? In every organization, there are internal control and they could be some kind of redeployment, [we recruit] these personnel, we know their capacity and we know their behavior, their level of interaction, attitude and their level of communication and how much they can actually impact. They are under our supportive intervention and mentorship all through like I said, if there is enough fund, we would have hired the best hands, the best experts but since this fund are not there to hire the best experts, like those with B.Sc.
Monitoring and Evaluation Officer, Male, 24 months' work experience, CBO #18
The effect of this on management of human resources was substantial. It was sometimes difficult for managers to coordinate activities effectively, owing to the mismatch in skills of newly employed personnel and tasks assigned. For effective organization of resources, there was evidence of skill exchange and collaboration with other agencies, particularly among fledgling CBOs, in order to enhance reach and impact service delivery. Such practices were not found among long-standing CBOs. Importantly, there were no mechanisms for sharing such creative and innovative ideas between these two models of CBOs.
Financial autonomy
We found that organizing human resources around goals and objectives was significantly influenced by CBOs' lack of financial autonomy. This manifested in two ways. First, the funding model adopted made the implementing partner directly responsible for remunerating a high percentage of volunteers, which often caused volunteers to value or prioritize directives from the implementing partner over those from CBO managers. Sometimes, this hindered CBO managers' efforts to deploy human and material resources for effective service delivery. It also impacted on the routine flow of work, and a monitoring and evaluation officer expressed his frustration at how this structure of financing and remuneration created a bottleneck in management. Second, expenditures were organized around a set of overhead costs at fixed amounts across these CBOs. Limiting CBOs' expenses on recurrent overhead costs created a structure of expenditure that did not consider the peculiarities and contexts of the CBOs, which may necessitate a flexible expenditure pattern. While this rule applied to both models of CBOs, fledgling CBOs were most affected because they did not operate multiple projects. In contrast, long-standing CBOs were able to spread recurrent costs over multiple projects. The rigidity in expenditure patterns complicated financial accountability and expenditure for fledgling CBOs such that initiatives and the effective deployment of human and financial resources were significantly affected.
Leading/directing
One key management facet that influenced the leading and directing phase was shared decision-making. Roles and responsibilities of managers in setting clear direction and influencing others were specialized and not particularly different between the two CBOs. However, we observed a more vertical and rigid hierarchy among long-standing CBOs, and a flexible/horizontal approach among fledgling CBOs. The flexibility among fledgling CBOs promoted shared decision-making, inclusion, and team spirit, which consequently impacted on job satisfaction among personnel. While this flexibility was a hindrance in the planning phase, it was an asset in the leading and directing phase. The report below from one of the personnel further supports the point:

The garbage can method is a kind of management method whereby everyone matters in an organization, even the gatekeeper, the cleaner, everyone brings a suggestion to the table, a suggestion is referred to as garbage, then the garbage is put in a can and then you shuffle it up or we iron it within ourselves and we see (laughs) the best decision to go with, if he is flexible I think it will have a positive effect on the project, that is it helps the project to move forward

Case Management Officer, Male, 12 months' work experience, CBO #15

In contrast, the hierarchical structure of responsibilities and decision-making among long-standing CBOs catalyzed more feelings of discontent and disconnection:

Everything that comes into the organization has to pass through the executive director. … The structure is there it is just that it … . but not fully the way I want it to be. Number one, most times he [the ED] is hardly around and there are some decisions we have [to take] that has to depend on him and the things we need to get the work done is not available and it is quite frustrating

Program Officer, Female, 24 months' work experience, CBO #4

Managers of long-standing CBOs were sometimes unable to make decisions on important and urgent issues affecting service delivery because in most cases executive directors were not available to lead or direct. We also found that executive directors were less involved in the daily management of CBOs, which affected service delivery and sometimes led to delayed or unmet goals.
Evaluating
Evaluation was an integral part of the provision of services. However, we found that logistical challenges and the use of sanctions and rewards affected service delivery during the evaluating phase.
Logistical challenges
Logistical challenges significantly hampered monitoring and supervision activities. Service delivery for CBOs was target-driven and the framework for evaluating performance was largely hinged on the ability of CBOs to meet these targets. This often heightened the pressure on personnel to perform, and because of the shortfall in logistic support for monitoring and evaluation, volunteers sometimes resorted to dishonest and unethical practices to meet targets. In addition, poor communication and logistics caused some CBOs to experience overlap in demarcated territories. This overlap of service area meant efforts were duplicated or "double counted", i.e. recorded or reported more than once. This likely increased project expenditure and affected optimal resource allocation. A point from one of the monitoring and evaluation officers provides more insight:

What I mean by double counting is … , you have two different facilitators probably working on the same site. And when they get to that site probably one facilitator may not know that the other facilitator is already capturing a particular peer and that particular facilitator will capture that same peer and when it gets to me I will observe that we can't be servicing the same person over and over again because we give refreshment to them we give item.

Monitoring and Evaluation Officer, Female, 6 months' work experience, CBO #6
Sanctions and rewards
Despite evidence of some unethical practices and poor performance, very few CBOs applied sanctions. When sanctions were applied, such as through removing personnel or implementing salary deductions, they were usually preceded by corrective measures such as a verbal or written warning. Long-standing CBOs applied such sanctions more often than fledgling CBOs. The type of relationship among managers of these two kinds of CBOs (bonded, integrated, and informal among fledgling CBOs; structured and formal among long-standing CBOs) might have accounted for the relative difference in their application of sanctions. Sanctions that were applied took the form of threats and salary deductions, as reported by one of the executive directors:

Okay. Like when they are late in turning in their reports, like there was one incidence that happened that their salaries were reduced because they didn't turn up their report, not just turning up their report but the sessions, the number of sessions they were supposed to have, they didn't have up to that session so we now reported to [The implementing partner] and some part of their salary was deducted that time.

Executive Director, Female, 60 months' work experience, CBO #26

While sanctions were rarely applied, a positive reward system existed across the two models of CBOs, where personnel who performed well were rewarded in different ways, including promoting exceptional volunteers to the next vacancy.
Discussion
Our study investigates management practices among CBOs providing HIV services to KPs in Nigeria. Findings from this study highlight several management issues that either hinder or facilitate the provision of HIV prevention services. These barriers and facilitators were replicated in both types of CBOs (long-standing and fledgling) to varying degrees. Reward for good performance, for instance, was a facilitator while rare application of sanctions was a barrier. When these divergent elements are present within the same CBO, it is difficult to know how the effect of one (facilitator) neutralizes the effect of the other (barrier). For example, fledgling CBOs were enthusiastic and passionate, yet suffered from a lack of formal organizational infrastructure. One potential remedy to this barrier may be knowledge sharing. Long-standing CBOs demonstrated a fair degree of organizational infrastructure, and this attribute might be transmissible to fledgling CBOs if opportunities were created for knowledge and experience sharing between the two models [40,41].
Hierarchy was a major challenge among long-standing CBOs. The vertical structure of management had multiple layers which disrupted the flow of decisions from top to bottom. As was observed in this study, managers in organizations with hierarchical structure serve as liaisons to manage the gaps and translate expectations from upper management to frontline employees, which can lead to weak interactions between top management and frontline staff [42]. Thus, among most long-standing CBOs, field staff were not only disconnected from upper management, but communication between upper management and middle management was also fractured. This made decision-making cumbersome and often led to frustration among personnel [43,44]. This barrier was potentially disruptive to the provision of services as important decisions that could facilitate effective and efficient service delivery were delayed. In contrast, fledgling CBOs had a flat structure which empowered staff to make a range of decisions on their own given the seamless flow of information from top down and from bottom up. Thus, hierarchical structure among longstanding CBOs did not encourage inclusion, commitment, and decision-making, but a flexible and flat structure, typically found among fledgling CBOs, inspired confidence, inclusion, and passion [45].
Enthusiasm and passion of staff facilitated effective service delivery among fledgling CBOs, supporting evidence that passion and commitment are related features of emotional contagion among personnel that can positively influence productivity [44]. These were the main management attributes that stood out among fledgling CBOs and contributed significantly to the provision of services. Enthusiasm and passion encourage teamwork and highly motivated staff [43,44,46]. While this model of CBOs suffered organizational challenges, its lack of structure allowed for a greater degree of flexibility. A united workforce, where each employee has a good grasp of program goals and objectives, has positive effects on the organization. Teamwork can make more effective and efficient use of labor and can improve productivity by maximizing the different strengths and skills of team members so that a greater variety of tasks may be tackled [47]. It reduces workloads for all employees by enabling them to share responsibilities or ideas [43]. Studies on teamwork have shown that the more widespread teamwork is in an organization, the higher the level of organizational innovation [44]. In contrast, teamwork can also be accompanied by unwanted phenomena that result in performance loss or poor decision-making [46,47]. While studies have shown the pros and cons of teamwork [43,44], results from this study show that CBOs, particularly fledgling CBOs, can leverage this facet in order to promote inclusion, innovation, and improved service outcomes.
Having multiple projects, which was a main feature of long-standing CBOs, was instrumental in offsetting the organizational setback that lack of financial autonomy could cause. Sharing cost among projects helped longstanding CBOs to reduce the burden of rigid overhead costs. Decision-making is a key management issue that promotes growth, efficiency, and effective service delivery [43,45]. Inability of long-standing CBOs to make decisions might have negatively impacted their potential for efficiency. In other words, were hierarchy to be flat and decision-making inclusive, service delivery might be more effective and efficient.
Rewarding performing staff is an effective means of motivating not just personnel who are productive, but also of influencing those with poorer performance [42,48]. Financial rewards are by no means the only way to motivate. Non-financial rewards are also important in directing and shaping desired behaviors among employees [48,49]. Service delivery among CBOs is target-driven and the ability to meet or surpass targets is often coupled with financial and non-financial rewards. Pay raises, verbal and written recognition, awards and certificates are some of the reward systems used by CBOs. Rewards can help to motivate personnel via reinforcement theory, which states that behavior can be reinforced by basic stimulus-response linkages [50]. In other words, when personnel are rewarded for desired behavior, they are more likely to behave that way in the future [49]. This aspect of management might account for why some CBOs would deliver on targets despite rare application of sanctions. It could also indicate that positive reward may have the potential to suppress behaviors that would otherwise necessitate the use of sanctions.
Another potential way to improve service delivery would be a more formal system for talent attraction and retention that could help both models of CBOs to achieve better results. The lack of deliberate policy geared towards the creation of a workforce heightened the depletion of human resources and slowed down the pace of work. Creating a workforce plan can help monitor strategies on the attraction and retention of talent [51,52]. These strategies include recruitment, induction, performance management, professional development, and succession planning [32]. However, the two models of CBOs discussed in this study fall short of this and, as a result, they constantly experienced loss of momentum in management and project implementation. One of the features of such practice is a skill mismatch between the tasks meant to be done and the skills required to implement them, as well as frequent and high turnover rates of staff. Investment of time, financial, and human resources in retraining personnel could avoid regular interruption in service delivery.
Limitations
This study aims to understand management practices among KP-led CBOs spread across Nigeria. Due to security concerns in the country and resulting transportation challenges, we could not sample from all six geopolitical zones of the country. Our sample was geographically limited to Abuja, Lagos, and Nassarawa states, and therefore disproportionately reflects views from urban and suburban CBOs. These CBOs may face different challenges than more rural CBOs. We also focused primarily on KP-led CBOs and only on those serving FSW. Therefore, findings from this study might not be generalizable to CBOs serving other populations or that focus on services other than HIV transmission prevention.
Conclusions
This qualitative study highlights the importance of management practices in efficient delivery of HIV services to KPs by CBOs in Nigeria. CBOs providing HIV services to KPs can become more efficient and effective if attention is paid to issues such as hierarchy, organizational infrastructure, staff recruitment procedure, talent retention, and knowledge sharing between the different CBO models. The analysis also reveals that, far from being a monolith, the practices of management at each model of CBO are diverse. It is key for funding entities to keep this in mind as they continue to support CBOs, since they are uniquely positioned to support facilitators and discourage barriers to productivity. Likewise, it is important that future research examining or intervening on management practices at CBOs account for the heterogeneity of management implementation. At the same time, CBOs are evolving and have a capacity for growth that is ripe for intervention. | 2021-05-22T13:41:38.336Z | 2021-05-22T00:00:00.000 | {
"year": 2021,
"sha1": "7f1d69f9f7be3166229df145d5586ff184277922",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-021-06494-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f1d69f9f7be3166229df145d5586ff184277922",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231960654 | pes2o/s2orc | v3-fos-license | Mitochondrial Calcification
One of the most fascinating aspects of mitochondria is their remarkable ability to accumulate and store large amounts of calcium in the presence of phosphate, leading to mitochondrial calcification. In this paper, we briefly address the mechanisms that regulate mitochondrial calcium homeostasis, followed by an extensive review of the formation and characterization of intramitochondrial calcium phosphate granules that underlie mitochondrial calcification and its relevance to physiological and pathological calcification of body tissues.
mediated by mitochondria [8], which play a crucial role in maintaining cellular calcium homeostasis by scavenging excessive cytosolic Ca2+ as Ca-P complexes. In both contexts, the two key phases of mineralization are the accumulation of calcium and phosphate ions to promote nucleation and crystal formation, usually of hydroxyapatite (HA) [Ca10(PO4)6(OH)2], followed by the exposure of this preformed apatite material to extracellular fluid, which promotes crystal proliferation and thus begins the process of mineralization and crystal deposition.
MITOCHONDRIAL CALCIUM TRANSPORT
Since the focus of this viewpoint is mitochondrial calcification rather than calcium regulation of mitochondria, in this section we will briefly address important aspects of mitochondrial calcium uptake and efflux including mechanisms that regulate mitochondrial calcium homeostasis (Figure 1). For more information on this broad scientific area readers are referred to some exhaustive reviews [9][10][11].
MITOCHONDRIAL Ca2+ UPTAKE
Direct evidence that mitochondria rapidly accumulate Ca2+ has been known since the 1960s [10][11][12][13]. The movement of Ca2+ ions in and out of mitochondria is a concerted activity of ion transporters on the outer mitochondrial membrane (OMM) and inner mitochondrial membrane (IMM). The OMM is highly permeable to various ions, including Ca2+, whose transport is mediated by a non-selective porin, the voltage-dependent anion channel [14,15]. In contrast, calcium entry through the IMM into the matrix is facilitated primarily by a highly calcium-selective channel, the mitochondrial calcium uniporter (MCU), located in the IMM [16][17][18]. MCU exhibits lower affinity for Ca2+ (Kd around 10-20 μM) and higher conductance rates than Ca2+ uptake channels in the endoplasmic reticulum (ER), which makes it suitable to respond to the large increases in cytosolic Ca2+ that occur physiologically at the calcium release or entry points of ER and plasma membrane calcium channels, and to pathological Ca2+ overload [19]. MCU-mediated Ca2+ entry into mitochondria is an electrogenic process driven by the steep mitochondrial membrane potential, Δψm (~150-180 mV, matrix negative), across the IMM established by the respiratory chain or by reverse-mode ATP synthase activity [13]. Accordingly, proton ionophores such as carbonyl cyanide p-trifluoromethoxyphenylhydrazone (FCCP) that dissipate Δψm suppress mitochondrial Ca2+ accumulation, whereas selective inhibitors of MCU, mainly ruthenium red-based compounds and the small-molecule inhibitor DS16570511, directly inhibit Ca2+ uptake [20][21][22]. Despite the enormous thermodynamic pull, Ca2+ levels in mitochondria are maintained at resting levels (~100 nM), suggesting the presence of mechanisms that maintain baseline mitochondrial Ca2+ by directly regulating the activity of MCU [23]. These regulatory mechanisms are critical to ensure that MCU acts as a gate-keeper, preventing channel opening at resting cytosolic Ca2+ levels, thus avoiding deleterious/futile calcium cycling and matrix overload while allowing a prompt mitochondrial calcium uptake response when cytosolic Ca2+ increases. MCU, a 40 kDa protein, functions as a tetramer in which a single protomer is composed of two transmembrane domains (TM1 and TM2) joined by a highly conserved short loop facing the intermembrane space (IMS), with the N- and C-terminal domains facing the mitochondrial matrix. The motif between TM1 and TM2, characterized by negatively charged residues (the DIME motif), serves as the selectivity filter of the MCU channel [24,25]. MCU does not have classical Ca2+-sensing domains and hence cannot regulate its own activity. Indeed, the activity of MCU is regulated by the EF-hand-containing Ca2+-binding proteins mitochondrial calcium uptake 1 (MICU1) and mitochondrial calcium uptake 2 (MICU2), found in the IMS, along with the IMM protein essential MCU regulator (EMRE). A study suggested that, by exerting opposing effects, MICU1 and MICU2 heterodimers fine-tune the activity of MCU: at lower cytosolic Ca2+ levels the dominant inhibitory effect of MICU2 shuts down MCU activity, whereas the conformational change induced in the dimers by increases in cytosolic Ca2+ releases the MICU2-dependent inhibition of MCU, triggering MICU1-mediated augmentation of MCU channeling activity [26]. EMRE [27], a transmembrane protein, is critical for the assembly of functional MCU and promotes MCU interaction with the regulatory subunits MICU1 and MICU2, thus contributing to channel gating.
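To give a rough sense of the "enormous thermodynamic pull" referred to above, the Nernst relation can be used to estimate the matrix/cytosol Ca2+ ratio that would be reached at electrochemical equilibrium if MCU were left ungated. The sketch below is illustrative only; the chosen values of Δψm, temperature, and resting cytosolic Ca2+ are assumptions taken from the ranges quoted in this section.

```python
import math

# Illustrative estimate of the equilibrium Ca2+ distribution across the IMM.
# Assumed values: Δψm = -0.18 V (matrix negative), T = 310 K, z = +2 for Ca2+.
F = 96485.0         # Faraday constant, C/mol
R = 8.314           # gas constant, J/(mol*K)
T = 310.0           # absolute temperature, K
z = 2               # charge of Ca2+
delta_psi = -0.180  # membrane potential, V (matrix relative to cytosol)

# At electrochemical equilibrium: [Ca]matrix/[Ca]cytosol = exp(-z*F*Δψ / (R*T))
ratio = math.exp(-z * F * delta_psi / (R * T))
cytosolic_ca_nM = 100.0  # typical resting cytosolic Ca2+, nM (assumed)
print(f"Equilibrium matrix/cytosol ratio: {ratio:.2e}")
print(f"Hypothetical equilibrium matrix Ca2+: {ratio * cytosolic_ca_nM * 1e-9 * 1e3:.1f} mM")
```

Since measured resting matrix Ca2+ stays near 100 nM rather than anywhere near this hypothetical equilibrium value, the calculation illustrates why MICU1/MICU2 gatekeeping of MCU is essential.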
Also, MCUb, a paralog of MCU, was demonstrated to be an endogenous dominant-negative subunit of MCU that greatly impairs the Ca2+ permeation properties of MCU [28]. Interestingly, the expression and relative proportions of MCUb vary significantly among tissues, contributing to the tissue-specific variations in mitochondrial calcium uptake rates observed in different mammalian tissues. For example, skeletal muscle exhibits a high MCU:MCUb ratio, which matches its high mitochondrial calcium conductance rates [28,29], whereas the adult heart exhibits relatively elevated expression of MCUb resulting in considerably lower MCU activity [28,29]. In cardiac cells, where mitochondria occupy about 37% of cell volume, such regulation through the higher expression of MCUb is crucial to prevent massive mitochondrial Ca2+ accumulation and dysfunction, and to avoid undesired cytosolic Ca2+ buffering that would impair cardiac contractile activity. Further, induction of MCUb expression was shown to be a stress-responsive mechanism to overcome calcium overload following cardiac injury [30]. Future studies in this area should explore how various physiological and pathological stimuli alter the MCU:MCUb ratio and the consequences of such altered expression for the mitochondrial Ca2+ uptake sensitivity and loading capacity of tissues, with implications for tissue calcification. Two additional MCU regulators are MCU regulator 1 (MCUR1) and solute carrier family 25 member 3 (SLC25A23). Silencing of MCUR1 abrogated MCU-dependent mitochondrial Ca2+ uptake in both basal and stimulated conditions [31], and MCUR1 was found to be critical for the full assembly of MCU via its interaction with MCU and EMRE [32]. SLC25A23, an IMM protein with ATP-Mg/Pi carrier function, represents another regulator of MCU, given that silencing of SLC25A23 reduced MCU activity and thus Ca2+ influx into mitochondria following stimulation [33]. Although MCU is considered to be the predominant mechanism of mitochondrial Ca2+ uptake, MCU-KO mice interestingly had significantly reduced but detectable levels of matrix Ca2+, with only relatively minor alterations in the functions dependent on mitochondrial Ca2+ influx: mitochondrial respiration and basal metabolism. The only substantial defect is a decrease in skeletal muscle peak performance, indicating that in vivo alterations in matrix Ca2+ are most important for adapting to higher energy demands, as in strenuous muscle work [34,35]. Overall, the observations from MCU-KO mice suggest the presence of additional MCU-independent Ca2+ uptake mechanisms in mitochondria [36][37][38][39][40][41]. In addition, the possibility that in the absence of MCU, mitochondrial Ca2+ efflux mechanisms work in reverse mode, thus bringing Ca2+ into the matrix rather than exporting it, cannot be ruled out [34]. For an extensive summary of genetic manipulations of MCU and their effects on mitochondrial Ca2+ uptake and phenotypes in different cell lines and species, see De Stefani et al. [42].
MCU-INDEPENDENT MITOCHONDRIAL Ca2+ UPTAKE
Other potential Ca2+ uptake pathways reported in mitochondria include the rapid mode of Ca2+ uptake (RaM) [41,42] and Ca2+ influx through mitochondrial ryanodine receptor 1 (mRyR1) functioning in excitable cells [43,44], among others [14,40]. Of these multiple mitochondrial Ca2+ influx mechanisms, RaM, described as a kinetic mode of Ca2+ uptake, operates very rapidly (about a hundred times faster than MCU [37]) and responds to transient, low cytosolic Ca2+ pulses of <200 nM. Its conductance is brief: it is inhibited at extramitochondrial Ca2+ levels greater than 200 nM by Ca2+ binding to an external inhibitory site and is reset by a drop in external Ca2+ levels [38]. Fast uptake of Ca2+ can nevertheless create transient sites of high matrix Ca2+ that can activate ADP phosphorylation [15,16]. However, the levels are not sufficient for global cytosolic Ca2+ buffering or for induction of the mitochondrial permeability transition pore (mPTP), a large pore in the inner mitochondrial membrane that increases mitochondrial permeability to solutes of up to 1.5 kDa and whose persistent opening can lead to cell death [17]. Hence, RaM seems to have evolved to regulate the rate of oxidative phosphorylation by generating brief, high free matrix Ca2+ levels with relatively small amounts of Ca2+ [37]. Such a mode of transient, rapid, and low mitochondrial Ca2+ uptake may be more relevant to tissues like the heart, with very short but frequent Ca2+ pulses, thus protecting them against matrix Ca2+ overload and the opening of mPTP while still activating Ca2+-sensitive metabolic reactions. mRyR1, mainly characterized in excitable cells such as cardiac muscle cells, is another fast Ca2+ uptake pathway in mitochondria that is active in the micromolar range (10-50 μM) of Ca2+ and is inactivated at higher concentrations [39]. Since mRyR1, unlike MCU, has relatively low selectivity for Ca2+ with high conductance rates, it can rapidly dissipate Δψm. This energetically unfavorable process is presumably prevented by the lower number of mRyR1 channels on a single mitochondrion, so that membrane depolarization is localized and quickly corrected by metabolic activity [14,40]. The unique Ca2+ dependence of the various Ca2+ influx channels suggests specific roles in the different cytosolic Ca2+ environments of different tissues. However, modulation of their function in (patho)physiological conditions remains to be explored.
For mitochondrial calcification, Ca2+ uptake via MCU seems to be the major mechanism. RaM and mRyR1, although they have high conductivity [37], operate only transiently around physiological or modestly elevated extramitochondrial Ca2+ levels [38], unlike MCU, which operates even under more extended and higher cytosolic Ca2+ pulses with relatively slow conductance, thus mediating the large matrix Ca2+ accumulation necessary for calcification.
MITOCHONDRIAL Ca2+ EFFLUX PATHWAYS
Na+-Dependent Mitochondrial Ca2+ Efflux
As to the efflux mechanisms, Ca2+ can be exported from the matrix via Na+-dependent or Na+-independent mechanisms. The Na+/Li+/Ca2+ exchanger (NCLX) in the inner mitochondrial membrane [18,43,44], ubiquitously found in most cell types and particularly robust in excitable cells, catalyzes the exchange of Na+ or Li+ for Ca2+. Although the precise stoichiometry of NCLX is still unclear, the general consensus has been an influx of 3 Na+ per 1 Ca2+ effluxed, indicating that NCLX is also electrogenic [45], similar to its counterpart in the plasma membrane (the Na+/Ca2+ exchanger, NCX). The unique feature of mitochondrial NCLX is Li+-mediated Ca2+ transport in addition to Na+/Ca2+ exchange [46], hence the name NCLX instead of NCX. Since Ca2+ influx into the mitochondrial matrix is driven by the negative electrochemical gradient, the influx process is energetically downhill, whereas efflux is uphill and requires energy. The minimum energy required for the export of 1 mole of Ca2+ from mitochondria is calculated to be 33.04 kJ/mol [47]. The energy requirement for such transport could be met by ATP hydrolysis, energy from ETC activity via oxidation of substrates, coupling of Ca2+ efflux to another ion moving down its electrochemical gradient, or some combination of these. The Na+ ion, whose matrix concentration is maintained lower than cytosolic Na+ levels by a Na+/H+ exchanger [19], meets such an energy requirement. Hence, the large negative Δψm coupled with the Na+ gradient (lower inside) provides the driving force for extruding Ca2+ from the matrix against its gradient through NCLX. The electrogenic nature of NCLX implies that in depolarized mitochondria NCLX would function in reverse mode, mediating the influx of Ca2+ rather than its extrusion [23]. Despite the profound effect of NCLX on mitochondrial Ca2+, as shown in gene silencing and overexpression experiments [44], NCLX does not affect the steady-state resting level of mitochondrial Ca2+. This suggests a low affinity of NCLX for Ca2+ [48] and a prominent role during rapid and robust matrix Ca2+ changes in restoring mitochondrial Ca2+ levels to baseline.
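As a rough check on the figure of ~33 kJ/mol quoted above for exporting one mole of Ca2+, the electrical work against Δψm can be estimated directly. The membrane potential used below is an assumption within the 150-180 mV range given earlier, and the concentration term is neglected because matrix and cytosolic free Ca2+ are of the same order at rest.

```python
# Rough estimate of the minimum work to move 1 mol of Ca2+ out of the matrix
# against the membrane potential (electrical term only).
F = 96485.0          # Faraday constant, C/mol
z = 2                # charge of Ca2+
delta_psi_V = 0.171  # assumed magnitude of Δψm, V (within the 150-180 mV range)

work_kJ_per_mol = z * F * delta_psi_V / 1000.0
print(f"Electrical work to export 1 mol Ca2+: {work_kJ_per_mol:.1f} kJ/mol")
# ~33 kJ/mol, consistent with the value cited in the text; in NCLX this cost
# is paid by the inward Na+ gradient maintained by the Na+/H+ exchanger.
```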
While the exact kinetics of these mitochondrial efflux mechanisms may vary significantly between tissues, overall, the kinetics of efflux rate is always much slower than influx, and this kinetic imbalance is apparent from Vmax values of these mechanisms. For example, initial studies established Vmax of MCU to be 1400 nmol Ca 2+ (mg protein) −1 min −1 compared to combined Vmax of 20 nmol Ca 2+ (mg protein) −1 min −1 for efflux mechanisms [52]. This kinetic imbalance leads to two questions: (1) Why is the efflux rate through these Ca 2+ selective mechanisms slower than influx? (2) How do mitochondria overcome pathological matrix Ca 2+ overload that could ensue since the influx rate exceeds that of combined Ca 2+ selective efflux pathways? Ca 2+ accumulation by mitochondria is a function of extramitochondrial Ca 2+ levels [53]. Hence a higher efflux rate would mean higher cycling of Ca 2+ across the IMM, which will be met at the expense of increased proton conductance manifesting as a decrease in proton electrochemical gradient and hence in increased respiration, suggesting that respiratory capacity would be spent on Ca 2+ recycling [54]. Thus, having low Vmax and easily saturable efflux pathways would limit the energy to be spent on mitochondrial Ca 2+ cycling. However, such a kinetic imbalance would expose mitochondria to a threat of Ca 2+ overload, which can be overcome by the opening of a high conductance channel in the IMM such as mPTP that shows a prominent dependence on matrix Ca 2+ for its activation [17,55]. The mPTP open and closed transition states are modulated by various endogenous effectors, and the consequences of pore opening vary dramatically based on the open time [56]. mPTP is a large, non-selective channel, which in its fully open state has a permeability cutoff for molecules up to 1500 Da. Thus, with a long term opening, transport of ions and molecules occurs between mitochondria and cytosol followed by the influx of water resulting in mitochondrial swelling. Eventually, OMM ruptures with the release of proapoptotic proteins from mitochondrial IMS into the cytosol, potentially leading to apoptotic cell death or necrosis.
Interestingly, transient openings or "flickerings" of mPTP have been reported, suggesting that mPTP may also play a physiological role in Ca 2+ efflux. Thus, mPTP is also considered to be one of the important matrix Ca 2+ efflux mechanisms [55]. Unlike other mitochondrial Ca 2+ efflux mechanisms, mPTP is not selective for Ca 2+ . Such an ion non-selectivity may facilitate a unique advantage to mPTP in overcoming the opposition by diffusion potential (−30 mV) that is generated across the IMM due to Ca 2+ efflux through Ca 2+ selective channels. Thus, in the absence of compensating ion transport, i.e., the influx of positive charges and efflux of negative charges, the efflux of Ca 2+ through Ca 2+ selective channels would be extremely slow. One way to overcome the magnitude of diffusion potential and subsequently to increase the rate of Ca 2+ efflux is to increase the IMM permeability, for example, by increasing the H + conductance. The ion non-selectivity of mPTP allows the charge compensation within a single channel itself at zero potential, thus allowing the rapid efflux of Ca 2+ from matrix regulated by the modulation of the mPTP open time. Since there is no concentration gradient for Na + and K + across IMM, mPTP is, in a way, selective for Ca 2+ transport from mitochondria [56]. Given the low affinity of mPTP Ca 2+ binding sites (Kd 25 μM), the Ca 2+ concentration required for the activation of mPTP is relatively higher than the concentration for ADP phosphorylation (20 nmol/mg vs 4 nmol/mg protein), suggesting that higher matrix Ca 2+ overload is required for pore activation [57]. Matrix modulators like elevated levels of mitochondrial reactive oxygen species (mtROS) can decrease the amount of Ca 2+ required for mPTP activation in pathological conditions. It should be noted that pore opening itself can also contribute to the generation of mtROS [58]. Other mPTP inducing agents include Pi, oxaloacetate, and acetoacetate, while adenine nucleotides and Mg 2+ are common endogenous inhibitors of mPTP activation, including acidic pH and high membrane potential [56]. Incidentally, membrane depolarization and increase in matrix pH subsequent to Ca 2+ overloading promote the activation of mPTP.
The depolarization of Δψm in turn, results in the reversal of mitochondrial F0-F1 ATP synthase, thus promoting ATP hydrolysis. Since Mg 2+ has a ten-fold higher ATP affinity, ATP hydrolysis would increase the matrix Mg 2+ levels [59]. The combination of these events would increase the concentrations of mPTP activation inhibitors (Mg 2+ and ADP), leading to pore closure restoring Δψm. This would explain the basis for transient openings of mPTP in vivo (as detailed in the review, Bernardi [56]. Another possibility for mPTP flickering could be during rapid Ca 2+ influx through RaM and mRyR1, where mPTP at these high Ca 2+ microdomains could be activated, leading to Ca 2+ -induced Ca 2+ release (discussed in Gunter and Sheu [60]). Depending on the matrix Ca 2+ load, released Ca 2+ can trigger Ca 2+ uptake into adjacent mitochondria.
MITOCHONDRIAL Ca2+ UPTAKE AND CROSSTALK WITH ROS
The major targets of mitochondrial Ca 2+ are rate-limiting enzymes of tricarboxylic acid (TCA) cycle that are activated in different mechanisms: isocitrate dehydrogenase and ketoglutarate dehydrogenase are directly activated by Ca 2+ binding whereas pyruvate dehydrogenase (PDH) activation depends on Ca 2+ -regulated PDH phosphatase [10,61]. The activation of TCA boosts the synthesis of reducing equivalents, NADH and FADH2, substrates of electron transport chain (ETC), thus enhancing the ETC activity and subsequent increase in proton-gradient. In addition, mitochondrial Ca 2+ also stimulates the activities of adenine nucleotide transporter [62] and complex V (mitochondrial F 0 F 1 ATP synthase) [63], which by harnessing proton gradient generates ATP. Overall, a rise in matrix Ca 2+ in response to an increase in cytosolic Ca 2+ , which invariably is associated with stimulated cells, allows mitochondria to decode the energy demands of cell stimulation and adjust ATP synthesis accordingly. Since mitochondrial ETC is one of the main sites that generate cellular reactive oxygen species (ROS) in physiological and pathological conditions, Ca 2+ accumulation in the matrix during cellular activation can directly contribute to mtROS by promoting mitochondrial metabolism. Mitochondrial Ca 2+ also activates nitric oxide synthase, whose product nitric oxide inhibits complex IV enhancing mtROS generation [56]. Matrix Ca 2+ overload in conjunction with oxidative stress activates the opening of mPTP. The opening of mPTP results in the rapid collapse of Δψm and membrane depolarization resulting in increased mtROS. An independent study has shown that Ca 2+ induces ROS via Ca 2+ -mediated complex II disintegration by binding to cardiolipin, a principle IMM anionic lipid that promotes complex II stability. However, when bound by Ca 2+ in the conditions of matrix overload, cardiolipin coalesces into separate homotypic clusters releasing the enzymatically competent sub-component of complex II that generates ROS by transferring electrons from succinate to molecular oxygen [64]. Oxidative stress, in turn, stimulates mitochondrial Ca 2+ overload by mPTP. Available evidence shows that various calcium transport systems are sensitive to redox conditions [65]. This includes oxidants that impair Ca 2+ influx into endoplasmic reticulum and extrusion from the plasma membrane via inhibition of sarco (endo) plasmic reticulum Ca 2+ -ATPase [66,67] and plasma membrane Ca 2+ -ATPase, respectively [68,69] complemented by increased release from endoplasmic reticulum Ca 2+ stores [70,71]. The resultant increase in the cytosolic Ca 2+ causes transient opening of mPTP to prevent cell from cytosolic overload but stimulating mitochondrial Ca 2+ overload. Interestingly, in in vitro conditions, inflammation and hypoxia-induced oxidative stress were shown to regulate MCU-mediated mitochondrial Ca 2+ uptake independent of cytosolic Ca 2+ by relieving it from gatekeeping of MICU1/ MICU2, thus resulting in augmented mitochondrial Ca 2+ at baseline cytosolic Ca 2+ [57]. Specifically, in the conditions of enhanced mtROS, conserved cysteine residue in the NTD of MCU undergoes redox modification (S-glutathionylation) that induces a conformational change MCU promoting high order oligomerization and persistent activation even in resting conditions despite the presence of functional MICU1/MICU2 [57]. 
The increased MCU activity with a constitutive elevation of mitochondrial Ca 2+ , in turn, led to overproduction of mtROS, perturbed mitochondrial bioenergetics, and apoptosis. Overall, these data suggest that Ca 2+ and ROS create a self-perpetuating cascade that can culminate in the mitochondrial Ca 2+ overload and perturbed cell functions [59]. Further, in the conditions of oxidative stress, Na + /Ca 2+ exchangers, the Ca 2+ efflux mechanisms function in a reverse mode promoting calcium influx rather than efflux of matrix Ca 2+ [72].
MITOCHONDRIAL MATRIX CALCIUM BUFFERING: FORMATION OF Ca-P COMPLEXES
Mitochondrial matrix Ca2+ modulates various processes, including stimulation of aerobic mitochondrial metabolism, suppression of autophagy, regulation of cell life/death processes and Ca2+-induced Ca2+ feedback, cytosolic Ca2+ buffering, and spatial restriction of Ca2+ waves (discussed in the review by Patron et al. [26]). Thus, the maintenance of matrix Ca2+ levels is essential; it is a function of Ca2+ influx and efflux across the mitochondrial membranes as well as of Ca2+ buffering. Mitochondrial Ca2+ buffering capacity, expressed as the ratio of total to free Ca2+, is in the range of 30,000 to 150,000 for physiological and pathological conditions, respectively, suggesting the enormous importance of the organelle's Ca2+ buffering [73][74][75]. Net uptake of Ca2+ into mitochondria is coupled to the co-transport of Pi, resulting in the formation of Ca-P complexes [18][19][20][21]. Since mitochondria, unlike the endoplasmic reticulum [76], do not have specialized Ca2+-binding proteins, complex formation with Pi is considered the major mechanism of buffering matrix Ca2+, contributing to mitochondria's massive calcium storage ability [22][23][24]. In fact, it was shown that a linear relationship exists between total and free calcium below 10 nmol Ca2+/mg of mitochondrial protein, beyond which (in the range of 1-5 μM) matrix free calcium remains invariant due to buffering by calcium phosphate [75]. Consistently, depletion of mitochondrial Pi resulted in the loss of mitochondrial calcium homeostasis with uncontrolled matrix free Ca2+ levels [75]. Pi enters the matrix through the phosphate carrier (PiC), or phosphate transporter, whose main physiological role is to function as a Pi:H+ symporter. The PiC transports Pi together with H+, which is equivalent to transporting its fully protonated form, H3PO4. Since the phosphate form that interacts with matrix Ca2+ is PO4 3−, the phosphate has to undergo three stepwise deprotonations (H3PO4 to H2PO4 − to HPO4 2− to PO4 3−), and thus the concentration of PO4 3− in the matrix is inversely proportional to the third power of the matrix proton concentration. Ca2+ accumulation in the matrix decreases Δψm, which is compensated by the net expulsion of H+ by the respiratory chain. If this were the only compensation, Ca2+ accumulation would eventually stop, since the entire Δψm would be converted into a proton gradient (ΔpH); note that Ca2+ influx into mitochondria is driven by the Δψm component of the proton motive force. However, Pi transport, which increases with increasing ΔpH (provided external Pi is present), will neutralize the rising matrix pH, facilitating continued Ca2+ accumulation and the formation of reversible Ca-P complexes between the transported Pi and the accumulated matrix Ca2+ [77]. At around ten nmol Ca2+ mg protein−1 in the mitochondrial matrix, there is a kinetic balance between influx and efflux at which the efflux pathway becomes independent of matrix Ca2+; this is called the set point, since it is at this concentration that Ca-P complexes begin to form, thus buffering matrix Ca2+ [75]. It should be noted that these Ca-P complexes are osmotically inactive, thus preventing mitochondrial matrix swelling as ion accumulation progresses [26].
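The statement above that matrix PO4 3− rises steeply with matrix pH can be illustrated with a simple speciation calculation based on the three acid dissociation constants of phosphoric acid. The pKa values below are standard aqueous textbook figures used only as an assumption; ionic strength and Mg2+ binding inside the matrix are ignored, and the pH values are illustrative, not measurements from this paper.

```python
# Sketch: fraction of total phosphate present as each species vs pH,
# using standard aqueous pKa values (pKa1 2.15, pKa2 7.20, pKa3 12.35).
def phosphate_fractions(pH, pKa=(2.15, 7.20, 12.35)):
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in pKa)
    # Denominator of the speciation expression for a triprotic acid
    denom = h**3 + k1*h**2 + k1*k2*h + k1*k2*k3
    return {
        "H3PO4":  h**3 / denom,
        "H2PO4-": k1*h**2 / denom,
        "HPO42-": k1*k2*h / denom,
        "PO43-":  k1*k2*k3 / denom,
    }

for pH in (7.0, 7.8, 8.2):  # cytosol-like vs alkaline matrix values (assumed)
    f = phosphate_fractions(pH)
    print(f"pH {pH}: HPO4 2- = {f['HPO42-']:.2f}, PO4 3- = {f['PO43-']:.1e}")
```

Although the absolute PO4 3− fraction remains small, it rises steeply as the matrix becomes more alkaline, which is the sense in which Pi availability for Ca-P complex formation tracks the pH component of the proton motive force.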
NATURE OF MITOCHONDRIAL CALCIUM SALT COMPLEXES
As demonstrated in both isolated and in situ brain mitochondria, robust Ca 2+ accumulation induced by extramitochondrial Ca 2+ levels beyond the set point causes the formation of electron-dense granules within the matrix [78,79]. These electron-dense intra-mitochondrial Ca-P granules are amorphous and have both organic and inorganic constituents. Based on the method of granule isolation, the organic moiety accounts for about 16-60% of Ca-P granule content represented by nitrogen, protein, and sugar ribose, suggesting the presence of RNA [80]. Chemical analysis revealed that Ca 2+ and Pi are the major inorganic constituents of matrix Ca-P granules primarily corresponding to hydroxyapatite and whitelockite or a mixture of both as shown by the X-ray diffraction patterns of microincinerated granules (inducing the crystallization). Also, significant traces of MgO presumably derived from MgCO3 were also found [80]. Similar to precipitates analyzed in the context of biomineralization [81], the composition of mitochondrial precipitates seems to be complex both in structure and composition. Based on the Ca/P ratios ranging from 1.0 to 1.67, stoichiometric compounds of Ca and P reported in mitochondria include various forms of calcium orthophosphates [80,82,83] as shown in Table 1. Many of Ca-P complexes identified in the mitochondrial matrix are known to spontaneously interconvert based on the Ca/Pi ratios and energy availability [83,84]. The rate of mitochondrial Ca 2+ accumulation seems to be one of the majorfactors affecting the stoichiometry of calcium phosphate complexes, where faster Ca 2+ infusion rates promote higher Ca/P ratios (~1.5, Ca 3 (PO 4 ) 2 ) as shown in rat liver mitochondria [83]. Findings from electron microscopy and X-ray analysis of Ca 2+ -loaded mitochondria and the fact that Ca-P precipitates of crystalline nature are not observed in live cells reveal the indefinite amorphous nature of Ca-P complexes suggesting that crystallization is held in check within the mitochondrial matrix [30]. The amorphous nature of dense mitochondrial granules containing Ca-P was also confirmed with samples prepared by cryo-scanning transmission electron tomography, which overcomes the limitations associated with dehydrated or heavy-metal staining samples [85]. Further, the dissociation of Ca-P complexes upon mitochondrial depolarization and their release from respective transporters confirms the reversible nature of these granules [86]. Thus, allowing the gradual exit of calcium and Pi from mitochondria through their respective carriers once the cytoplasmic calcium storm subsides [22,34]. The indefinite amorphous nature of matrix Ca-P was attributed to endogenous mineralization inhibitors such as citrates and magnesium ions, ATP and ADP within the mitochondria matrix [31][32][33]. In addition to these endogenous inhibitors, polyphosphates (polyP, (P n O 3n+1 ) (n+2)− ) expressed by mitochondria can also inhibit the formation of insoluble Ca-P complexes or precipitates, thus regulating the levels of free Ca 2+ in the mitochondrial matrix. PolyP are negatively charged polyanions formed by the polymerization of many Pi molecules [87], which are known as potent inhibitors of Ca-P precipitation in vitro [88]. 
Accordingly, cells overexpressing mitochondrially targeting polyP hydrolyzing enzyme called polyphophatase (MitoPPX cells) have decreased levels of free matrix Ca 2+ despite similar loading of Ca 2+ uptake compared to wild type (independent of Ca 2+ efflux rates), suggesting the buffering of matrix Ca 2+ as Ca-P insoluble clusters [89,90]. This conclusion is supported by microscopic data where an increased accumulation of electron-dense granules was seen in MitoPPX cells compared to wild type cells under both basal and stimulated conditions [90].
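As a small illustration of the Ca/P ratios mentioned above, the stoichiometric ratio can be read directly from the formula of each candidate calcium orthophosphate. The compounds below are common examples spanning the 1.0-1.67 range and are not meant to reproduce Table 1 of the original paper.

```python
# Ca/P ratios computed from the stoichiometry of some calcium orthophosphates.
compounds = {
    "Brushite, CaHPO4.2H2O": (1, 1),              # (Ca atoms, P atoms) per formula unit
    "Tricalcium phosphate, Ca3(PO4)2": (3, 2),
    "Hydroxyapatite, Ca10(PO4)6(OH)2": (10, 6),
}
for name, (ca, p) in compounds.items():
    print(f"{name}: Ca/P = {ca / p:.2f}")
# Brushite 1.00, tricalcium phosphate 1.50, hydroxyapatite 1.67 -- spanning
# the range of ratios reported for mitochondrial Ca-P precipitates.
```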
INTRAMITOCHONDRIAL AGGREGATES
In a detailed study of experimentally induced mitochondrial calcification, both apatite-like crystalline, needle-shaped aggregates, and granular aggregates have been identified [91][92][93]. Consistent withelectron-dense granules of Ca 2+ overloaded mitochondria, intramitochondrial aggregates had both inorganic and organic components (glycoproteins, lipids). Interestingly, the type of intramitochondrial inorganic aggregates differed based on the tissue type examined in the study. Crystalline aggregates were restricted to apparently normal muscular and myocardial cells, and granular aggregates were mainly found in swollen mitochondria of degenerated hepatic cells. However, the relationship between the state of cells and the type of intramitochondrial aggregates is less evident in literature, which requires examination at the early stages of mitochondrial calcification. In general, consistent with the presence of crystallization inhibitors, crystalline aggregates are less commonly found in mitochondria, and they have been reported in both normal and damaged cells. Granular aggregates are mostly widely reported both in the context of mitochondria overloaded with Ca 2+ and in mitochondria of normal cells and cells at various stages of degeneration. In this study, although, morphologically both aggregate forms seem to be very closely associated with mitochondrial cristae there were some differences during the early stages of calcification. Crystalline aggregates were more closely situated near cristae membranes, unlike granular aggregates, which are close to cristae but lie more in the matrix. This association of crystalline aggregates with membranes is interesting considering the affinity of anionic phospholipids to Ca 2+ and their potential role as organic components aiding in the deposition of inorganic material and in the initiation of mineralization [94]. Further, no relationship was found between these two aggregate forms as only rarely granular and crystalline structures were found in the same aggregate, and mitochondria with one or two crystalline structures were found without any apparent granular aggregates [92]. Mitochondria filled with granular structures representing supersaturated ratios of Ca/Pi did not show any crystalline structures. Although results from this study suggest that intramitochondrial crystalline aggregates can form directly in the absence of granular intermediates, the process of mitochondrial calcification may be similar to bone calcification involving phases of nucleation and crystal growth, respectively [95][96][97]. According to classic nucleation theory, the major energy barrier for crystal growth is the formation of the critical nucleus (nucleation stage), which will support the growth and proliferation of crystals by adding more ions or nuclei clusters. Nucleation to occur de novo in the solution will require the respective ion concentrations to exceed their solubility properties (i.e., critical supersaturation). However, pre-nucleation clusters or surfaces that resemble crystal nucleus facilitate nucleation even at biological concentrations, thus overcoming the energy barrier of nucleation. For mitochondria, such quasi-stable pre-nucleation structures promoting the formation of apatite-like structures could be amorphous tricalcium phosphate or maybe even brushite [95,96,98,99]. 
Once this intermediate, obligatory step of forming insoluble Ca-P precipitates is achieved, HA crystals can form involving the poorly characterized complex process of crystal growth by adding more ions in the context of ongoing matrix Ca 2+ and Pi overload.
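For readers unfamiliar with the classical nucleation picture invoked above, the standard free-energy expression makes the "energy barrier" explicit. The relations below are the textbook homogeneous-nucleation result, quoted only as background; the actual pathway in the mitochondrial matrix, with amorphous precursors and pre-nucleation clusters, is more complex.

```latex
% Classical (homogeneous) nucleation of a spherical cluster of radius r:
\Delta G(r) = \frac{4}{3}\pi r^{3}\,\Delta G_{v} + 4\pi r^{2}\gamma ,
\qquad \Delta G_{v} < 0 \ \text{for a supersaturated solution.}
% Setting d\Delta G/dr = 0 gives the critical radius and barrier height:
r^{*} = -\frac{2\gamma}{\Delta G_{v}},
\qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta G_{v}^{2}} .
```

In this picture, pre-nucleation clusters or membrane surfaces act by lowering the effective interfacial term γ or by providing a template, which reduces ΔG* and allows nucleation at ion activities well below those required for de novo precipitation.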
EFFECT OF CALCIFICATION ON MITOCHONDRIAL FUNCTION
Since both crystalline and granular aggregates are closely associated with mitochondrial cristae, they could affect mitochondrial function, namely mitochondrial metabolism and ROS production. Matrix free Ca2+ overload induces mtROS generation; however, very little is known about how the formation of Ca-P precipitates affects mitochondrial function. An effect of Ca-P granules on mitochondrial respiration was demonstrated in a study in which the activity of complex I was inhibited, thus decreasing the rate of ATP synthesis. It was proposed that Ca-P precipitates could form physical barriers isolating complex I from its substrate, NADH [100]. However, it remains to be explored why complex I, but not other respiratory complexes, is inhibited by such Ca-P precipitation.
CELLULAR REACTION TO CALCIFIED MITOCHONDRIA
In experimentally induced calcification of rat myocardium, in which focal areas of calcification were restricted to mitochondria, severely calcified cells generated a cellular reaction [91]. In that study, cells of macrophagic type surrounded calcified areas and were seen to be engaged in active phagocytosis. Such a prompt inflammatory reaction seems to be important in preventing calcification from spreading to surrounding structures, since only myocardial cells, but not interstitial cells and collagen fibers, were involved in the calcification [91]. Neutrophils could be another phagocytic cell type involved in the cellular reaction to calcified cells. In inflamed muscle tissue of patients with juvenile dermatomyositis (JDM), we have demonstrated infiltrating neutrophils and macrophages adjacent to calcified tissue, involved in the engulfment of seemingly indigestible calcium crystals potentially of mitochondrial origin [101,102]. Since calcified mitochondria can potentially be harmful to cellular health, they could be extruded from the cell as a protective mechanism to prevent cellular damage. However, if phagocytes do not promptly clear extruded calcified mitochondria, this could result in ectopic calcification under a pro-calcifying environment and could additionally induce pathological crystal-mediated inflammation [101,102].
MITOCHONDRIAL CALCIFICATION IN HEALTH AND DISEASE
The role of mitochondrial granules in biological mineralization has been reported [103][104][105][106][107][108]. Incidentally, the early discovery of how cells load calcium into the matrix vesicles leading to chondrocyte growth plate calcification is based on the findings that significant amounts of accumulated mitochondrial calcium get transferred to MVs in the form of mitochondrial granules [8,14,15]. Similar proposition has been made for bone mineralization based on the temporal relationship between mitochondrial granule depletion and the mineralization front, suggesting that calcium and phosphate ions for bone mineralization are stored in mitochondrial granules [104,108]. But evidence directly linking intramitochondrial granules with vesicles participating in extracellular mineralization process has been missing. More recently, a direct evidence on the role of mitochondrial granules in extracellular mineralization has been demonstrated where calcium-containing vesicles were identified conjoining with calcium phosphate containing mitochondria, suggesting Ca-P granule storage and transport processes [106]. According to the proposed model, mitochondrial Ca-P granules are first transferred to intracellular vesicles possible by diffusion, which is not unusual for mitochondria given the evidence of vesicular transport between mitochondria and other cellular organelles [109,110]. These intracellular vesicles loaded with amorphous calcium phosphate are then transported to extracellular space propagating into apatite-like structures in extracellular matrix initiating mineralization [106].
Unlike metastatic calcification, which is caused by increased substrate availability, dystrophic calcification is secondary to altered membrane integrity due to trauma or inflammation and as such is observed at sites of tissue degeneration. Mitochondria could be the initial sites of intracellular calcification in both types of calcification, considering their robust Ca2+ uptake and storage abilities. In the case of metastatic calcification, elevated levels of extracellular calcium and phosphate ions could lead to increasing levels of these ions within the cell. Although some of these ions will be exported out of the cell via efflux mechanisms on the plasma membrane, over time the ions will accumulate in mitochondria, forming Ca-P complexes and thus initiating the process of intracellular metastatic calcification. In the case of dystrophic calcification, despite normal levels of calcium and phosphate ions in the circulation, increased plasma membrane permeability due to injury, inflammation, or hypoxia makes expulsion of ions from the cell ineffective, leading to their accumulation in mitochondria and initiating intramitochondrial mineral formation. Mechanistically, mitochondrial calcification can be a contributing factor to soft-tissue calcification of the dystrophic type, as observed in many pathological conditions including dermatomyositis, scleroderma, systemic lupus erythematosus, and mixed connective tissue diseases, in some of which mitochondria have been implicated [5,[60][61][62][63][64][65]. However, there is still a lack of definitive evidence of mitochondrial calcification in these disease conditions and of its role in disease. The observation that inflammation and the associated mitochondrial oxidative stress lead to pathological mitochondrial Ca2+ overload even at baseline cytosolic Ca2+ levels has important implications for diseases like juvenile dermatomyositis, in which dystrophic calcifications of muscle and skin are associated with chronic inflammation [111]. Incidentally, there is emerging evidence that mitochondrial calcification in skeletal muscle cells subsequent to inflammation is driven by excessive mtROS [101], warranting further studies on how various pathophysiological stimuli can cause dysregulated mitochondrial Ca2+ uptake and calcification.
To summarize, mitochondrial calcification is a physiological process that protects cells from calcium-induced cytotoxicity; however, when dysregulated it may contribute to disease and to calcification of tissues. Hence, understanding the mechanisms regulating mitochondrial calcification and its role in the accumulation of extracellular calcium deposits in tissue may allow the identification of novel therapeutic targets in several diseases, including dermatomyositis [5,102,112]. As detailed in the main text, physiological levels of mitochondrial Ca2+ result from highly regulated Ca2+ influx and efflux mechanisms, including buffering by the formation of calcium phosphate complexes. The formation of amorphous calcium phosphate complexes is promoted by the alkaline pH of the mitochondrial matrix and by undefined nucleation factors. The crystallization of calcium phosphate into hydroxyapatite is prevented by factors such as magnesium ions, ATP, ADP, citrate, and polyphosphates. However, under conditions of inflammation, hypoxia, and injury, an imbalance of calcium influx and efflux ensues, filling mitochondria with amorphous calcium phosphate complexes and crystalline hydroxyapatite granules. Details in the text. Figure concept adapted from [12]. | 2021-02-20T05:07:00.920Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "847ff3eb5a9e798f3f93e7b4a04968c17c329605",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20900/immunometab20210008",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "847ff3eb5a9e798f3f93e7b4a04968c17c329605",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
181410565 | pes2o/s2orc | v3-fos-license | Evaluation of nutritional value of Asystasia mysorensis and Sesamum angustifolia and their potential contribution to human health
Abstract Wild indigenous vegetables make considerable contributions to food baskets among subsistence farmers in sub-Saharan Africa. The aim of this study was to evaluate the proximate composition, mineral composition, vitamin C content, β-carotene content, and GC-MS profile of crude methanolic extracts of Asystasia mysorensis and Sesamum angustifolia. Crude extracts obtained through sequential extraction using ethyl acetate and methanol were screened for the presence of secondary metabolites. Functional groups present were determined with a Shimadzu FT-IR spectrophotometer, while β-carotene and ascorbic acid contents were evaluated using a Shimadzu UV-VIS spectrophotometer and a Shimadzu HPLC, respectively. Secondary metabolites present in the extracts were determined qualitatively using a Shimadzu GC-MS system equipped with a NIST spectral database. From the results obtained, the two plants could supply the recommended daily requirements of micronutrients and vitamin C needed for a healthy diet. The total phenolic and flavonoid contents in S. angustifolia were higher than in A. mysorensis; hence, consumption of these vegetables is highly beneficial, as some compounds identified in the GC-MS profiles have been reported to have medicinal properties. The findings on the mineral and chemical composition and GC-MS profiles of A. mysorensis and S. angustifolia indicate that their consumption may provide the recommended nutritional requirements needed for a healthy diet.
According to a survey carried out in 2007, fifty-one percent of Kenyans lack access to adequate food, and poverty, estimated at forty-six percent nationally, has been associated with food insecurity (Joshi, 2012; Muthoni & Nyamongo, 2010). Moreover, Kenya continues to face a challenge in food availability due to the high cost of farm inputs, inadequate rains, postelection violence, and the spread of livestock and plant diseases such as Rift Valley fever and armyworm infestations (Muthoni & Nyamongo, 2010; Oyas et al., 2018). Many communities in developing countries, such as Kenya, consume wild edible plants that have a much higher nutrient content than globally known varieties or species, but these plants are often underutilized (Durst & Bayasgalanbat, 2014). These indigenous green leafy vegetables are climate tolerant; hence, they are less damaging to the environment, address cultural needs, and assist in preserving the culture of local communities, and they have been reported to be good sources of macro- and micronutrients (Maseko et al., 2017; Uusiku, Oelofse, Duodu, Bester, & Faber, 2010).
However, there is still a high prevalence of malnutrition, especially micronutrient deficiencies, among the low- or marginal-income brackets of the population (Maseko et al., 2017). The use of indigenous vegetables has been proposed as part of the solution to the problem of micronutrient malnutrition among these populations (Durst & Bayasgalanbat, 2014). These vegetables have been reported to assist in managing hunger, influence the intake of cereal staples, and play a key role in household food security among poorer rural groups, as they are rich in micronutrients, vitamins, and secondary metabolites (Maseko et al., 2017; Muthoni & Nyamongo, 2010). They contain essential vitamins such as A, B, and C and essential mineral elements such as calcium and iron, as well as protein and calories, that can eliminate dietary deficiencies. Because of their medicinal value, people suffering from medical conditions such as high blood pressure, HIV/AIDS, cancer, and hypertension are often encouraged to consume them (Muhanji, Roothaert, Webo, & Mwangi, 2011). Moreover, dietary shifts from traditional low-fat, plant-based proteins toward high-fat, animal proteins have received considerable attention because of their contribution to the increased occurrence of chronic lifestyle diseases (Hung et al., 2004). These vegetables compare well with Swiss chard, cabbage, and spinach in terms of micronutrient levels, while dark green leafy vegetables (DGLV) have been reported to be a rich source of folate and linoleic acid (Van der Walt, Ibrahim, Bezuidenhout, & Loots, 2008). Nutritional data on wild varieties of traditional African green leafy vegetables are fragmentary and almost nonexistent for wild-growing Asystasia mysorensis and Sesamum angustifolia. The present study reports on the nutritional value, phytochemical screening, GC-MS profiles, vitamin C content, and β-carotene content of the two wild-growing indigenous vegetable species A. mysorensis and S. angustifolia.
| Extraction of plant material
Cold sequential extraction of A. mysorensis and S. angustifolia was carried out using ethyl acetate and methanol as the extracting solvents. One hundred grams of each plant powder was macerated in 1,000 ml of ethyl acetate, followed by methanol, at room temperature.
The extracts were filtered using Whatman No. 1 filter paper and concentrated using a Rota evaporator (BUCHI R 200;Labortechnik) set at 40°C. The crude extracts were then left in a fume chamber to dry, after which they were stored at 4°C until further analysis (Madivoli et al., 2018).
| Phytochemical screening of plant extracts
Standard established procedures for identifying metabolites were used to carry out phytochemical screening of the ethyl acetate and methanol extracts, as described by Harborne (1998). An aliquot of each plant extract was analyzed for the presence of saponins, alkaloids, terpenoids, phenols, and tannins (Ezeonu & Ejikeme, 2016; Madivoli et al., 2018).
| Characterization of crude extracts
The crude extracts were characterized using a Fourier transform infrared spectrophotometer, Shimadzu FTS-8000 (Shimadzu Corporation). KBr pellets of the extracts were prepared by mixing 10 mg of finely ground sample with 250 mg of KBr (FT-IR grade). The spectral resolution was set at 4 cm−1 and the scanning range from 400 to 4,000 cm−1 (Madivoli et al., 2018).
| Total phenolic content
The total phenolic content of the crude extracts was evaluated by the Folin-Ciocalteu method with some modifications (Thangaraj, 2016). 0.1 g of plant material was extracted with 4.9 ml of 80% methanol and filtered through Whatman No. 1 filter paper to make the stock sample. Fifty microliters of the stock sample was made up to 1 ml with distilled water, then 0.5 ml of 1 N Folin-Ciocalteu reagent was added and the mixture incubated for 5 min. 2.5 ml of 5% Na2CO3 was then added and the total volume made up to 4 ml using distilled water. The resultant solution was incubated at room temperature for 40 min.
The same quantities of the reagents were used to prepare the calibration standards. Distilled water and gallic acid (0-75 µg/ml) were used to produce the standard calibration curve, and the total phenolic content was expressed in mg of gallic acid equivalents (GAE)/g of dry weight extract (DW). Absorbance was measured at 769 nm using a Shimadzu 1800 UV-VIS spectrophotometer (Baba & Malik, 2015; Madivoli et al., 2018; Mburu et al., 2016; Thangaraj, 2016).
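A minimal sketch of how absorbance readings are converted into mg GAE/g DW from a gallic acid calibration curve is given below. The dilution volumes follow the procedure described above, but the standard absorbances, the fitted calibration, and the example reading are hypothetical placeholders rather than values from this study; the same arithmetic applies to the rutin calibration used for flavonoids in the next subsection.

```python
import numpy as np

# Hypothetical gallic acid standards (µg/ml) and their absorbances at 769 nm.
std_conc = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0])
std_abs  = np.array([0.02, 0.11, 0.21, 0.30, 0.41, 0.50])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear calibration fit

def total_phenolics_mg_gae_per_g(sample_abs, assay_volume_ml=4.0,
                                 aliquot_ml=0.05, stock_volume_ml=4.9,
                                 sample_mass_g=0.1):
    """Convert a sample absorbance to mg gallic acid equivalents per g dry weight."""
    conc_ug_per_ml = (sample_abs - intercept) / slope    # concentration in the assay
    mass_ug_in_assay = conc_ug_per_ml * assay_volume_ml  # µg GAE in the reaction tube
    # Scale from the 50 µl aliquot back to the whole methanolic stock extract
    mass_ug_in_stock = mass_ug_in_assay * (stock_volume_ml / aliquot_ml)
    return (mass_ug_in_stock / 1000.0) / sample_mass_g   # mg GAE per g DW

print(f"Example: A = 0.25 -> {total_phenolics_mg_gae_per_g(0.25):.1f} mg GAE/g DW")
```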
| Total flavonoid content
The aluminum chloride method was used to determine the total flavonoid content as described in the literature (Baba & Malik, 2015; Mburu et al., 2016; Thangaraj, 2016). 0.1 g of plant material was extracted with 4.9 ml of 80% methanol and filtered with Whatman No. 1 filter paper to make the stock sample. One hundred microliters of the stock sample was made up to 1 ml with distilled water. To this solution, 150 µl of 5% sodium nitrite (NaNO2) was added, and the contents were vortexed and incubated for 5 min at room temperature. One hundred fifty microliters of 10% aluminum trichloride (AlCl3) was then added, and the solution was vortexed and incubated for 6 min at room temperature.
2.0 ml of sodium hydroxide was then added, and the solution was made to 5.0 ml and incubated at room temperature for 15 min. The same quantity of the reagents was used to prepare the calibration standards. Both distilled water and rutin (0-75 µg/ml) were used to generate the standard calibration curve, and the total flavonoid content was expressed as mg of rutin equivalents (RE)/g of dry weight (DW) extract. Absorption readings were carried out at 511 nm using Shimadzu 1800 UV-VIS spectrophotometer (Madivoli et al., 2018;Mburu et al., 2016;Thangaraj, 2016).
| Estimation of macro-and micronutrient content
The concentrations of micronutrients, macronutrients, and toxic elements in A. mysorensis and S. angustifolia were evaluated using an Agilent 7900 ICP-MS (Agilent) after acid digestion. One gram of dried material was digested with 12 ml of HCl:HNO3 (1:3) to remove all organic matter from the plant samples. After digestion, the residue was washed with distilled water, filtered into a 50-ml volumetric flask, and topped up to the mark to await further analysis using an FAAS (Kumari, Parida, Rangani, & Panda, 2017; Thangaraj, 2016; Uddin et al., 2016; Yami, Chandravanshi, Wondimu, & Abuye, 2016).
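The back-calculation from an instrument reading on the digest to an element concentration in the dried plant material is a simple dilution-factor step. The sketch below assumes the 1 g sample and 50 ml final digest volume described above; the instrument reading used in the example is hypothetical.

```python
# Convert an instrument reading on the digest (mg/L) to mg/kg dry weight,
# assuming 1.0 g of dried sample made up to a 50 ml final digest volume.
def digest_to_mg_per_kg(reading_mg_per_L, digest_volume_L=0.050, sample_mass_kg=0.001):
    return reading_mg_per_L * digest_volume_L / sample_mass_kg

# Hypothetical example: a reading of 2.4 mg/L Fe in the digest
print(f"{digest_to_mg_per_kg(2.4):.0f} mg/kg Fe in the dried vegetable")  # 120 mg/kg
```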
| Estimation of β-carotene content
Approximately 2 g of plant samples was extracted using 50 ml of acetone twice followed by concentrating to 1 ml using a Bibby Sterling RE 100B UK rotary evaporator. The extracts were then eluted through a chromatographic column which was packed with silica gel to elute the beta-carotene as a yellow pigment which was collected in a 25-ml flask. Five solutions of standard beta-carotene whose concentration range was 0.5-2.5 µg/ml were then prepared from a stock solution containing 2.5 µg/ml β-carotene. The concentration of beta-carotene in the plant samples was then estimated using a Shimadzu 1800 UV-VIS spectrophotometer (Shimadzu Corporation) set at 440 nm (Fungo et al., 2017;Kumari et al., 2017).
| Estimation of ascorbic acid content
The vitamin C content of the plant samples was determined using high-performance liquid chromatography (HPLC) (LC 6 A, Shimadzu) using a C18 (ODS) column (50 mm i.d × 30 cm) equipped with a UV detector set at 266 nm. The extract was obtained after 2 g of each sample was extracted with 0.8% metaphosphoric acid, followed by centrifugation at 10397.4 g, filtering through 0.45-µM filter and diluted with 10 ml of 0.8% metaphosphoric acid. A calibration curve was obtained by preparing a series of standard solutions comprising of ascorbic acid at different concentrations which were used to estimate the vitamin C content of the plant samples (Rangani et al., 2019;Vikram, Ramesh, & Prapulla, 2005).
| GC-MS analysis
GC-MS analysis of the crude hexane and methanol extracts was carried out using a Shimadzu GC-MS QP2010SE (Shimadzu Corporation) operating in EI mode at 70 eV and equipped with a NIST spectral database, which was used for the qualitative identification of compounds present in the extracts. A BPX5 capillary column (30 m × 0.25 mm i.d.) was used, with helium as the carrier gas at a flow rate of 1.2 ml/min, while the oven temperature and the mass range were set at 60°C and 40-400 m/z, respectively (Madivoli et al., 2018).
| Qualitative phytochemical screening
The phytochemicals tested include saponins, alkaloids, tannins, glycosides, and flavonoids and the results are presented in Table 1.
Important phytochemicals such as saponins, steroids, flavonoids, phenolic compounds, and tannins were found to be present in the A. mysorensis and S. angustifolia crude extracts. Thus, phytochemical screening serves as the initial step in predicting the types of potentially active compounds present in plant samples (Cheruiyot et al., 2015). It has been reported that alkaloids tend to intercalate DNA, thereby inhibiting cell division, while phenolics and polyphenols such as flavonoids, quinones, tannins, and coumarins all exert remarkable antifungal activity (Gupta & Birdi, 2017).
| Proximate analysis
The analysis of proximate composition of A. mysorensis and S. angustifolia is depicted in Table 2.
From the results obtained in this study, S. angustifolia had a higher moisture content, ash content, total carbohydrate, and total proteins as compared to A. mysorensis which had a higher crude fat content. The difference between the contents of the two plants may be as a result of different growth conditions, genetic variation, stage of maturity, or due to differences in postharvest handling (Fungo et al., 2017). The results obtained indicate that dried A. mysorensis and S. angustifolia plants are a good source of dietary fibers, minerals (ash), protein, and energy, but not a good source of edible fat given the fact that drying lowers the proximate composition of vegetables as reported elsewhere (Hassan et al., 2007).
| Total phenolic and total flavonoid content
The total phenolic content and total flavonoid content of the A. mysorensis and S. angustifolia plant extracts are depicted in Figure 1 as gallic acid equivalents per dry weight (mg GAE/g DW) and rutin equivalents per dry weight (mg RE/g DW), respectively.
Phenols are secondary metabolites produced by plants as a defense mechanism to protect themselves against parasitic organisms (Tamokou, Mbaveng, & Kuete, 2017). From the results obtained in this study, S. angustifolia had the higher total phenolic content, 59.93 ± 0.05 mg GAE/g DW, while A. mysorensis had a total phenolic content of 26.5 ± 1.67 mg GAE/g DW. Apart from supplying the required nutrients, green leafy vegetables may provide a host of other components, such as non-nutrient phytochemicals, that can have a positive impact on human health. Dietary polyphenols have been associated with lowered risks of chronic lifestyle diseases such as cancer and cardiovascular disease because of their ability to neutralize free radicals through electron or hydrogen atom donation (Quinones, Miguel, & Aleixandre, 2013; Tsao, 2010; Zhou et al., 2016). Regular intake of dietary green leafy vegetables with a high concentration of phenolic compounds has been reported to reduce the risk of lifestyle-related diseases (Moyo et al., 2018). Generally, plant extracts that contain a high concentration of polyphenols have been reported to exhibit high antioxidant activity (Uusiku et al., 2010). Flavonoids such as myricetin, quercetin, kaempferol, isorhamnetin, and luteolin have been reported to be present in leafy vegetables (Uusiku et al., 2010). The quantitative total flavonoid content of the A. mysorensis and S. angustifolia plant extracts is depicted in Figure 1. From the results obtained in this study, S. angustifolia had the higher total flavonoid content, 9.22 ± 0.06 mg RE/g DW, as compared to A. mysorensis.
| FT-IR characterization
Crude extracts of both A. mysorensis and S. angustifolia were assayed to determine the functional groups present and the results are depicted in Figure 2.
From the spectra obtained, the crude extracts revealed the presence of hydroxyl groups characteristic of alcohols at around 3,300 cm−1, a CH2 functional group at 2,900 cm−1, and C-O-C functional groups at 1,100 cm−1. The presence of these functional groups is an indication of the presence of secondary metabolites such as glycosides, tannins, flavonoids, phenols, and saponins (Sasidharan, Chen, Saravanan, Sundram, & Latha, 2011).
| Micro-and macronutrient content
The micronutrients, macronutrients, and toxic elements present in the samples were determined using an Agilent ICP-MS, and the results are depicted in Table 3.
The micro- and macronutrient contents of green leafy vegetables vary widely and are influenced by several factors such as stage of maturity and postharvest handling (Moyo et al., 2018). In the present study, the vegetables were obtained from the wild in a fresh state at the sampling point to determine the micronutrient concentrations available to consumers. In comparison with the estimated FAO/WHO recommended daily intakes, the mineral content in the two vegetables was either within the range of, or higher than, the mineral content reported in other indigenous vegetables. Cobalt forms part of vitamin B12 (cobalamin), which supports the production of red blood cells and the formation of the myelin nerve coverings; no specific recommended daily allowance has been suggested for cobalt, since dietary needs are very low and are fulfilled by vitamin B12. Considerably high iron content has also been reported for some wild, traditional leafy green vegetables (Schonfeldt & Pretorius, 2011). Iron plays a crucial role as an oxygen carrier from the lungs to body tissues, as a transport medium for electrons within cells, and as an integral part of important enzyme systems such as the cytochromes (Moyo et al., 2018). The mineral content found in African leafy vegetables has been reported to exceed the levels found in exotic vegetables such as cabbage (Maseko et al., 2017; Rahmdel et al., 2018). Analysis of toxic elements such as lead and mercury also revealed that the two plants contained appreciable amounts of these elements, which may result from absorption from the environment. The metal concentrations of the vegetables in this study were significantly different and could be attributed to differences in their morphology and physiology for heavy metal uptake, exclusion, accumulation, and retention (Rahmdel et al., 2018).
| Vitamin C content
The vitamin C contents of A. mysorensis and S. angustifolia (mg/g) were determined using a Shimadzu HPLC system. S. angustifolia recorded the higher vitamin C content of 92.42 mg/g, while A. mysorensis had the lower value of 42.04 mg/g. Both vegetables recorded vitamin C contents higher than the recommended daily intake of 40-70 mg/100 g, provided they are consumed in sufficient quantity.
Analysis of the vitamin C content of the two indigenous vegetables against the RDA revealed that a portion of the vegetables can supply more than the recommended minimum daily requirements of 75 mg/day for males and 60 mg/day for females between the ages of 19 and 70 years. The two vegetables can be a substitute for conventional vegetables such as spinach, which are widely consumed to meet the RDA. Moreover, vitamin C is a potent antioxidant which plays an important role as an electron donor for enzymes involved in collagen hydroxylation, tyrosine metabolism, and carnitine biosynthesis (Prockop & Kivirikko, 1995). The high vitamin C content of A. mysorensis and S. angustifolia makes the two vegetables well suited to consumption with starchy staples because they contain ascorbic acid (Maseko et al., 2017). Ascorbic acid promotes absorption of soluble nonheme iron through chelation or by maintaining the iron in its reduced form. In addition, it significantly counteracts the inhibition of iron absorption by phytates in the diet. Besides its ability to scavenge free radicals, ascorbic acid also plays a part in regenerating other antioxidants, such as tocopherol from the tocopheroxyl radical and carotene from its radical cation (Uusiku et al., 2010).
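Taking the reported mg/g values at face value, the portion mass needed to meet the quoted adult requirements follows from simple division. The quick sketch below uses the RDA figures quoted in the text and makes no allowance for cooking losses or moisture.

```python
# Vitamin C contents reported above (mg per g of sample).
content_mg_per_g = {"S. angustifolia": 92.42, "A. mysorensis": 42.04}

# Recommended daily intakes quoted in the text (mg/day).
rda = {"male (19-70 y)": 75, "female (19-70 y)": 60}

for veg, c in content_mg_per_g.items():
    for group, need in rda.items():
        grams_needed = need / c  # grams of sample supplying the full daily requirement
        print(f"{veg}: ~{grams_needed:.1f} g meets the {need} mg/day RDA for {group}")
```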
| Beta-carotene content
Beta-carotene content was determined colorimetrically after extraction with acetone and separation by column chromatography. Beta-carotene content was evaluated by taking absorbance readings at 470 nm against a blank sample. The experimental results showed that the amounts of β-carotene in S. angustifolia and A. mysorensis were 1.72 ± 0.00 mg/g and 1.12 ± 0.00 mg/g, respectively. Overall, A. mysorensis and S. angustifolia have a much higher β-carotene content than other globally known species or varieties commonly produced and consumed as green leafy vegetables. Vitamin A is an essential nutrient in humans which plays a vital role in the functioning of the visual system, the maintenance of cell function for growth and epithelial cellular integrity, and the production of red blood cells (World Health Organization, 2009).
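Absorbance readings at 470 nm are usually converted to a β-carotene concentration through the Beer-Lambert law with a specific absorbance value. The sketch below assumes a 1 cm path length and the literature A(1%, 1 cm) of 2,592 for β-carotene in petroleum ether; both are assumptions rather than details given in the paper, and the sample numbers are illustrative.

```python
def beta_carotene_ug_per_g(absorbance_470, extract_volume_ml, sample_weight_g,
                           a1pct_1cm=2592.0):
    """Estimate beta-carotene (ug per g of sample) from absorbance at 470 nm.

    a1pct_1cm is the absorbance of a 1 g/100 ml solution in a 1 cm cell; the
    default 2592 is a literature value for beta-carotene and is an assumption
    here, not a value reported in this study.
    """
    # A 1 g/100 ml solution corresponds to 10,000 ug/ml, so:
    conc_ug_per_ml = (absorbance_470 / a1pct_1cm) * 1e4
    total_ug = conc_ug_per_ml * extract_volume_ml
    return total_ug / sample_weight_g

# Illustrative numbers only: A470 = 0.45, 25 ml acetone extract, 2 g of sample.
print(round(beta_carotene_ug_per_g(0.45, 25.0, 2.0), 1), "ug/g")
```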
| GC-MS profile
The GC-MS chromatograms of the compounds eluted from the methanolic extracts, together with their chemical structures and retention times, are presented in Tables 4 and 5 (Appendices S1 and S2), respectively.
Plant-derived therapeutic compounds belong to various classes of secondary metabolites with a wide range of activities; the profile depends on the species, the climate of the country of origin, and the topography, and may comprise different categories of active components (Tamokou et al., 2017). One of the identified compounds has been reported to be of use in conditions such as brain tumor, pseudotumor cerebri, and space-occupying lesions; it is also effective in lowering intraocular pressure in glaucoma and in shrinking the brain during neurosurgical procedures (Frank, Nahata, & Hilty, 1981; Ghosh et al., 2015). The compound 3-deoxy-d-mannoic lactone, with a peak area of 10.155%, has previously been reported to have antibacterial activity (Ghosh et al., 2015; Shobana, Vidhya, & Ramya, 2009). Such phytoconstituents have also been associated with a reduced risk of lifestyle-related diseases (Hung et al., 2004; Uusiku et al., 2010).
| CONCLUSION
The compounds identified from the methanolic extracts of A. mysorensis and S. angustifolia have been reported to possess biological activity, and further isolation of these phytoconstituents may establish their medicinal importance in the future. The use of wild edible green leafy vegetables such as A. mysorensis and S. angustifolia should be encouraged by highlighting their importance in the areas where they are currently produced and consumed (Baldermann et al., 2016).
ACKNOWLEDGMENTS
The authors take this opportunity to acknowledge the National Research Fund, the AFRICA-ai-JAPAN Project JFY 2018/2019, and Jomo Kenyatta University of Agriculture and Technology for their support in accessing facilities.
CONFLICT OF INTEREST
The authors declare that they do not have any conflict of interest.
ETHICAL STATEMENT
This study does not involve any human or animal testing. | 2019-06-07T21:33:45.286Z | 2019-05-15T00:00:00.000 | {
"year": 2019,
"sha1": "19be57546abfa8b068b31aa3790561ae1dc71d1b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.1064",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19be57546abfa8b068b31aa3790561ae1dc71d1b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
54639557 | pes2o/s2orc | v3-fos-license | Recent status of A Positron-Electron Experiment (APEX)
A project is underway to generate an electron-positron plasma by using the NEPOMUC positron source at the FRM-II facility combined with a multicell-type Penning trap (PAX) and a superconducting dipole magnetic field trap (APEX). In the APEX project, proof-of-principle experiments are proposed for the development of efficient positron injection methods by using a small dipole magnetic field trap with a permanent magnet. Plans for the APEX project and its recent status are reported.
Introduction
There has been growing interest in the formation of an electron-positron plasma in the laboratory [1]. Conventional plasmas are characterized by large mass differences between electrons and ions. Many plasma phenomena, such as wave propagation and stability properties, are strongly related to this mass asymmetry and to differences in the mobility of electrons and ions. Unlike conventional plasmas, the electron-positron plasma is in the class of pair plasmas, plasmas consisting of equal-mass particles. It is theoretically predicted that pair plasmas exhibit unique physical properties. According to theoretical studies and recent observations in γ-ray astronomy, the formation of large amounts of electrons and positrons is predicted in pulsar magnetospheres and active galactic nuclei. Thus, experimental understanding of the electron-positron plasma is also important for astrophysics.
Although theoretical studies have been conducted intensively, there are very few experiments to generate an electron-positron plasma. This is mainly because of (1) the difficulty in the simultaneous confinement of electrons and positrons as plasmas, and (2) the availability of strong positron sources.
(1) Concerning the confinement configurations, stable confinement of non-neutral pure electron plasmas has been demonstrated in a toroidal stellarator [2] and in a dipole magnetic field trap [3]. In principle, one can confine plasmas at any degree of non-neutrality in these toroidal geometries. Based on these experiments, we will construct A Positron-Electron Experiment (APEX) for the confinement of the electron-positron plasma. (2) The positron source NEPOMUC [4] at the FRM-II facility is the brightest DC moderated source in the world, with a rate on the order of 10^9 positrons s^-1. Moreover, the development of a Positron Accumulation Experiment (PAX) is underway for the accumulation and fast extraction of a large amount of cold positrons. PAX consists of a multicell-type Penning trap and is designed to trap on the order of 10^11 positrons and extract them in a few milliseconds.
We plan to generate an electron-positron plasma in combination with the NEPOMUC, PAX, and APEX facilities [1]. In this report, we focus on the APEX project and present its plans and recent status. While excellent confinement properties are expected in the closed toroidal geometry, it is not straightforward to transport positrons from the source to the trap region. For this purpose, two methods have been proposed [1]. One is to use external electric fields and the other is to use positronium as intermediate particles, which are generated on single crystal surfaces [5] hit by the DC positron beam. Prior to the pair plasma experiment to be conducted in a superconducting dipole field trap, we plan to conduct proof-of-principle experiments to test the injection methods by using a small dipole field trap with a permanent magnet. The required parameters for the pair plasma formation and plans for the small trap experiment are described in the following sections.
Target parameters for pair plasma production in APEX
In order to observe collective plasma phenomena of a charged particle cloud, the scale length a of the cloud must be larger (preferably ≳10 times) than the Debye length λ_D = √(ε₀ k_B T / (n e²)), a typical length of electrostatic shielding [1]. The key techniques are (1) efficient injection of a large amount of low-temperature positrons into a confinement region of small volume and (2) excellent confinement properties during the injection, confinement, and mixing phases. Because the condition a ~ 10 λ_D is achieved when the temperature T = 1 eV and the number density n = 10^12 m^-3 for a realistic scale of a ~ 10 cm, we set these parameters as a target (Fig. 1 (a)). By using the DC positron beam, the required confinement time of the trap is ~10 s when the confinement region volume is V ~ 10^-2 m^3. Although the maximum confinement time of a toroidal pure electron plasma exceeds this value, it is difficult to realize such a long confinement by using the relatively weak DC positron beam, clearly showing the importance of the PAX development. A total efficiency (after transport, cooling, and mixing with electrons) above 10% is needed for the injection of 10^11 positrons from PAX into APEX.
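These targets can be cross-checked with a couple of lines of arithmetic: the Debye length at the quoted temperature and density, and the time needed to accumulate the required number of positrons from the DC beam. This is only a sketch using the values stated above (T = 1 eV, n = 10^12 m^-3, a ≈ 10 cm, V ≈ 10^-2 m^3, beam rate ~10^9 s^-1), ignoring losses and injection efficiency.

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
e = 1.602e-19      # elementary charge, C
kT = 1.0 * e       # temperature of 1 eV expressed in joules
n = 1e12           # target number density, m^-3

# Debye length: lambda_D = sqrt(eps0 * kT / (n * e^2))
lambda_D = math.sqrt(eps0 * kT / (n * e**2))
a = 0.10           # plasma scale length, m
print(f"Debye length = {lambda_D * 100:.2f} cm, a / lambda_D = {a / lambda_D:.1f}")

# Positrons needed to fill V ~ 1e-2 m^3 at this density, and the time to
# accumulate them from a ~1e9 /s DC beam.
V, beam_rate = 1e-2, 1e9
N_needed = n * V
print(f"N ~ {N_needed:.0e} positrons, fill time ~ {N_needed / beam_rate:.0f} s")
```

The ratio a/λ_D comes out slightly above 10 and the fill time at about 10 s, consistent with the requirements stated above.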
As an antimatter plasma, the effects of (1) annihilation with neutral particles, (2) pair annihilation with electrons, and (3) positronium formation processes should be considered, as well as the confinement properties of the trap system. We estimate the lifetimes of positrons set by these effects according to Ref. [6] for the above target parameters. (1) As shown in Fig. 1 (b) with solid lines, positron annihilation on neutral gas is negligible in clean UHV environments; for nitrogen gas, τ = 2×10^4 s at a pressure of 10^-6 Pa, which is routinely obtained with a standard vacuum system. (2) When positrons are mixed with electrons, the dominant two-photon annihilation time is τ[s] ~ 10^20 / n_e[m^-3], and the annihilation effects are again negligible for low-density plasmas, as plotted with a chain line in the figure. (3) Lifetimes set by the three-body recombination process are plotted with dotted lines for different electron and positron temperatures. It is likely that this effect will not be a problem, but it may cause significant loss of plasmas in very cold (<0.01 eV), high-density (>10^14 m^-3) cases. In addition to these effects, it is possible that instabilities and enhanced turbulent transport emerge due to two-fluid effects, which should be investigated when positrons are mixed with electrons in future experiments.
Figure 2. Schematic drawing of a proof-of-principle experiment, including the supported neodymium magnet, E×B plates for vertical injection, rotating wall for tangential injection, and diagnostics.
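The lifetime scalings just quoted translate directly into numbers at the target density; this is a minimal sketch using only the expressions given in the text.

```python
# Two-photon annihilation with electrons: tau[s] ~ 1e20 / n_e[m^-3] (scaling from the text).
n_e = 1e12                 # target electron density, m^-3
tau_pair = 1e20 / n_e      # ~1e8 s at the target density
print(f"Pair-annihilation lifetime at n_e = 1e12 m^-3: ~{tau_pair:.0e} s")

# Quoted annihilation time on nitrogen gas at 1e-6 Pa, compared with the ~10 s
# confinement requirement: both loss channels are negligible by many orders of magnitude.
tau_neutral = 2e4
print("Margins over the 10 s requirement:", tau_pair / 10, "and", tau_neutral / 10)
```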
Proposed proof-of-principle experiments in a small dipole field trap
As a first-step experiment, we will develop appropriate injection schemes of positrons (1) by using the effects of external electric fields and (2) by using the positronium re-emission process on solid materials in this proof-of-principle experiment [1,5]. These experiments will be conducted in the small dipole magnetic field trap illustrated in Fig. 2. The dipole field is generated by a mechanically supported neodymium magnet, and the typical field strength in the confinement region is 0.05 T. In order to inject positrons from the guiding field of the beam line into the confinement region, positrons must be transported across closed field lines. As injection methods using external electric fields, we plan to test two procedures. The first is a vertical injection scheme using the E×B drift motion induced by a local crossed electric field. As shown in Fig. 2, the positron beam is vertically guided to the peripheral region of the dipole field, where a local electric field is applied in the perpendicular direction (Fig. 3). The second injection scheme using external electric fields is to use a rotating wall (a technique to generate field asymmetry by using segmented electrodes [7]) with tangential positron injection. A charged particle in a poloidal dipole field undergoes a toroidal rotation due to the grad-B and curvature drifts. The typical rotation frequency for a 10 eV positron in the present configuration is on the order of MHz. By applying an azimuthal electric field with segmented electrodes (Fig. 4 (a)) in the dipole field, effective radial transport is induced. Figure 4 (b) shows typical orbits of a positron. When the rotating wall frequency is synchronized with the rotation frequency of the positron, positrons are effectively transported into the confinement region. After being transported inward, positrons are expected to relax into an equilibrium state in the dipole field. For diagnostics of the injected number of positrons, the magnet is finally biased negatively so that trapped positrons are dumped onto the magnet surface; the γ rays from annihilation are counted by a scintillator detector with a pulse-height analysis system. As a second positron injection method [1], positrons are guided to solid-state materials and converted into positronium atoms [5]. The neutral positronium atoms are freely transported into the confinement region, where they are photo-ionized to generate an equal amount of electrons and positrons. We will study the positronium formation ratio by injecting the positron beam from NEPOMUC and assess the feasibility of electron-positron plasma formation. As well as lifetime measurements, coincident Doppler-broadening spectroscopy will be applied for the measurements [8].
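For the vertical injection scheme, the controlling quantity is the E×B drift speed v = E/B across the dipole field. The sketch below uses the 0.05 T field quoted above but assumes a plate voltage and gap, which are illustrative values rather than the actual electrode design.

```python
B = 0.05              # T, typical field in the confinement region (from the text)
voltage = 100.0       # V, assumed potential across the ExB plates (illustrative)
gap = 0.02            # m, assumed plate separation (illustrative)

E = voltage / gap     # crossed electric field, V/m
v_drift = E / B       # ExB drift speed, independent of particle charge and mass
print(f"E = {E:.0f} V/m -> ExB drift speed ~ {v_drift:.1e} m/s")

# Time to cross a ~5 cm injection region at this drift speed.
print(f"Transit over 5 cm: ~{0.05 / v_drift * 1e6:.1f} microseconds")
```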
Summary and outlook
Aiming for the electron-positron plasma experiment, we have started the APEX project and plan to develop injection methods of positrons by using a small-scale dipole magnetic field trap with a permanent magnet. Based on these proof-of-principle experiments to investigate the injection efficiency of positrons scheduled to be carried out in 2014, we plan to construct a superconducting levitated dipole field trap and simultaneously confine positrons and electrons as a future experiment. | 2018-12-06T21:33:50.022Z | 2014-04-28T00:00:00.000 | {
"year": 2014,
"sha1": "48f6ca690a3ebd75dd2863666689ca637c616e07",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/505/1/012045",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9b883a1a5721682c321b48907602806df98acdc0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233702082 | pes2o/s2orc | v3-fos-license | Cell size, genome size, and maximum growth rate are near‐independent dimensions of ecological variation across bacteria and archaea
Abstract Among bacteria and archaea, maximum relative growth rate, cell diameter, and genome size are widely regarded as important influences on ecological strategy. Via the most extensive data compilation so far for these traits across all clades and habitats, we ask whether they are correlated and if so how. Overall, we found little correlation among them, indicating they should be considered as independent dimensions of ecological variation. Nor was correlation evident within particular habitat types. A weak nonlinearity (6% of variance) was found whereby high maximum growth rates (temperature‐adjusted) tended to occur in the midrange of cell diameters. Species identified in the literature as oligotrophs or copiotrophs were clearly separated on the dimension of maximum growth rate, but not on the dimensions of genome size or cell diameter.
| INTRODUCTION
To the extent ecological strategies of species can be captured via measurable traits, this makes comparisons possible at global scale.
For vascular plants on land, major dimensions of strategy variation have been described through traits (e.g., Díaz et al., 2016), and responses to competition have been generalized across different vegetation types through traits (e.g., Kunstler et al., 2016). The possibility of a trait-based ecology for bacteria has been advocated by several research groups (Fierer et al., 2007, 2014; Hall et al., 2018; Ho et al., 2017; Krause et al., 2014; Litchman et al., 2015; Litchman & Klausmeier, 2008; Malik et al., 2020; Wood et al., 2018), but up to the present has taken the form of discussing concepts or interpreting particular study situations. Based on synthesis of quantitative and phenotypic trait data across bacteria and archaea as a whole (Madin et al., 2020), we here assess correlation patterns among some major traits and consider what they imply for ecological strategies. By "as a whole," we mean spanning all clades and habitats, but excluding species that have not been brought into culture. For species known only from metagenomic assembly, cell sizes and maximum growth rates are not known; hence, they are not included. It is possible that trait correlation patterns among not-yet-cultured species may prove different, but since phenotypic trait data are not yet available for them, that question cannot be addressed at present. For the main-text narrative, we have excluded also mycoplasmas and other taxa specialized to make their living inside eukaryote cells. Versions including these taxa are shown in Appendix Figures A1-A4.
In the present paper, we focus on cell diameter, genome size, and maximum growth rate. These traits are widely thought to have important roles in ecological strategy among bacteria and archaea (reviewed briefly below), and they are available across a reasonably wide range of species. Major habitat groups are considered as a potential influence. Relationships to aerobic versus anaerobic metabolism are discussed elsewhere (Nielsen et al., 2021).
The question addressed here is how these three quantitative traits correlate with each other across species. Consider the following two ends of a spectrum of possibilities. At one end, the three traits might vary independently, meaning that at any given level for one, a wide range of values for the others can be found.
This might be expected on the basis that each is capable of evolving independently of the others. At the other end of a spectrum of possibilities, all these traits might be coordinated with the oligotrophy-copiotrophy spectrum generally regarded as important in bacterial ecology. If oligotrophy favors small cells, small genomes and slow maximum growth rate, and if the oligotrophycopiotrophy spectrum is a major influence on variation across species, then we would expect all these traits to be distinctly correlated across species. Further if such a correlation were present, then a subsidiary question would be whether it was clearly evident within habitats, or whether it might take the form mainly of differences between relatively oligotrophic habitats such as pelagic water versus relatively copiotrophic habitats such as waste water.
We first summarize briefly what is known about each of the three quantitative traits, then turn to their relationships to copiotrophy and oligotrophy.
| Cell size
Recorded mean cell radial diameter varies about one order of magnitude across species, running mostly between about 0.2 and 3 μm.
Cell volume varies more widely, being the cube of a linear dimension and also due to the diversity of cell morphologies. Here, we adopt radial diameter as our main descriptor of size. It captures surface area to volume relations effectively both for spheroidal cocci and for rod-shaped bacilli, the two most common shapes.
Potential diffusion of substrate toward and into the cell, per unit cell volume, increases steeply as cells become smaller, scaling as the −2 power of radius (Fenchel et al., 2012; Fiksen et al., 2013; Jumars, 1993; Madsen, 2008). This means that smaller cells can sustain a given consumption rate per cell volume from lower ambient substrate concentrations. It has been seen as a reason why small cells should be favored in oligotrophic settings (e.g., Madsen, 2008; Schulz & Jorgensen, 2001).
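The −2 scaling follows from dividing the diffusion-limited supply to a perfectly absorbing sphere (4πDrC) by the cell volume. The short sketch below works that out for three radii spanning the range quoted above, using an illustrative diffusivity and substrate concentration that are assumptions, not measurements.

```python
def uptake_per_volume(radius_m, D=1e-9, C=1e-3):
    """Diffusion-limited uptake per unit cell volume for a spherical absorber.

    Maximum diffusive supply to a perfectly absorbing sphere is 4*pi*D*r*C;
    dividing by the cell volume (4/3)*pi*r**3 gives 3*D*C/r**2, so supply per
    volume scales as r^-2. D (m^2/s) and C (mol/m^3) are illustrative values.
    """
    return 3.0 * D * C / radius_m**2

for r_um in (0.2, 1.0, 3.0):
    q = uptake_per_volume(r_um * 1e-6)
    print(f"r = {r_um} um : uptake per volume ~ {q:.2e} mol m^-3 s^-1")
```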
Lower limits to cell diameter are thought to be set by costs of cell wall and membrane construction becoming larger at the expense of investment in synthetic and metabolic machinery. For example, a calculation by Raven (1994) suggested that boundary membranes reach more than 30% of cell dry mass by the time a spherical cell becomes as small as 0.5 μm radius.
Cell sizes are known to adjust plastically within cell lineages in response to substrate supply (Lever et al., 2015), with volumes decreasing up to 10-fold after 28 days of starvation conditions compared with growth conditions. Available cell size measurements have nearly all been made under laboratory growth conditions. Measurements can be considered standardized in this respect, and should capture differences across species, though not necessarily reflecting actual field cell sizes.
| Genome size
Variation in genome size across bacteria and archaea reflects mainly the number of different coding genes, rather than noncoding sequence or genes found in multiple copies (Konstantinidis & Tiedje, 2004; and this was true in our dataset also, Figure A1).
Genome size can therefore be thought of as capturing ecological strategy variation along a versatility dimension (Guieysse & Wuertz, 2012). It is expected to reflect the range of different resources that can be transported or metabolized, together with flexibility in responses to different circumstances. Consistent with this interpretation, genome size is correlated with the proportion of the genome occupied with receiving internal and external signals and using those to modify gene expression, and also with aerobic metabolism and with sporulation (Nielsen et al., 2021).
Much discussion has focused on genome reduction (Giovannoni et al., 2014;Swan et al., 2013). This takes two disparate forms (Giovannoni et al., 2014). Species that grow inside eukaryote cells or otherwise in very intimate association often have come to rely on their associate to provide metabolic products and the corresponding pathways are no longer present in their own genome. Small effective population sizes increase the importance of drift relative to selection (Bobay & Ochman, 2018), making more genes effectively neutral and prone to be eliminated. In contrast, where effective population sizes are large and resources low, selection can minimize resources required for replication. The pelagic taxa Prochlorococcus and Pelagibacter are exemplars.
| Maximum growth rate
Maximum growth rate is the potential relative rate of increase under favorable growth conditions, μ max in the Monod equation. Like measurements for cell size, it should be thought of as a bioassay that captures differences across species, not as a typical field observation.
The growth temperatures adopted for culture vary across species and growth rates tend to be faster at higher temperatures. Here, we use a temperature-adjusted maximum growth rate.
Also of interest, and investigated in appendices, is ribosomal RNA operon copy number (RRN). This is a contributor to maximum growth rate and is quite widely used as a proxy for it (Nemergut et al., 2016;Niederdorfer et al., 2017;Valdivia-Anistro et al., 2016). However, reported correlations between RRN and maximum growth correspond to only moderate r 2 values in the range .15-.35 (Nielsen et al., 2021;Vieira-Silva & Rocha, 2010). Both maximum growth rate and RRN are expected to be most strongly under selection in lifestyles where resources become episodically available and there is a race to convert them into population. For example, Li et al. (2019) showed that RRN was not correlated with growth rates in soil, but became correlated with growth rates following glucose addition.
Larger RRN allows species to build up ribosome numbers faster and perhaps to maintain larger numbers. However, the more ribosomes produced or maintained, the less protein is available for metabolic machinery that would use substrate more completely (Flamholz et al., 2013;Molenaar et al., 2009;Polz & Cordero, 2016;Roller et al., 2016). Accordingly, high RRN is associated with a rate-yield trade-off, whereby faster-multiplying populations are less efficient in converting substrate into cell material (Polz & Cordero, 2016). The rate-yield trade-off occurs also as plastic response, with gene expression shifting to economize on possible downstream mechanisms of energy use. In summary, RRN and potential rate of increase are correlated, but not identical.
Overall, enough is known to feel confident that cell size, genome size, maximum growth rate and RRN are each an important influence on the ecology of bacterial and archaeal species.
| Traits in relation to the oligotrophy-copiotrophy spectrum
A strategy spectrum widely regarded as important in microbial ecology runs from oligotrophy, coping with low resource supply, to copiotrophy, the capacity to take advantage of rich resource supply (Fenchel et al., 2012;Fierer et al., 2007;Madsen, 2008). This spectrum is expected both on a within-habitat and a between-habitat basis. Between habitats, some environments such as deep aquifers and the pelagic waters of central gyres clearly offer much lower levels of resource supply than (say) wastewater treatment plants.
Within habitats, opportunity for many heterotrophic bacteria and archaea arises in the form of successions initiated by an injection of substrate, via (say) death of a zooplankter or production of a fecal pellet. Initial occupancy of such a resource is expected to favor copiotrophs that capture a large proportion by rapid multiplication. As resource concentrations become depleted, the competitive balance is expected to shift to oligotrophic taxa that can sustain growth from lower substrate concentrations.
The strongest expectation is that oligotrophs will have slower maximum growth rates than copiotrophs and that these will be associated with higher yields and lower RRN. It has also been quite widely argued that oligotrophy should be characterized by smaller cell sizes (Giovannoni et al., 2014;Lauro et al., 2009;Lever et al., 2015;Poindexter, 1981) and smaller genome sizes (Fierer, 2017;Giovannoni et al., 2014), although Poindexter (1981) reasoned that oligotrophs needed to extract all possible energy from substrate, which would often require them to have multiple pathways and to be aerobic.
Some have sought to apply the competitor-stress tolerator-ruderal (CSR) strategy triangle from plant ecology to microbes, with the S dimension of this scheme corresponding to oligotrophy (Fierer, 2017;Krause et al., 2014). These treatments similarly suggest small cell size and small genomes may tend to be associated with oligotrophy.
So then, if these expectations for oligotrophy are correct, and if also the oligotroph to copiotroph spectrum is a substantial influence on variation across bacterial and archaeal species, we would expect to find correlation across species among small cell size, small genome size, slow maximum growth rate, and low RRN. At the other end of the spectrum of possibilities, these traits might vary more or less independently. This would mean that they operated separately as influences on ecological strategy, and all combinations have been able to emerge during the course of prokaryote evolution.
We note that DeLong et al. (2010) and Kempes et al. (2012, 2016) have argued that maximum growth rate, genome size, and cell size are observed to be positively correlated across species. Their data are compared with ours in Appendix B. Briefly, the differences in conclusions trace mainly to which species are included and how many.
| METHODS
The species-by-traits dataset used here is produced by a scripted workflow, described in depth by Madin et al. (2020), that reproducibly merges 26 existing datasets. Most records in the datasets are at the level of genotypes or 16S rRNA phylotypes. The workflow (a) prepares datasets to be merged; (b) combines datasets and condenses equivalent traits into columns; and (c) condenses rows into species based on the GTDB taxonomy (Parks et al., 2018) (https://gtdb.ecogenomic.org). This taxonomy applies the conventional criterion of average nucleotide identity ≥96.5% for grouping entities into species.
Where there are multiple records for a species, these are condensed down to a single row. The records are typically averaged (for quantitative traits) or a majority rule is applied (for categorical traits). The rules are specified in more detail below for selected traits and in Madin et al. (2020). During this process, standard deviations have been calculated and outliers identified. A substantial number of records have been corrected, or sometimes removed as not credible. A table of these corrections is implemented by the code. The number of records for individual species ranges from >10,000 for Staphylococcus aureus down to 1 for many species. Among the traits considered here, maximum growth rate has the least coverage at 618 species, but this is still an advance over the 214 species in previous compilations (Vieira-Silva & Rocha, 2010).
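The condensation rule described here, averaging quantitative traits and applying a majority rule to categorical ones within each species, can be sketched in a few lines of pandas. The column names and records below are placeholders for illustration, not the actual traits or code of the published workflow.

```python
import pandas as pd

# Placeholder records: multiple strain-level rows per species.
records = pd.DataFrame({
    "species":      ["A", "A", "A", "B", "B"],
    "genome_mb":    [4.1, 4.3, 4.2, 1.3, 1.4],
    "cell_diam_um": [0.9, 1.1, 1.0, 0.3, 0.4],
    "metabolism":   ["aerobe", "aerobe", "anaerobe", "aerobe", "aerobe"],
})

def majority(series):
    """Most frequent value (ties resolved arbitrarily by taking the first mode)."""
    return series.mode().iloc[0]

species_level = records.groupby("species").agg(
    genome_mb=("genome_mb", "mean"),        # quantitative traits: average
    cell_diam_um=("cell_diam_um", "mean"),
    metabolism=("metabolism", majority),    # categorical traits: majority rule
)
print(species_level)
```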
Our aim was to develop coverage of traits and their correlations as widely as possible across bacteria and archaea. We have condensed to species level as a working compromise, intended to capture ecologically meaningful variation without letting the dataset be unduly dominated by a few species with thousands of records each (e.g., Staphylococcus aureus, Salmonella enterica, Streptococcus pneumoniae). Because our focus has been on phenotypic traits such as cell diameter and potential rate of increase, the data come largely from species that have been brought into culture. These may tend to have larger genomes and faster potential growth rates and more often to be aerobic, compared with the many uncultured species (Fierer, 2017;Giovannoni et al., 2014;Nayfach & Pollard, 2015;Solden et al., 2016). However, the species included here do span a full range of possibilities, including extreme oligotrophy, very small genome sizes, and very slow potential growth rates.
For purposes of the main text, we have excluded species that live inside the cells of eukaryotes, and also mycoplasmas as a group.
These are well known to have strongly reduced genomes for reasons not connected to oligotrophy, and their maximum growth rates must be conditioned by relations with their host as well as by their uptake and conversion of resources. There were 35 such species in our dataset with both genome size and cell diameter, and 27 such species with both genome size and maximum growth rate; they are included in the appendix versions of the figures (Figures A2-A4).
We have built a list of species (Table A1) identified in the literature as definite oligotrophs or definite copiotrophs, in order to be able to position these in the trait-space figures. To avoid circularity, we have not applied criteria of our own to the question whether they are oligotrophs or copiotrophs, but have adopted the opinions of the authors of the papers.
Because maximum growth rates tend to be faster for species cultured at higher growth temperatures, we have used here temperature-adjusted maximum growth rates, which are residuals from the regression fit log10(max growth) = 0.0105 × (growth temperature) − 1.2003, r² = .11. In other words, these are deviations above or below the expected mean maximum growth at the species' growth temperature, in log10 units. The basis for adopting this particular temperature adjustment is explained further in Appendix C.
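In code, the adjustment is nothing more than subtracting the fitted expectation at each species' growth temperature. The minimal sketch below uses the coefficients quoted above; the example rate of 0.5 per hour and the temperatures are made up, and the units are assumed to match those used for the fit.

```python
import math

# Coefficients of the fit quoted in the text:
#   log10(max growth) = 0.0105 * growth_temp_C - 1.2003
SLOPE, INTERCEPT = 0.0105, -1.2003

def temp_adjusted_log_growth(max_growth, growth_temp_c):
    """Residual of log10 max growth rate above/below the mean at that temperature."""
    expected = SLOPE * growth_temp_c + INTERCEPT
    return math.log10(max_growth) - expected

# Hypothetical species: the same observed rate measured at different temperatures.
for temp in (20, 37, 70):
    print(f"{temp} C: residual = {temp_adjusted_log_growth(0.5, temp):+.2f} log10 units")
```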
The data reported here are survey or correlative. As is well known, correlation, unlike manipulative experiments, cannot prove causation, because of the likelihood of cross-correlation with other variables, including those unmeasured and unconsidered. Accordingly, the statistics presented should be interpreted as quantifying variation and correlation across species, rather than as significance tests of hypotheses about causation. For the major correlations, we provide also versions partialled for phylogeny, using phylogenetic generalized least squares (PGLS) via phylolm (Tung Ho & Ané, 2014). Phylolm v2.6.1 was installed from https://github.com/lamho86/phylolm. The phylogenetic tree adopted corresponded to GTDB taxonomy with seven levels (superkingdom, phylum, class, order, family, genus, and species), star phylogeny at each node, and unit branch lengths. GTDB taxonomy was adopted because it is monophyletic, so far as can be determined from the 120 protein-coding genes used, and because it places taxonomic ranks at a consistent relative distance from the tree root. Partialling for phylogeny via PGLS has the effect of measuring correlation of trait divergences averaged across the ensemble of nodes. Compared to correlation across present-day species, it downweights differences between major clades.
| RESULTS
Across culturable species where records are available, there was little to no correlation (2% of variation or less) among temperatureadjusted maximum growth rate, cell radial diameter, and genome size (r 2 values in Table 1, Figure 1a-c). The same was true of correlations partialled for phylogeny (Table A2).
Although there was little overall correlation between maximum growth rate and cell radial diameter, there was some evidence for a particular nonlinearity, with the fastest growth rates tending to occur in the midrange of cell diameters (Figure 2). If indeed lower and upper limits to cell size coincide with disadvantage, at the smalldiameter end from increasing relative allocation to cell envelope, and at the large-diameter end from decreasing diffusive uptake per cell volume, it would make sense that very fast growth rates were only achievable in the midrange of sizes. Note, however, that slow maximum growth rates were also common in the midrange of cell sizes.
A more complete search for interactions or nonlinearities is described in Table 2. The most substantial contributions to R 2 were for a nonlinear response to cell diameter (model 4 in Table 2, ca. 6%) and for habitat (model 6, ca. 10%). The best model overall by AIC (model 7) simply had these two effects additive, and R 2 = 0.167. This is the model fitted in Figure 2. Providing for interaction between the response to diameter and habitat (model 8) and for interactions of all these with genome size (model 9) did not increase R 2 commensurate with the df invoked, and AIC deteriorated.
Other points of interest in Figure 1, besides the absence of substantial correlation across species, are as follows. First, correlation was absent also within major habitat types (color scheme in Figure 1, and the cell size-genome size graph further separated into habitats in Figure 3). There was no indication of oligotrophy-related correlations within particular habitats such as marine waters, with these then being obviated by differences between different major habitats. Second, certain species are indicated that have been explicitly identified in the literature as either oligotrophs (triangle symbols) or copiotrophs (square symbols) (listed in Table A1). These were rather clearly separated on the dimension of maximum growth rate, but not on the dimensions of genome size or cell radial diameter.
Third, species from thermal environments tended to smaller genome sizes (Figure 1a,b), as observed previously (Lear et al., 2017;Sabath et al., 2013;Sauer & Wang, 2019;Sorensen et al., 2019). Fourth, the density contours in Figures 1 and 3 were more or less circular. This indicates little interaction between the two traits. The corners of the trait space are not unachievable, but are thinly occupied simply because of low incidence in each dimension.
The independent variation among maximum growth rate, genome size, and cell diameter was not much affected by including species that make a living within eukaryote cells (Figures A2-A4; discussed further in Appendix B). Archaea tended to smaller genomes than bacteria, but correlation was equally absent within each domain ( Figures A5-A7).
Ribosomal RNA operon copy number RRN was indeed correlated with temperature-adjusted maximum growth rate (Table 1, Figure A8), as expected and as previously shown from smaller datasets without temperature adjustment (Klappenbach et al., 2000; Vieira-Silva & Rocha, 2010). RRN was also correlated with genome size (Table 1, Figure A9), with large RRN not being found in association with small genome sizes. Species identified in the literature as copiotrophs (squares in Figures A8 and A9) rather consistently had higher RRN than identified oligotrophs (triangles in the figures), as they did faster temperature-adjusted maximum growth rates. RRN is also a quantity that is available across more species than maximum growth rates. However, RRN, like maximum growth rate, was hardly correlated with cell radial diameter (Figure A10).
TABLE 1 Correlation r² among the four traits considered here, all log-scaled. Number of species for each trait pair given next to the correlation.
FIGURE 1 (a) Temperature-adjusted maximum growth rate in relation to genome size across species. (b) Temperature-adjusted maximum growth rate in relation to cell radial diameter across species. (c) Genome size in relation to cell radial diameter. Dashed lines indicate density contours. In the habitat classification (color scheme), fresh and marine waters include both water and sediment. Host-associated species are attributed to endotherm or to ectotherm hosts if they multiply within the host body or gut, or to "other" if they grow on the host's external surface, are associated with plants, algae, or fungi, or have no habitat attributed. Species identified in the literature (Table A1) as copiotrophs or oligotrophs are denoted by squares and triangles, respectively.
| Individual relationships
Although discussion of genome reduction often assumes that shedding genes will be an advantage unless they confer some definite benefit, it has been known for some time that maximum growth rate is not faster in species with smaller genomes (Vieira-Silva & Rocha, 2010). Figure 1 confirms this result with expanded coverage. This is possible because fast-doubling species commonly operate more than one set of bidirectional replication forks at the same time (Vieira-Silva & Rocha, 2010). This in turn has consequences for genome architecture. Genes closer to the origin are expressed in more copies at any given time, and it appears that genes are rearranged so that these distance-dosage effects are beneficial, particularly for genes coding for rRNA, RNA polymerase, ribosomal protein, tRNA, and ubi-tRNA. There are advantages to high expression of these genes during rapid growth.
The absence of correlation between genome size and cell radial diameter implies either that there is little consistent relationship between the mass of cell machinery and the radial diameter (in other words larger-diameter species tend to have lower-density cytoplasm), or that there is little relationship between the genome size and the mass of cell machinery, or both of those things. Rod-shaped bacteria tended to have slightly larger genomes and slightly smaller radial diameters than spheroidal ( Figure A11), but with little correlation evident within either shape.
| Overall conclusions
The principal result emerging has been that genome size and cell radial diameter vary across species substantially independently from each other and from temperature-adjusted maximum growth rate and RRN.
FIGURE 2 Temperature-adjusted maximum growth rate in relation to cell diameter, with polynomial fits separated by habitat. The model (Table 2) accounts for about 17% of variance in log temp-adjusted maximum growth rate in total, with habitat contributing about 11% and nonlinear response to cell diameter about 6%. Coefficients of the model are in Table A3. While the models in Table 2 use only species for which all data are available so that AIC is comparable, results using all available species for each model are given in Table A4.
FIGURE 3 Genome size in relation to cell radial diameter, separated by habitat type. Symbols as in Figure 1.
There have been three previous reports of positive correlation across species between genome size and cell size (DeLong et al., 2010; Shuter et al., 1983; West & Brown, 2005). DeLong et al. also reported positive correlation between maximum growth rate and cell size. Differences between their results and ours arise partly from their including intracellular parasites (which contributed strongly to the small-cell, small-genome, slow-growth end of their patterns) and partly from their species coverage being 10- to 20-fold smaller than ours, details in Appendix B. We believe our results are more representative for this reason. In further support, Guittar et al. (2019) compiled a dataset from the literature emphasizing (but not confined to) species found in infant microbiomes. Across the 2,223 records in that dataset, correlation between genome size and cell diameter was weak at r² = .0031 (Guittar pers comm).
We consider three possible interpretations for the apparently independent variation found among genome size, cell size, and maximum growth rate:
a. Existing measurements are too noisy
b. If not-yet-cultured species could be included then correlation would be found
c. These three traits are not the decisive ones for copiotrophy and oligotrophy; the oligotrophy to copiotrophy spectrum is not a major influence on variation across species in these traits
First, how likely is it that the measurements are so noisy that no correlation can be expected? Genome size is quite tightly characterized relative to the differences across species. For species with 10 or more records, median coefficient of variation was 3% (Nielsen et al., 2021). For maximum growth rate, fewer species are covered, the numbers are known less precisely, and variation across strains within species is hardly ever known. There is uncertainty in the actual measurement, and then also there is uncertainty as to how closely culture conditions have approached the best possible.
Nevertheless, reported maximum growth rates range across more than three orders of magnitude, from less than 0.01 to more than 1 per hour. Further, maximum growth rate does increase with RRN in the genome ( Figure A8; r 2 = .30). This correlation is well established, and indeed RRN has quite often been used as a surrogate or indicator for potential rate of increase (Nemergut et al., 2016;Niederdorfer et al., 2017;Roller et al., 2016;Stoddard et al., 2015;Vieira-Silva & Rocha, 2010). Given the wide range and this established correlation, we believe the estimates for maximum growth rate do contain meaningful signal.
Cell radial diameter measurements are typically given as either a single number or a range, without specification as to what the range represents. We believe the range usually represents a sampling of individual cells within a culture, more so than different stages of the cell division cycle, different provenances within a species, or different growth conditions. We have not thought it possible to estimate any form of within-species variation from this. Plasticity within the same genotype in response to growth versus starvation conditions is considerable (Lever et al., 2015), but measurements will nearly all have been taken under favorable growth conditions and standardized to that extent.
In summary, while there is certainly noise in the data, we do not believe it is so extreme as to obviate correlations that are there in reality.
A second possible interpretation for the apparently independent variation found among genome size, cell size, and maximum growth rate is that if not-yet-cultured species could be included there would be correlation. It certainly seems true that not-yet cultured species tend toward smaller genomes (e.g., Nayfach & Pollard, 2015), and it is possible that once brought into culture, they will be found also to have smaller cells and slower potential rates of increase. While such a result would be interesting, it would not really detract from the results in Figures 1 and 2. The data available do include species that the literature regards as strong oligotrophs as well as copiotrophs, as indicated in the figures, and most ideas about the nature of oligotrophy have been developed from species brought into culture.
A third possible interpretation is that these traits are not actually among the principal traits contributing to oligotrophy versus copiotrophy. For example, Lauro et al. (2009) found that no single trait was a clear identifier of oligotrophy, and a complex multi-trait approach was needed. We think this interpretation is the likeliest with regard to cell diameter and genome size. For the compilation we have made of species identified in the literature as oligotrophs or copiotrophs, maximum growth rate and RRN were indeed rather strong predictors. However, cell diameter and genome size were not, and were also substantially uncorrelated with maximum growth rate.
These results suggest future research can usefully focus on developing stronger ecological interpretation of cell radial diameter and of genome size.
CONFLICT OF INTEREST
None of the authors have any conflict of interest.
DATA AVAILABILITY STATEMENT
Data analyzed here are drawn largely from a data paper (Madin et al., 2020) that merges multiple sources. The version used here is the product condensed to one row per species, species being defined by the Genome Taxonomy Database (GTDB).
SUPPLEMENTARY FIGURES AND TABLES
FIGURE A1
Relationship across species between number of different coding genes and total genome size. Ordinary least squares regression has an r² of .976 across 3,300 species. In the habitat classification (color scheme), fresh and marine waters include both water and sediment. Intracellular species are those making a living inside eukaryote cells. Host-associated species are attributed to endotherm or to ectotherm hosts if they multiply within the host body or gut, or to "other" if they grow on the host's external surface or are associated with plants, algae, or fungi. Species without habitat information are also attributed to "other." Species identified in the literature as copiotrophs or oligotrophs (Table A1) are denoted by squares and triangles, respectively.
FIGURE A2 Temperature-adjusted maximum growth rate in relation to genome size across species, including intracellular species. 646 species, r² = .0066. Dotted lines are density contours. Habitat and copiotrophy-oligotrophy coding as in Figure A1.
FIGURE A3 Temperature-adjusted maximum growth rate in relation to cell radial diameter across species, including intracellular species. 529 species, r² = .00044. Habitat and copiotrophy-oligotrophy coding as in Figure A1.
FIGURE A4 Genome size in relation to cell radial diameter across species, including intracellular species. 3,502 species, r² = .019. Habitat and copiotrophy-oligotrophy coding as in Figure A1.
FIGURE A5 Temperature-adjusted maximum growth rate in relation to genome size across species, showing archaea vs bacteria and excluding intracellular species. Species identified in the literature (Table A1) as copiotrophs or oligotrophs are denoted by squares and triangles, respectively.
FIGURE A6
Temperature-adjusted maximum growth rate in relation to mean cell radial diameter across species, distinguishing archaea from bacteria.
FIGURE A7 Genome size in relation to mean cell radial diameter across species, distinguishing archaea from bacteria.
FIGURE A8 Relationship across species between ribosomal RNA operon copy number and temperature-adjusted maximum growth rate; r² = .30 across 389 species. rRNA operon counts have been averaged across multiple records within species, where available, hence noninteger counts sometimes appear. Host-associated species were attributed to endotherm or ectotherm hosts, or to "other" if they came from an external animal surface, were associated with plants, algae, or fungi, or had no habitat attributed. Species identified in the literature (Table A1) as copiotrophs or oligotrophs are denoted by squares and triangles, respectively.
FIGURE A9 Genome size in relation to rRNA operon copy number; r² = .14 across 2,727 species, or if those with growth temperature >50°C are excluded, r² = .048 across 1,666 species. In the habitat classification (color scheme), fresh and marine waters include both water and sediment. Host-associated species are attributed to endotherm or to ectotherm hosts if they multiply within the host body or gut, or to "other" if they grow on the host's external surface, are associated with plants, algae, or fungi, or have no habitat attributed. Species identified in the literature (Table A1) as copiotrophs or oligotrophs are denoted by squares and triangles, respectively.
FIGURE A10 Cell radial diameter in relation to rRNA operon copy number; r² = .023 across 926 species. In the habitat classification (color scheme), fresh and marine waters include both water and sediment. Host-associated species are attributed to endotherm or to ectotherm hosts if they multiply within the host body or gut, or to "other" if they grow on the host's external surface, are associated with plants, algae, or fungi, or are not attributed to any habitat. Species identified in the literature (Table A1) as copiotrophs or oligotrophs are denoted by squares and triangles, respectively.
FIGURE A11 Mean cell radial diameter in relation to genome size, separating rod-shaped bacilli from near-spheroidal cocci and coccobacilli. Species identified in the literature (Table A1) as copiotrophs and oligotrophs are denoted by squares and triangles, respectively.
APPENDIX B PREVIOUS REPORTS OF POSITIVE CORRELATION AMONG MAXIMUM GROWTH RATE, CELL SIZE, AND GENOME SIZE
Kempes et al. (2016), with precursors in DeLong et al. (2010) and Kempes et al. (2012), developed arguments about upper and lower limits of cell size in bacteria and archaea. In the course of this, they reported positive scaling of genome size and of maximum growth rate with cell volume.
We have come to the view that the difference between their results and ours traces mainly to differences in the sets of species included. DeLong et al. (2010) reported a positive relationship between maximum growth rate and cell volume (their Table S2). Although this relationship was significant, it had an r² of only .16 across 35 species. Further, the significance arose from inclusion of four mycoplasma species, three of which have particularly small cell sizes and slow maximum growth rates. With mycoplasmas excluded, the relationship was not significant (r² = .02, df = 31, p = .40). Figure B1 shows the relationship to volume across 383 species from our data, and the correlation is negligible either excluding (r² = .00046) or including (r² = .00043) species that make a living inside eukaryote cells. These intracellular species contribute only 7/383 (<2%) in our data, versus 4/35 (>10%) in DeLong et al. (2010). Kempes et al. (2016) reported a log-log scaling slope of 0.21 between genome size and cell volume across 145 taxa. Their data were compiled from three previous reports (DeLong et al., 2010; Shuter et al., 1983; West & Brown, 2005). Kempes et al.'s Table S1 provides the data but not the species names, and we have not been able to elicit them otherwise, except for Shuter et al., who published names along with data.
Consequently, we have only been able to investigate the consequences of including particular species where our coverage overlaps with Shuter et al. (1983). Across Shuter's 49 records, r² was .73. One mycoplasma species with notably small genome and cell size contributed to this, but was not solely responsible. Across our dataset's records for 12 species that also occurred in Shuter et al., the relationship was similarly positive with r² = .30. Across our dataset as a whole (Figure B2), the relationship was notionally positive but very much weaker (slope 0.044 ± 0.006 CI compared with 0.22 ± 0.019 for Shuter et al.), even including the 14 intracellular species. Similar to the relationship between maximum growth rate and cell size, this indicates the difference between their results and ours lies mainly in coverage of species, rather than in different estimates for the same species. For our dataset (Figure B2), it can be seen that intracellular species do tend to lie toward the lower left, but they are not sufficient in number to create a strong positive relationship.
TABLE A3 Coefficients ± SE for the model for log10 maximum growth rate fitted in Figure 2, treating each species as an independent item of evidence. Coefficients for each habitat are relative to fresh water, which is the intercept.
TABLE A4 As Table 2, but using all available species for each model; hence, for the simpler models, they have more degrees of freedom than in Table 2.
In summary, our opinion is that our results indicating little to no correlation between cell size and maximum growth rate or genome size are more representative than the positive relationships reported by Kempes et al. (2016). For maximum growth rate, their positive relationship depends entirely on including mycoplasmas. For genome size, the very weak correlation with cell size reported in our results is based on 3,466 species for cell diameter or 2,628 species for cell volume compared with 145 observations in Kempes et al. (2016).
FIGURE B1 Maximum growth rate in relation to cell volume across 390 species, including intracellular, from our data. R² = .00043, F-statistic 0.1659 on 1 and 382 df, p-value .684.
FIGURE B2 Genome size in relation to cell volume across 2,628 species, including intracellular, from our data. R² = .01989, F-statistic 53.32 on 1 and 2,627 df. Leaving out the 14 intracellular species, R² = .0119, F-statistic 31.69 on 1 and 2,613 df.
APPENDIX C
TEMPERATURE ADJUSTMENTS TO MAXIMUM GROWTH RATE
Maximum growth rates are influenced by the growth temperature where they were measured. An ideal adjustment of maximum growth for temperature would express them relative to the fastest growth rates that could potentially be achieved by species that had over evolutionary time fully optimized their physiology in relation to the temperature. However, there is no consensus how this could be done.
The simplest adjustments apply a Q10 increase factor (usually 2 or 1.5) per 10°C increase in temperature to metabolic rates or growth rates.
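As a concrete illustration of this kind of adjustment, the sketch below rescales a measured rate to a reference temperature using a constant Q10 per 10°C. The function name, the choice of 37°C as reference, and the example values are ours, purely for illustration; nothing here is taken from a specific dataset.

```python
def q10_adjust(rate, temp_c, ref_temp_c=37.0, q10=2.0):
    """Rescale a growth rate measured at temp_c to a reference temperature,
    assuming a constant Q10 factor per 10 degC (an assumption, not a law)."""
    return rate * q10 ** ((ref_temp_c - temp_c) / 10.0)

# Example: a rate of 2.0 per hour measured at 25 degC, expressed at 37 degC
# with Q10 = 2, becomes 2 * 2**1.2, i.e. roughly 4.6 per hour.
print(q10_adjust(2.0, 25.0))
```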
When a Q10 of 2 is applied to adjust maximum growth rates to a standard growth temperature of 37°C, many thermophiles have decidedly slow growth rates. We cannot tell whether this is biologically realistic (for example, the protein adjustments needed for high temperatures might prevent rapid metabolism) or whether a Q10 of 2 is simply too steep. Thermophilic enzymes are in general stiffened to counteract the increased molecular motion associated with higher temperatures. When operating at room temperature, they typically have either lower or similar activities compared to their mesophilic homologs (Chang et al., 2020).
It is well established that Q10 itself changes with temperature. Considering soil respiration and decomposition rates, meta-analysis showed Q10 around 4-6 at 0°C declining to 2 at around 25°C and continuing around 2 out to 50°C (Hamdi et al., 2013). A review of theoretical equations for temperature response (Noll et al., 2020) considered 19 models that are variations on Arrhenius (linear response of ln growth or metabolic rate to reciprocal of absolute temperature) from 1946 up to the present. Several of these equations have growth rates declining above some optimal temperature. If these were applied, the effect would be for species growing at 70-100°C to have their growth rates increased rather than decreased when adjusted down to 37°C. Different mechanisms are invoked by different models, but enzyme adaptation is not among them.
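One alternative to a fixed-Q10 rescaling, and the approach adopted below, is to work with residuals from a regression of log10 maximum growth rate on growth temperature. A minimal sketch of that calculation follows; the toy data and variable names are ours and are only meant to show the mechanics.

```python
import numpy as np

# Toy data: growth temperature (degC) and log10 maximum growth rate per species.
temp = np.array([5.0, 20.0, 25.0, 37.0, 55.0, 80.0])
log10_rate = np.array([-1.2, -0.4, -0.1, 0.3, 0.1, -0.2])

# Ordinary least squares fit of log10 rate on temperature.
slope, intercept = np.polyfit(temp, log10_rate, 1)

# Residuals: how much faster (positive) or slower (negative) a species grows
# than the mean expectation at its own growth temperature.
residuals = log10_rate - (intercept + slope * temp)
print(np.round(residuals, 3))
```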
In the absence of a consensus method for temperature-adjusting maximum growth rates, we have adopted residuals after regression of log10 maximum growth rate on growth temperature. These residuals measure how much faster or slower a species grows compared with the mean at that growth temperature. The regression on temperature in °C did not have a noticeably inferior fit compared with an Arrhenius regression on the reciprocal of absolute temperature (r² = .110 vs. .114). | 2021-05-05T00:09:11.184Z | 2021-03-16T00:00:00.000 | {
"year": 2021,
"sha1": "6dab05eef7288c57c8d234564b5b69b08c7ab8e9",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.7290",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19db75eac15357cffd143d58296c1af9c814141e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
40135683 | pes2o/s2orc | v3-fos-license | Multiple Rectal Neuroendocrine Tumors : Report of Five Cases
Carcinoids are slow growing neuroendocrine tumors (NET) originating in the enterochromaffin cells of the gastrointestinal tract. In previous studies, rectal NET comprised only about 1% of all anorectal neoplasms; however, the incidence of rectal NET has shown a recent increase. Typically, rectal NET presents as a single subepithelial nodule, and multicentricity of rectal NETs is rare, with reported incidence of 2-4.5%. Due to the rarity of multiple rectal NETs, there is no consensus or guidelines for treatment of multiple rectal NETs. However, NETs of the rectum that are less than 10 mm in diameter and do not infiltrate the muscularis propria, without distant metastasis, can be removed by endoscopy, as with solitary rectal NET. We encountered five cases of multiple rectal NETs which were treated successfully by endoscopy. (Korean J Gastroenterol 2014;64:103-109)
INTRODUCTION
Carcinoid tumors are a group of well-differentiated neuroendocrine tumors (NETs) within the neuroendocrine neoplasms, according to the World Health Organization (WHO) classification, 1 and approximately 64% of cases arise in the gastrointestinal tract. The appropriate modern term for carcinoid is neuroendocrine neoplasm, NET, or neuroendocrine carcinoma (NEC).
The small intestine is the most common location (28.7%), followed by the appendix (18.9%), rectum (12.6%), and colon (6%). As reported by Teleky et al. 2 in 1992, only 1% of all anorectal neoplasms are rectal NETs. However, recently, due to an increase in the number of screening colonoscopies, the incidence of rectal NETs has been increasing in Korea.
In the past, NETs were detected unexpectedly during autopsy or surgery for other causes. Currently, most rectal NETs are found incidentally during colonoscopy. In general, rectal NETs occur as a single subepithelial lesion, and multicentricity is rare, occurring in 2% to 4% of all cases. 3
Therapeutic options for rectal NETs depend on size, depth of invasion, or presence of distant metastasis. 4,5 Rectal NETs smaller than 10 mm in diameter without distant metastasis can be completely removed by endoscopic resection.
However, due to the rarity of multicentricity, no therapeutic consensus for multiple rectal NETs has been established.
Nevertheless, as in the following cases, if each NET is less than 10 mm and is a Tis or T1 lesion without distant metastasis, multiple rectal NETs might be treated by endoscopy as a solitary lesion.
Case 1
A 52-year-old male who visited our institute for health promotion underwent colonoscopy, which showed two yellowish subepithelial nodules in the rectum; therefore, biopsies were performed.Grossly, the subepithelial lesions were similar to rectal NETs; however, two lesions were diagnosed pathologically as hyperplastic polyp and chronic non-specific inflammation, respectively.In gross finding by endoscopy and in our experiences, those lesions were strongly suspected as NETs.He was admitted to our hospital and no symptoms or signs of carcinoid syndrome were observed.
The patient underwent colonoscopy again, which showed two subepithelial lesions measuring 4 mm in size and small nodular mucosa adjacent to the two subepithelial lesions (Fig. 1).The two subepithelial lesions were removed by endo-scopic submucosal resection using a ligation device (ESMR-L) and the nodular mucosa was removed by forcep biopsy (Fig. 1).In the pathological report, all of them were confirmed as well differentiated NETs (grade I according to WHO 2010 classification) (Fig. 2).
Tumor markers were within normal limits, level of CEA was 1.7 ng/mL, and CA 19-9 was 7.76 U/mL.CT of the abdomen and pelvis did not show abnormality.PET scan showed no evidence of distant metastasis.After six months, he underwent sigmoidoscopy and no abnormality was found.In colonoscopic finding, there was no sign of recurrence after 12 months, and he underwent sigmoidoscopy again after 18 months.Due to the possibility of incomplete resection by forcep biopsy, he was examined by EUS, which showed no residual tumor.
Case 2
A 32-year-old male was transferred to our hospital for treatment of rectal NETs.He had undergone examination of his colon and rectum by colonoscopy for health screening at a local medical center.He had no history of specific disease, alcohol intake, or smoking, and no characteristic familial history.He also had no symptoms.Peripheral blood test performed at mm, and 7 mm in diameter (Fig. 3).All lesions were removed by ESMR-L (Fig. 3).Although bleeding related to the endoscopic procedure occurred, it was managed successfully with endoscopic hemoclipping (Fig. 4).The triple subepithelial lesions were confirmed as WHO grade I NETs by a pathologist and resection margins were all negative (Fig. 5).
CT of the abdomen and PET scan showed no evidence of distant metastasis.Sigmoidoscopy after six months and 12 months and colonoscopy after 18 months showed no local recurrence.
Case 3
A 65-year-old female was transferred to our institute for an elevated level of serum CEA .She had no specific symptoms or signs.Colonoscopy showed multiple rectal polyps and three yellow-colored elevated mucosa measuring approximately 5 mm, 6 mm, and 7 mm in size, located at 8 cm, 10 cm, and 11 cm from the anal verge.Biopsies were performed for each of the three lesions.The pathologic report showed chronic non-specific inflammation in all specimens.Gross colonoscopic findings were highly suggestive of NETs, therefore, she was admitted, and each lesion was removed by endoscopic mucosal resection.The final pathological report showed that all three lesions were NETs with positive immuno-histochemical staining of chromogranin and synaptophysin and resection margins were all clear.
Serum serotonin and 5-hydroxyindoleacetic acid level were within normal limits and no symptoms or signs of carcinoid syndrome were observed.CT of the abdomen and pelvis did not show any abnormality.She did not visit our institute afterwards and follow-up was lost.
Case 4
A 62-year-old male was examined by colonoscopy for health promotion.Two subepithelial nodules measuring 5 mm in size were detected at the rectum.Regarding past medical history, he had undergone subtotal gastrectomy due to stomach cancer two years ago and he took a medication for diabetes mellitus.Two subepithelial nodules were removed by ESMR-L.In the pathological report, the lesions were diagnosed as WHO grade 1 NETs; resection margins were negative and perilymphatic-vascular or perineural invasion was not observed.Abdominal CT and PET scan showed no evidence of distant metastasis.
Case 5
A 48-year-old female with a family history of colon cancer underwent colonoscopy, which showed two yellowish subepithelial lesions in the rectum and sigmoid colon.Each of the two lesions was removed by ESMR-L and post-ESMR-L bleeding of the rectum was treated with endoscopic hemoclipping.Pathologically, the two lesions were NETs grade 1 of the WHO 2010 classification, and were completely removed.After 12 months and 24 months, follow-up colonoscopy showed no evidence of local recurrence.We provide a schematic summary of the five cases in Table 1.
DISCUSSION
Carcinoid tumors were first named as "karzinoide" by Oberndorfer in 1907. The first clinical feature of a carcinoid tumor of the rectum was described by Saltykow in 1912. The appropriate modern term for carcinoid is neuroendocrine neoplasm. Nowadays, the concept of carcinoid can be divided into two parts, NET and neuroendocrine carcinoma (NEC).
Most NETs are asymptomatic; thus, prediction of an accurate prevalence rate is difficult. However, annual incidence has generally been reported as 1 to 2/100,000/year. 6 In Korea, 64% of NETs arise in the gastrointestinal tract, and the rectum and stomach are the most frequently involved sites. 6,7 Based on past studies, rectal NET comprises only 1% of all anorectal tumors, 80-90% of rectal NETs are diagnosed incidentally by sigmoidoscopy performed for examination of anal diseases, such as hemorrhoid, anal fissure, and anal fistula, or for health screening, and the incidence rate by sigmoidoscopy is 1/2,500 persons. 2 However, in the USA, the frequency of rectal NETs has increased by 800-1,000% in the past 35 years. 8
This increase is probably related to the introduction of colonoscopic screening, which has also resulted in "incidentally" detected neuroendocrine rectal tumors.As in the USA, in the era of screening colonoscopy, rectal NETs are becoming more common in Korea.
Nevertheless, multicentricity of rectal NET is rare and has been reported as only 2 to 4%. 3 Saha et al. 9 reported that up to 10% of rectal NETs show multicentricity and that three to 10 lesions can occur in the same area. In Japan, two cases of multiple NETs of the colon and rectum, containing numerous carcinoid micronests with lymph node metastasis, have been reported. 10 Although such Japanese cases are extremely rare, they are not NET but NEC. Several cases of multiple NETs in patients with neurofibromatosis or ganglioneuromatosis have also been reported. 11 Two cases of double rectal NET in a patient without underlying disease have been reported in the Korean literature. 12,13 Multicentricity is a poor prognostic factor in small intestinal carcinoids; however, its prognostic effect in rectal NET is not known. 14
In the case of a single rectal NET, metastasis is found in 0-3% when the size of the rectal NET is less than 10 mm, in 10% when 10-19 mm, and in 80-100% when greater than 20 mm. 5 Naunheim et al. 5 reported that invasion of the muscularis propria occurs in 20% of tumors smaller than 20 mm and in 94% of tumors greater than 20 mm, and that once the muscular layer is invaded, the possibility of malignancy and distant metastasis increases.
Therefore, the method used for treatment of rectal NET differs according to size and depth of invasion, and, typically, if the tumor size is less than 10 mm and does not infiltrate the muscularis propria, endoscopic resection is recommended first.Even small rectal NETs can primarily invade the submucosa, therefore, a special technique for reliable resection of deep regions of the submucosal layer is needed, and ESMR-L is a good method for complete resection of rectal NET.This procedure is known to be technically simple, minimally invasive, and relatively safe.In addition, the treatment efficacy of ESMR-L was far better in margin negativity and local recurrence than that of conventional polypectomy.
In four cases, multiple NET lesions were located in the rectum, and in the fifth case, the lesions were located in the sigmoid colon and rectum. Particularly in the first case, two rectal NET lesions were located very close together, and removal of the second lesion with ESMR-L was difficult due to adhesion caused by ESMR-L of the first lesion. Although the risk of perforation in removal of a sigmoid colon NET is higher than that for rectal lesions, the NET lesion at the sigmoid colon was removed without complication.
All of our five cases were confined to the mucosa and sub-mucosa layer and were less than 10 mm in size, with no vascular or neural invasion and clear resection margins; mitotic count was under 2 and Ki-67 count was below 2% at 10 high power field (HPF), indicating a grade 1 NET.In the first case, follow up rectal EUS was also performed in order to rule out the possibility of residual tumor as resection margin of the biopsy lesion was positive.In the second case, although the resection margin was negative, with a benign, low grade tumor, follow up endoscopy was performed three times with intervals of six months, as in the first case.In the fifth case, follow-up colonoscopy was performed two times with an interval of 12 months due to tumor lesion of the sigmoid colon.
However, due to a small number of NET cases, there are no established principles concerning follow-up periods and modalities for multiple NETs.Merely, in the case of multiple NETs, relatively short term follow-up endoscopy might be needed in order not to miss other residual NET lesions.
Rectal EUS, which can determine size, invasion depth, and metastasis status to adjacent lymph node, and detect separation of a submucosal tumor from muscularis propria, is essential in testing when deciding on a treatment plan or for evaluation of the stability of endoscopic resection. 20However, as distant lymph node or liver metastasis is not identifiable through EUS, abdominal CT or PET should be performed in order to confirm distant metastasis status.As in the first case, EUS can also be a good modality in follow-up testing for verification of completeness of resection or local recurrence of NET removed by forcep biopsy.
In recent years, the incidence rate of rectal NET in Korea has increased beyond our expectation.In addition, as in our cases, the endoscopist might encounter multiple rectal NETs.Thus, when incidental rectal NET is found, the possibility of multicentricity should be considered.There is no established gold standard treatment for multiple rectal NETs, and the efficacy or long-term prognosis of endoscopic resection in patients with multiple rectal NETs is uncertain.
However, according to our results, in cases of multiple rectal NETs, tumors that are 10 mm or less, which do not infiltrate the muscularis propria, can also be successfully treated endoscopically.The treatment policy, long-term prognosis, and methods of follow-up for multiple rectal NETs should also be discussed in the future by accumulating cases like those described in our report.
Fig. 1. Endoscopic views. (A) A small subepithelial nodule at the rectum was observed. (B) The small nodule was removed by biopsy. (C) Two other subepithelial nodules distal to the lesion (A) were observed. (D) Two subepithelial nodules were removed by endoscopic submucosal resection using a ligation device.
Fig. 3. Endoscopic submucosal resection using a ligation device (ESMR-L) technique. (A) Yellowish triple subepithelial lesions at the rectum were noted. (B) Lifting of the lesions by hypertonic saline injection and band ligation is noted. (C) Triple rectal carcinoids were removed by ESMR-L.
In contrast with the consensual treatment options for single rectal NET, due to the rarity of multiple NETs, there are no standard guidelines for their treatment. In addition, the long-term prognosis of endoscopic resection for multiple rectal NETs is still uncertain. However, treatment can be administered based on the size and depth of invasion of each rectal NET. All of our multiple NETs were removed successfully by endoscopy, and the short-term prognosis was good, without local recurrence. However, because rectal NET is very slow growing, assessment of the long-term efficacy of endoscopic resection or of long-term prognosis is difficult.
Table 1. Summary of Our Five Cases | 2018-04-03T04:42:48.433Z | 2014-08-01T00:00:00.000 | {
"year": 2014,
"sha1": "92e69dd9617c8198a2f723a02579173b2e272b92",
"oa_license": "CCBYNC",
"oa_url": "https://synapse.koreamed.org/upload/SynapseXML/0028kjg/pdf/kjg-64-103.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "92e69dd9617c8198a2f723a02579173b2e272b92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239670419 | pes2o/s2orc | v3-fos-license | Research Article Compressed Sensing for THz FMCW Radar 3D Imaging
A terahertz (THz) frequency-modulated continuous wave (FMCW) imaging radar system is developed for high-resolution 3D imaging recently. Aiming at the problems of long data acquisition periods and large sample sizes for the developed imaging system, an algorithm based on compressed sensing is proposed for THz FMCW radar 3D imaging in this paper. Firstly, the FMCW radar signal model is built, and the conventional range migration algorithm is introduced for THz FMCW radar imaging. Then, compressed sensing is extended for THz FMCW radar 3D imaging, and the Newton smooth L0-norm (NSL0) algorithm is presented for sparse measurement data reconstruction. Both simulation and measurement experiments demonstrate the fea-sibility of reconstructing THz images from measurements even at the sparsity rate of 20%.
Introduction
Terahertz (THz) wave lies between infrared wave and millimeter wave, which is an electromagnetic wave that has not been fully recognized and utilized by human beings. Due to its ability of material penetration and harmless nonionizing radiation to human body, THz technology can be employed to effectively identify stealth and deceptive measures that cannot be distinguished by conventional means in the military fields and to identify concealed objects made of metal or inorganic materials for security check. As THz has a smaller wavelength and a wider bandwidth which will result in a higher resolution, THz imaging has been widely used in nondestructive inspections and medical diagnosis [1,2]. With the rapid development of information countermeasures, antistealth, target search, and tracking, and materials science, THz imaging technology has made a great progress during the past decades.
Motivated by the huge application potential of highresolution THz imaging technology, there has been growing interest in developing 3D imaging radar working at THz range. In 2007, a 220 GHz experimental frequency-modulated continuous wave (FMCW) inverse synthetic aperture radar (SAR) with a bandwidth of 8 GHz is designed to determine high-resolution scattering center distributions of targets [3]. A 240 GHz 3D FMCW imaging radar with a maximum bandwidth of 42 GHz is discussed in [4]. An imaging radar with the operation frequency of 580 GHz which is implemented in an all-solid-state design is developed at Jet Propulsion Laboratory (JPL) [5]. e first THz radar for fast standoff personnel screening with the operation frequency of 675 GHz is also built by JPL. A fast scanning device is designed to enable imaging at a frame rate of 1 Hz [6,7]. An active FMCW imaging system ranging from 514 to 565 GHz (frequency centered at 540 GHz) is studied to image objects with a resolution of millimeter [8].
It can be found that THz 3D imaging is commonly realized with SAR technique, rather than building antenna arrays. is is due to that expensive devices in THz regime will lead to a high-cost imaging system having multiple transceivers. For a THz imaging system with SAR technique, the single transceiver is moved in a grid-like manner to produce an image with the processing approaches. Several SAR imaging algorithms in time domain, frequency domain, and wavenumber domain have been proposed for THz FMCW SAR, respectively. A typical time domain backprojection algorithm is studied to obtain the image with 2D aperture synthesis for the SynView THz 3D imaging system [9]. A back-projection imaging approach has been presented in [10] for data processing of a 300 GHz imaging system. However, imaging algorithms in the time domain have a heavy calculation burden though they are able to process SAR data under a great variety of imaging geometries. A revised range-Doppler algorithm is presented for FMCW SAR imaging by compensating radar migration [11]. A nonlinear frequency-scaling algorithm is proposed by Meta et al. to achieve FMCW SAR focusing by applying Dopplerrange correction [12]. However, the range-Doppler algorithm and the nonlinear frequency-scaling algorithm in frequency domain will result in deviations due to the highorder phase error. So, the range migration algorithm (RMA) in wavenumber domain is preferable for THz FMCW SAR imaging [8,13].
As mentioned above, a single transceiver is moved in a grid-like manner to acquire data for THz FMCW SAR imaging in most cases. And the mechanical scanning parameters should obey the Nyquist-Shannon sampling theorem for the RMA imaging algorithm. However, a smaller wavelength in THz regime requires a smaller scanning step, and it will lead to a larger collection points and a longer data acquirement time.
The THz imaging efficiency is low for the conventional imaging algorithms. To reduce the imaging data acquisition time and to improve imaging efficiency, an algorithm based on compressed sensing is proposed for 3D imaging of THz FMCW SAR in this paper. The proposed algorithm relies on the advantage of compressed sensing that it can reconstruct a signal from sparse data samples. The paper is organized as follows. The developed 220 GHz FMCW SAR imaging system is introduced in Section 2, and the RMA for THz FMCW radar imaging is given in Section 3. Section 4 describes the proposed 3D imaging algorithm based on compressed sensing. The experimental results are given and analyzed in Section 5 to verify the proposed imaging algorithm. The last section is the Conclusion.
System Briefs. As shown in Figure 1, the FMCW THz imaging system developed in this paper consists of a radio frequency transceiver subsystem, a signal acquisition and processing subsystem, and a planar scanning subsystem.
When the THz FMCW imaging system works, an X-band FMCW signal generated through the direct digital synthesizer serves as the driving source of the transmission link which will output a THz FMCW signal. en, the transmitted signal will be reflected by target and then received by the antenna. e received echoed signal is mixed with the reference signal to get the intermediate frequency (IF) signal which will be processed with the IF processing unit. Lastly, the processed IF signal is acquired with the data acquisition unit. e planar scanner controlled by the data acquisition and processing subsystem moves with a "stopgo-stop" manner. e echoed signals at certain points are acquired and stored by the processing software and are processed to generate 3D images after finalization of scanning. e picture of the developed THz FMCW imaging system is presented in Figure 2. And the specification parameters are illustrated in Table 1.
THz FMCW Signal Modeling.
The transmitted signal of the THz FMCW imaging system can be expressed as in equation (1), where f_c denotes the central frequency, T the sweep duration, and k_r the chirp frequency rate. Assuming that the distance between the target and the radar is R, the echoed signal can be expressed as in equation (2), where τ_R = 2R/c is the echoed signal delay and g represents the target reflection coefficient. A dechirp signal processing technique is adopted in the developed imaging system to obtain the IF signal. That is, the IF signal is output by mixing the transmitted signal and the echoed signal, and it can be written as in equation (3), where πk_r τ_R² is the residual phase error (RVP) after dechirp. In general, this term can be ignored for imaging. Then, equation (3) can be simplified to equation (4), and with f = k_r t + f_c the expression can be rewritten as equation (5).
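For reference, a standard linear-chirp dechirp formulation that is consistent with the symbols defined above can be written as follows. This is the generic textbook form under the usual conventions, not a verbatim quotation of equations (1)-(5):

```latex
% Generic linear-chirp FMCW / dechirp model (assumed standard form)
\begin{aligned}
s_t(t) &= \exp\!\big[\, j\,(2\pi f_c t + \pi k_r t^2)\,\big], \qquad |t| \le T/2 ,\\
s_r(t) &= g\,\exp\!\big[\, j\,(2\pi f_c (t-\tau_R) + \pi k_r (t-\tau_R)^2)\,\big], \qquad \tau_R = 2R/c ,\\
s_{\mathrm{IF}}(t) &= s_r(t)\, s_t^{*}(t)
        = g\,\exp\!\big[ -j\,(2\pi f_c \tau_R + 2\pi k_r \tau_R t - \pi k_r \tau_R^2)\,\big] \\
       &\approx g\,\exp\!\Big[ -j\, \tfrac{4\pi R}{c}\,(f_c + k_r t)\Big]
        = g\,\exp(-j\,2kR), \qquad k = 2\pi f/c,\; f = f_c + k_r t .
\end{aligned}
```

Dropping the residual phase term πk_r τ_R² gives the final single-wavenumber form g·exp(-j2kR) that the range migration derivation in the next section starts from.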
Range Migration Algorithm for THz FMCW Radar Imaging
Though several algorithms have been developed for THz FMCW radar imaging, RMA in the wavenumber domain is widely used due to its higher efficiency. The imaging geometry and RMA for THz FMCW radar will be introduced in this section.
The imaging geometry of the developed imaging system is presented in Figure 3. The transceiver mounted on the planar scanner is controlled to move in a grid-like manner, which leads to the formation of a 2D rectangular synthetic aperture on the X′O′Y′ plane, which is parallel to the XOY plane.
For the convenience of expression, equation (5) is rewritten in wavenumber form, where R is the range between the measurement point (x′, y′, z₀) and the scattering center (x, y, z). Then, the sum of all received echoed signals at a measurement point (x′, y′, z₀) within the imaging area is given by an integral over P, where P denotes the imaging area and g(x, y, z) represents the reflection coefficient matrix of the target. Applying a two-dimensional Fourier transform to the received echoed signals along the scanning directions, where k_x and k_y represent the spatial frequencies in the X- and Y-directions, yields S₀(k_x, k_y, k) = ∫∫ e^(−j2kR) e^(−jk_x x′) e^(−jk_y y′) dx′ dy′, and S₀(k_x, k_y, k) can be solved in closed form. Considering that k_x² + k_y² + k_z² = 4k², equation (8) can be rewritten accordingly. After multiplying by the reference function, we obtain equation (11). It can be found from equation (11) that S₁(k_x, k_y, k) is the Fourier transform of g(x, y, z), so the reflection coefficient matrix g(x, y, z), which corresponds to the image of the target, can be derived by applying an inverse Fourier transform. However, the measured data are not uniformly distributed in the k_z domain due to the nonlinear conversion from k to k_z. Generally, the Stolt interpolation method is used to obtain uniformly distributed data in the k_z domain. Finally, the reflection coefficient matrix g(x, y, z) can be obtained. The THz FMCW SAR imaging algorithm based on RMA can be summarized as follows: (1) a 2D Fourier transform is applied to the data collected by planar scanning to obtain the wavenumber-domain formulation S(k_x, k_y, k); (2) the reference function e^(−j√(4k² − k_x² − k_y²) z₀) is multiplied in at the reference range z₀; (3) Stolt interpolation is performed to generate data that are uniformly distributed in the k_z domain; (4) finally, a 3D inverse Fourier transform is performed to produce the image of the target.
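A compact sketch of these four steps is given below. It is our own illustrative NumPy code, with made-up array names and a simple per-pixel resampling standing in for the Stolt interpolation; it is not the authors' implementation and omits windowing and calibration details.

```python
import numpy as np

def rma_image(s, x_step, y_step, k, z0, kz_grid):
    """s: IF data cube of shape (nx, ny, nk) from the planar scan.
    k: ascending wavenumber samples (2*pi*f/c) along the frequency axis."""
    nx, ny, _ = s.shape
    # (1) 2D FFT along the two scan directions -> S(kx, ky, k)
    S = np.fft.fftshift(np.fft.fft2(s, axes=(0, 1)), axes=(0, 1))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=x_step))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=y_step))
    KX, KY, K = np.meshgrid(kx, ky, k, indexing="ij")
    # (2) multiply by the reference function exp(-j*sqrt(4k^2 - kx^2 - ky^2)*z0)
    kz2 = 4 * K**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    S1 = S * np.exp(-1j * kz * z0) * (kz2 > 0)
    # (3) Stolt step: resample each (kx, ky) line from nonuniform kz to a uniform grid
    S_stolt = np.zeros(S1.shape[:2] + (len(kz_grid),), dtype=complex)
    for i in range(nx):
        for j in range(ny):
            valid = kz2[i, j] > 0
            if valid.sum() < 2:
                continue
            xs = kz[i, j, valid]
            S_stolt[i, j] = (np.interp(kz_grid, xs, S1[i, j, valid].real, left=0.0, right=0.0)
                             + 1j * np.interp(kz_grid, xs, S1[i, j, valid].imag, left=0.0, right=0.0))
    # (4) 3D inverse FFT back to the spatial domain -> reflectivity image g(x, y, z)
    return np.fft.ifftn(np.fft.ifftshift(S_stolt, axes=(0, 1)))
```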
Compressed Sensing Principle.
Compressed sensing is a signal processing technique which is able to realize the recovery of a sparse signal with fewer samples than required by the Nyquist sampling theorem. In view of the advantages of compressed sensing, this technique is investigated for THz FMCW SAR imaging to reduce the requirement for data sampling and to increase the imaging speed. Suppose g is a discrete signal with a length of N in the time domain; it can be represented linearly by a set of orthonormal basis vectors as g = Ψx, where Ψ = [Ψ_1, . . . , Ψ_N] is the sparse transformation basis and x = [x_1, . . . , x_N] are the weighting coefficients of g, satisfying x_i = Ψ_i^T g. It can be seen from equation (13) that x is an equivalent representation of g. If there are only K nonzero elements in x, then x is the K-sparse representation of the signal g, and the signal sparsity is K.
Generally, the received THz FMCW radar signal is nonsparse in the time domain, so it is necessary to transform the nonsparse time-domain signal into a sparse transform domain. The Fourier transform is employed for the signal transform operations in this paper. Compressed sampling is realized with a measurement matrix ϕ which projects the high-dimensional signal onto a low-dimensional space, y = ϕg = ϕΨx = Ax, where y is the vector of measurements of the original high-dimensional signal g under a random matrix ϕ, Ψ is the sparse basis matrix, and A = ϕΨ is the sensing matrix with dimension M × N (M ≪ N). The N-dimensional signal x can be recovered from the M-dimensional measurement data through signal reconstruction, which is realized by solving the L0-norm minimization problem min ‖x‖_0 subject to y = Ax. Though this minimum L0-norm problem is NP-hard and cannot be solved directly, an optimal solution can be approached with greedy search or convex optimization algorithms.
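To make the sampling model concrete, the toy sketch below builds y = Ax for a K-sparse x and recovers it. The sizes, the Gaussian random sensing matrix, and the use of a hand-rolled orthogonal matching pursuit as the solver are our own illustrative choices; the paper's actual reconstruction method (NSL0) is presented in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                      # signal length, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix (phi * Psi)
y = A @ x_true                                 # compressed measurements

def omp(A, y, k):
    """Plain orthogonal matching pursuit: pick k atoms greedily, least-squares refit."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = sol
    return x_hat

x_hat = omp(A, y, K)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # ~0 for noiseless data
```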
THz FMCW Radar Imaging Algorithm Based on Compressed Sensing.
Because the recovery accuracy of greedy search reconstruction algorithms such as orthogonal matching pursuit (OMP) [14,15], stage-wise orthogonal matching pursuit (StOMP) [16], regularized orthogonal matching pursuit (ROMP) [17], and compressive sampling matching pursuit (CoSaMP) [18] is poor at low signal-to-noise ratio (SNR), an improved smoothed L0-norm minimization (SL0) algorithm based on convex optimization is presented in this paper.
For the SL0 algorithm, the objective function is defined as in equation (16) [19,20], where F_σ(x) is a smoothed function which can be regarded as ‖x‖_0 when σ is close to 0. The smoothed function F_σ(x) will approximate the optimal solution when a suitable σ is chosen, and F_σ(x_i) is defined accordingly. Compared with the Gaussian smoothed function, the presented smoothed function gives a better performance in signal reconstruction, as it leads to a closer approximation to ‖x‖_0. The steepest descent algorithm is commonly applied to solve equation (16). However, it is difficult to estimate the optimal search step in that algorithm, and this leads to a slower convergence speed. A revised Newton method [21] is therefore utilized in this paper to solve the optimization problem more efficiently. The Newton direction is revised by regularizing the matrix G with a term ε_k I, where I is the identity matrix and ε_k is positive so that the diagonal values of G remain positive; ε_k is set accordingly, and the revised Newton direction can then be written as in equation (20).
Table 2 summarizes the realization steps of the presented NSL0 algorithm: choose a suitable decreasing sequence for σ (σ_1, σ_2, . . . , σ_j, with σ_j = βσ_{j−1}, where β (0 < β < 1) is the decreasing factor); for each σ_j, iterate the revised Newton update and project the iterate back toward the constraint, letting r = y − Ax_n^j and stopping the inner loop when ‖r − r_0‖ < e (otherwise setting r_0 = r and x^j = x_n^j); the final answer is x = x^j. Based on the above sparse signal reconstruction algorithm, the 3D imaging algorithm for THz FMCW SAR based on compressed sensing proposed in this paper can be summarized as follows: (1) design a measurement matrix ϕ which is able to meet the requirement for data acquisition; (2) collect the THz FMCW SAR echoed signal at the positions given by the designed measurement matrix to obtain the measurement signal y; (3) reconstruct the original signal g from the sparse measurement data using the presented NSL0 algorithm; (4) apply a 2D Fourier transform to the reconstructed signal g; (5) perform reference function multiplication (RFM); (6) perform Stolt interpolation; (7) perform a 3D inverse Fourier transform to generate the image. The proposed THz FMCW SAR imaging algorithm based on compressed sensing is shown in Figure 4.
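As a rough illustration of how a smoothed-L0 reconstruction of this kind proceeds, the sketch below implements the standard SL0 scheme (Gaussian surrogate, shrink step followed by projection back onto y = Ax, with σ annealed toward zero). It is our own simplified stand-in, not the authors' NSL0 with the revised Newton direction, and the parameter values are arbitrary defaults.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, beta=0.6, inner_iters=3, mu=2.0):
    """Standard smoothed-L0 recovery: approximately minimize ||x||_0
    subject to A x = y by annealing a Gaussian surrogate of the L0 norm."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-L2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # shrink small coefficients toward zero (ascent on the Gaussian surrogate)
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # project back onto the measurement constraint A x = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= beta                    # anneal sigma toward 0
    return x

# Reusing A, y, x_true from the sampling sketch above:
# x_hat = sl0(A, y); print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```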
Experimental Result
Simulation and measurement experiments are performed to verify the presented THz FMCW SAR imaging algorithm in this paper.
Point Targets Simulation.
e THz FMCW SAR imaging model is built with MATLAB, and six-point targets are simulated for imaging. e coordinates of targets are (0.2m, 0m, 0m), (− 0.2m, 0m, 0m), (0m, 0.2m, 0m), (0m, − 0.2m, 0m), (0m, 0m, 0.2m), and (0m, 0m, − 0.2m). e imaging radar system parameters used in the simulation are listed in Table 3. e measurement matrix used in the simulation is a sparse random matrix, and the sparse sampling rate is 50%. e sparse simulated data are firstly recovered with compressed sensing. en, the recovered signal is processed with RMA to produce a 3D image as shown in Figure 5(a). Also, the imaging results with the full data using RMA are presented in Figure 5(b) for comparison.
It can be seen from the figures that the produced 3D images are almost the same, which shows that the presented THz FMCW SAR imaging algorithm works well with the sparse measurement. In the measurement experiment, the targets are placed in front of the scanning plane, and the scanning area is 151 mm × 151 mm. The planar scanner moves in a "stop-go-stop" manner with a scanning step of 1 mm. The collected full data are processed first to reconstruct the image of the target with RMA. The reconstructed images of a disc and a pair of scissors, shown in Figure 6, are presented in Figure 7. Then, 10%, 20%, 30%, and 50% of the collected data are extracted according to the measurement matrix ϕ. The extracted sparse data are then processed with the proposed compressed sensing imaging algorithm. The reconstructed images under different sparsity rates are shown in Figures 8 and 9.
It can be seen from the experimental results that the reconstructed 3D image quality is poor at the 10% data sparsity rate, and it is difficult to identify the specific targets. However, the images can be reconstructed well even at a 20% data sparsity rate, and more data result in a better image. A reconstruction error ε is also introduced to evaluate the quality of the images reconstructed with different reconstruction algorithms, where g′(x, y, z) is the reconstructed signal and g(x, y, z) denotes the original signal. A larger ε denotes a larger deviation of the reconstructed signal from the original signal and thus a poorer performance of the reconstruction algorithm. The reconstruction error comparisons between SL0 and NSL0 under different sparsity rates are listed in Table 4, and the calculation times of the algorithms are presented in Table 5. The CPU is an Intel Core i5-4210M @ 2.6 GHz, and the memory is 8 GB.
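A common way to define such an error, consistent with the description above (this exact form is our assumption rather than a quotation of the paper's equation), is the relative L2 reconstruction error:

```latex
% Assumed form of the reconstruction error (relative L2 norm)
\varepsilon \;=\; \frac{\left\lVert g'(x,y,z) - g(x,y,z) \right\rVert_2}{\left\lVert g(x,y,z) \right\rVert_2}
```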
The above results show that the presented NSL0 reconstruction algorithm has a smaller error and a faster calculation speed compared with the SL0 reconstruction algorithm. This is attributable to the use of Newton's method in the NSL0 algorithm.
Conclusions
An algorithm for THz FMCW SAR imaging based on compressed sensing is investigated in this paper. e developed 220 GHz FMCW imaging radar system is introduced, and the signal model is built firstly. RMA for the developed THz FMCW SAR is then derived. Compressed sensing is described, and the NSL0 reconstruction algorithm is presented to reconstruct signal with sparse samples. And the algorithm based on compressed sensing for the developed THz FMCW SAR is summarized. Experiments are performed to verify the presented imaging algorithms. e experimental results show that it is able to reconstruct the image well even at the sparsity rate of 20%. e presented 3D imaging algorithm for the THz FMCW imaging radar system can improve the imaging efficiency by reducing the requirements for spatial data acquisition. e developed 220 GHz FMCW SAR imaging system has been used for nondestructive testing of composite materials in aerospace and critical structural applications. As only a single transceiver is integrated in the system, it must employ a grid-like mechanical scanning to cover an area and results in a high time cost. e imaging system can be upgraded to multiple transceivers which will have a faster imaging speed. e proposed compressed sensing image reconstruction algorithm can be also applied for sparse transceiver array configuration which is able to achieve a lower imaging system cost by reducing THz transceivers.
Data Availability
The data used to support this study are included within this article as tables. If there is a need for any other information, the corresponding author may be contacted by e-mail.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 2021-09-25T15:45:33.790Z | 2021-08-26T00:00:00.000 | {
"year": 2021,
"sha1": "3801d90ce5246902eaa0acecff6d68291add59e3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2021/5576782.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "57241f61f723f9ebdd3e9290ee8936ea8333c8cb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250195102 | pes2o/s2orc | v3-fos-license | Detection of mecA gene and methicillin-resistant Staphylococcus aureus (MRSA) isolated from milk and risk factors from farms in Probolinggo, Indonesia
Background: Staphylococcus aureus is commonly found in dairy cows and is a source of contamination in milk. S. aureus that are resistant to beta-lactam antibiotics (especially cefoxitin) are referred to as methicillin-resistant Staphylococcus aureus (MRSA). The spread of MRSA cannot be separated from sanitation management during milking; it can originate from milk collected from the udder or from the hands of farmers during the milking process. The purpose of this study was to examine the level of MRSA contamination in dairy cow's milk and farmer's hand. Methods: A total of 109 samples of dairy cow’s milk and 41 samples of farmer’s hand swabs were collected at a dairy farm in Probolinggo, East Java, Indonesia. Samples were cultured and purified using mannitol salt agar (MSA). The profile of S. aureus resistance was established by disk diffusion test using a disk of beta-lactam antibiotics, namely oxacillin and cefoxitin. Results: The S. aureus isolates that were resistant to oxacillin and cefoxitin antibiotics were then tested for oxacillin resistance screening agar base (ORSAB) as a confirmation test for MRSA identity. S. aureus isolates suspected to be MRSA were then tested genotypically by polymerase chain reaction (PCR) method to detect the presence of the mecA gene. The results of the isolation and identification found 80 isolates (53.33%) of S. aureus. The results of the resistance test found that 42 isolates (15%) of S. aureus were resistant to oxacillin and 10 isolates (12.5%) were resistant to cefoxitin. The ORSAB test found as many as 20 isolates (47.62%) were positive for MRSA. In PCR testing to detect the presence of the mecA gene, three isolates (30%) were positive for the mecA gene. Conclusions: This study shows that several S. aureus isolates were MRSA and had the gene encoding mecA in dairy farms.
Introduction
Staphylococcus aureus is a pathogenic bacteria that can cause public health problems, because these bacteria often contaminate products of animal origin, including milk or commonly known as milk-borne disease (MBD). 1 This opportunistic bacterial pathogen that can be found in animals and humans. This bacterium can cause various diseases ranging from mild to systemic skin infections such as pneumonia, arthritis, and meningitis. [2][3][4] In previous studies, S. aureus was mostly transmitted to humans through contaminated milk. 5 S. aureus is commonly found on the skin and mucosa of livestock, especially dairy cows with subclinical or clinical mastitis, which is a source of contamination in milk. 6 If these bacteria are resistant to beta-lactam antibiotics is referred to as methicillin-resistant S. aureus (MRSA). 7 It has been noted in earlier investigations that MRSA can result in new health issues for both people and animals. 8 The high rate of MRSA contamination in dairy farms due to excessive administration of antibiotics in the treatment of dairy cows and the spread of these bacteria cannot be separated from sanitation management during milking. 3 Contamination can happen from milk that is collected from the udder as well as from the hands of farmers during the milking process. 9 The Probolinggo Regency, specifically in Krucil District, is one of the largest milk-producing centers in Indonesia. 10 Antibiotics have been widely used as treatment in cases of infection in dairy cattle in Probolinggo, especially in cases of mastitis, so contamination by MRSA in dairy farms in Probolinggo 11 is possible.
S. aureus evolved into strain MRSA because it received the insertion of a large DNA element between 20-100 kb called staphylococcal cassette chromosome mec (SCC mec), that underlies the change in normal penicillin-binding protein (PBP), namely PBP2 to PBP2a. 12 PBP2a is expressed by the gene encoding mecA contained in SCC mec which has a very low affinity for beta-lactams, so that event cultured on media containing high concentrations of beta-lactams, MRSA survives. 13 Molecular detection of the mecA gene using polymerase chain reaction (PCR) is often carried out to confirm the presence of MRSA isolates, but cannot be done in all laboratories because of the ability and cost constraints. 14 Constraints in the use of PCR can be replaced by examining MRSA using the disk diffusion method with the antibiotics oxacillin and cefoxitin, which is then continued with an examination using oxacillin resistance screening agar base (ORSAB). 15 The purpose of this study was to examine the level of MRSA contamination in dairy cow's milk and farmer's hand in Probolinggo, Indonesia, as well as to compare phenotypic detection methods using screening with oxacillin and cefoxitine diffusion disks, ORSAB, and confirming genotypes using PCR to detect mecA-coding genes. The sensitivity and specificity of the test show the effectiveness and ease of application of the MRSA detection method.
Sampling
Milk samples were taken from the udders of female cows who were in lactation period, while the samples of farmer's hand swabs were taken from farmers who were milking. The sample size in this study refers to the formula used by Regasa et al. 16 in the study of the milk safety assessment of Staphylococcus aureus as follows:
where P is the expected prevalence of 4.8% 17 and d is the desired absolute precision (4%). Based on these calculations, 109 milk samples were obtained, with dairy cooperatives selected purposively based on the amount of milk production in an area and the willingness of the dairy cooperatives to participate in the study. Meanwhile, the number of farmer hand swab samples was adjusted to the number of dairy cows owned by each farmer in the dairy cooperative area, such that 41 farmers were sampled for the 109 cows.
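For reference, the standard single-proportion sample-size formula consistent with these inputs (this exact form is our assumption, not a quotation of Regasa et al.) evaluates to roughly the reported figure:

```latex
% Standard sample-size formula for estimating a single proportion (assumed form)
n \;=\; \frac{Z_{1-\alpha/2}^{2}\, P\,(1-P)}{d^{2}}
\;=\; \frac{1.96^{2} \times 0.048 \times 0.952}{0.04^{2}}
\;\approx\; 110
```

With P = 0.048 and d = 0.04 this works out to about 109.7, in line with the 109 milk samples reported.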
A total of 109 samples of dairy cow's milk and 41 samples of farmer's hand swabs were collected at a dairy farm in the Probolinggo region, East Java, Indonesia from July to September 2021. Dairy cow's milk samples were taken from each cow in the third press as much as 30 ml which was then stored in a 60 ml sample bottle; the farmer's hand swab samples were taken from each farmer after the milking process using a sterile cotton swab which was then stored on Amies medium.
Bacteria isolation and identification
As much as 1 ml of each milk sample was put into a 20 ml test tube filled with 9 ml of Mannitol Salt Broth (MSB) medium while for hand swab samples, the Amies medium was vortexed until it became liquid and then 1 ml was added into a 20 ml test tube which has been filled with 9 ml of MSB media. The test tube containing MSB which had been mixed with the sample was incubated in an incubator (Isuzu Model 2-2195, Jica) at 37°C for 24 hours. The samples were cultured and purified using Mannitol Salt Agar (MSA) (Oxoid CM0085) and then incubated at 37°C for 24 hours.
Microscopic examination of bacteria was done through Gram staining to visualise Gram-positive bacteria in the form of cocci and clusters. 18 The biochemical examination was carried out using a catalase test and a coagulase test. The catalase test was carried out by dripping 3% hydrogen peroxide (H 2 O 2 ) on bacterial colonies that had been placed on the surface of the glass. 19 The coagulase test was carried out by dripping 200 μl of rabbit plasma into a coagulase test tube containing bacterial colonies, which was then incubated at 37°C for 24 hours. 20
Oxacillin and cefoxitin disk diffusion methods
The test was carried out following the Clinical and Laboratory Standards Institute (CLSI) 2020 guidelines: S. aureus was tested for susceptibility to the antibiotics oxacillin 1 μg and cefoxitin 30 μg (Oxoid) on Muller Hinton Agar (MHA) plates (Oxoid, CM0337). The identified isolates were purified on mannitol salt agar (HiMedia Pvt. Ltd., M118) and incubated at 37°C for 24 hours. Using a sterile cotton swab (AKD 10903610549), standardized isolates (0.5 McFarland standard) were evenly streaked on the surface of the MHA medium (Oxoid, CM0337). The oxacillin (1 μg) and cefoxitin (30 μg) antibiotic disks were placed side by side with a distance of 50 mm on MHA that had been inoculated with isolates, and then incubated at 37°C for 24 hours to measure the inhibition zone.
Oxacillin resistance screen agar test
S. aureus isolates resistant to oxacillin 1 μg and cefoxitin 30 μg (Oxoid) were confirmed by ORSAB (HiMedia M1415), using S. aureus isolates from the MHA media, plus Oxacillin Resistance Selective Supplement (HiMedia Pvt. Ltd., FD191). 21
Detection of the mecA gene
All S. aureus isolates that were resistant to cefoxitin 30 μg and positive on ORSAB examination were then subjected to a PCR test to detect the presence of the mecA gene. 22 The DNA extraction process was carried out according to the QIAamp DNA Mini Kit protocol (51304 & 51306); the isolates were previously purified on MSA (HiMedia Pvt. Ltd., M118) and inoculated on MHA (Oxoid, CM0337). The primers used were mecA F: 5′-AAA ATC GAT GGT AAA GGT TGG C-3′ and mecA R: 5′-AGT TCT GCA GTA CCG GAT TTG C-3′. 23 The PCR master mix used was GoTaq Green Master Mix (Promega, 9PIM712), a ready-to-use solution containing Taq DNA polymerase, dNTPs, MgCl2, and a reaction buffer. DNA was amplified using a Thermal Cycler T100 machine (Bio-Rad, 186-1096) for 40 cycles in 25 μl of the reaction mixture with the following steps: denaturation at 94°C for 30 seconds, annealing at 55°C for 30 seconds, and extension at 72°C for 1 min, with a final extension at 72°C for 5 min. A total of 10 μl of the PCR product was analyzed by 2% agarose gel electrophoresis, and the gel was visualized under ultraviolet light. 24 A positive test was indicated by a PCR product at the 533-base pair (bp) band.
Result
The results of the isolation and identification tests yielded 80 (53.33%) S. aureus isolates from 150 samples taken at a dairy farm in Probolinggo, East Java, Indonesia. The 80 isolates that were positive for S. aureus consisted of 54 isolates from dairy cow's milk samples and 26 isolates from farmer's hand swab samples as shown in Table 1. S. aureus had phenotypic colony characteristics on MSA medium, namely a change in color in the medium from red to golden-yellow indicating mannitol fermentation, while the colonies had various pigments including white, golden, and yellow as shown in Figure 1. The Gram staining test showed the Gram-positive colonies in the form of cocci and clusters as shown in Figure 2, which were then confirmed by the catalase test and coagulase test as shown in Figures 3 and 4. 19 The disk diffusion method on MHA medium showed that 42 isolates exhibited resistance to oxacillin preparations, with a percentage of 52.5% (28 isolates came from dairy cow's milk samples and 14 isolates came from farmer's hand swab sample); on the other hand, 10 isolates showed resistance to cefoxitin, with a percentage of 12.5% (five isolates came from dairy cow's milk samples and five isolates came from farmer's hand swab samples) as shown in Table 2 and Figure 5. No S. aureus isolate was found to simply be resistant to cefoxitin, according to the disc diffusion test results, and all isolates that were found to be resistant to cefoxitin were also found to be resistant to oxacillin, as shown in Table 3.
Confirmation of the phenotypic resistance to oxacillin and cefoxitin was followed by the ORSAB test, with a blue culture coloration indicating a positive result and a white coloration indicating a negative result. The ORSAB test showed that, of the 42 isolates of S. aureus that were resistant to oxacillin by the disk diffusion method, 20 isolates (47.62%) were confirmed as MRSA, as shown in Table 4.
S. aureus isolates suspected to be MRSA (Phenotypically resistant to cefoxitin and positive for ORSAB) were then tested genotypically using PCR to detect the presence of the gene encoding mecA. A total of 10 isolates suspected to be MRSA were tested, from which three isolates (30% of the total isolates tested by PCR) were detected positive for the mecA gene, as shown in Figure 6. The results of the PCR test showed that isolates suspected to be MRSA were found to have the mecA gene, which is resistant to the antibiotics cefoxitin and oxacillin, as shown in Table 3.
Discussion
MBD is quite a common public health problem, because it not only has an impact on human health, also has an impact on the health of dairy cows, especially in the milk production and quality sector. 25 Several previous studies have reported that the incidence of contaminated milk by S. aureus resistant to antibiotics is found in both developed and developing countries. 26 Improper and unhygienic handling of milk, especially during the milking process, plays an important role in the occurrence of milk contamination. 27 33 showed that the difference in the number of isolates found could be influenced by differences in study design such as population and geographic distribution of the sample, infection control practices, and the type of antibiotic used, as seen in Figure 6.
The problem of the incidence of S. aureus infection continues to grow with the emergence of MRSA, which is resistant to all beta-lactam antibiotics, including monobactams and cephalosporins, which are a group of antibiotics often used to treat Staphylococcus infections. 34 MRSA infection causes treatment problems and facilitates its spread, so prompt and early diagnosis is needed to identify MRSA accurately. 35 In this study, 42 samples (52.5%) of S. aureus were found to be resistant to oxacillin disks, and 10 samples (12.5%) to cefoxitin disks. Miragaia 36 stated that the phenotypic detection of MRSA using disk diffusion still has not shown accurate results, and mecA genotyping using PCR is still the main recommendation even though it cannot be done routinely. However, even so, identification of MRSA with disk diffusion is still widely used because it can be done quickly and at a lower cost. 37 Diffusion disks using oxacillin and cefoxitin have the same sensitivity level of 100%, and specificities of 74.07% for oxacillin and 92.59% for cefoxitin. 38 However, several previous studies reported that the use of the cefoxitin disk diffusion method had a better sensitivity level than that of oxacillin in detecting MRSA, because the oxacillin disk diffusion method still has a high false positive rate. 39 Vyas et al. 38 stated that false positives could be influenced by beta-lactamase hyperproduction, resulting in the phenotypic expression of oxacillin resistance but without a genotypic resistance mechanism.
In this study, all isolates detected were resistant to the cefoxitin and oxacillin disks. All isolates detected to be resistant to oxacillin and cefoxitin were confirmed by the ORSAB assay, in line with a report by Pourmand et al., 40 which stated that the ORSAB test has a specificity of 100%. In this study, 20 of the 42 isolates (47.62%) were found to be positive for MRSA. The sensitivity level confirms the resistant strain being tested, while the specificity relates to the minimum inhibitory concentration (MIC). 41 Cefoxitin-resistant and ORSAB-positive S. aureus isolates were tested genotypically using PCR to detect the presence of the gene encoding mecA; these isolates also had positive results in all phenotypic methods (resistance to cefoxitin and oxacillin in the disk diffusion method and positive results in the ORSAB test).
• Sentence 2: "The purpose of this study was to examine the level of MRSA contamination in dairy cow's milk and farmer's hand swabs." Comment 2: Authors should delete the word "swabs" in the sentence because what is being actually assessed are the hands of the farmers. The swab is just a tool used to collect the sample.
Comment 3:
The keyword "Swab's hand" should be changed to "hand swabs" in the list of keywords.
Introduction:
The introduction was generally very good. I will suggest that the authors make a change in the last paragraph of this section: Last paragraph of introduction: The purpose of this study was to examine the level of MRSA contamination in dairy cow's milk and farmer's hand swab in Probolinggo, Indonesia, as well as to compare phenotypic detection methods using screening with oxacillin and cefoxitine diffusion disks, ORSAB, and confirming genotypes using PCR to detect mecAcoding genes.
Comment: I think the authors should remove the word "swab" as what is being actually assessed are the farmers' hands, just like I mentioned in my earlier suggestion in the abstract section.
Methods:
The methodology was well-detailed except for some important technical corrections which I have suggested:
Oxacillin and cefoxitin disk diffusion methods
The test was carried out following the Clinical and Laboratory Standards Institute (CLSI) 2020 guidelines: S. aureus was tested for susceptibility to the antibiotics oxacillin 30 μg and cefoxitin 30 μg (Oxoid) on Muller Hinton Agar (MHA) plates (Oxoid, CM0337). The identified isolates were purified on mannitol salt agar (HiMedia Pvt. Ltd., M118), incubated at 37°C for 24 hours as a 0.5 McFarland suspension, and then taken using a sterile cotton swab of size S (AKD 10903610549). They were then wiped evenly on the surface of the MHA medium (Oxoid, CM0337). Disk. The oxacillin 30 μg and cefoxitin 30 μg antibiotic disks were placed side by side with a distance of 5 cm on MHA that had been inoculated with isolates, and then incubated at 37°C for 24 hours to measure the inhibition zone.
Comment 1:
Authors should correct the concentration of oxacillin antibiotic disc to 1 μg because oxacillin disc concentration from Oxoid, UK is 1 μg while that of cefoxitin is correct at the 30 μg indicated. I think this might have been an oversight during the writing of the manuscript.
○ Comment 2: Authors should take note of the bolded sections in the sentence and make corrections as I indicated below for the sentence to be more comprehensive and understandable. Also, 5cm is the same as 50mm, so it is preferable to indicate that the distance between the oxacillin and cefoxitin antibiotics was 50 mm instead of 5 cm since distance units in the CLSI charts are in mm. As I mentioned earlier, the sentence in the last section should be written as: "The identified isolates were purified on mannitol salt agar (HiMedia Pvt. Ltd., M118) and incubated at 37°C for 24 hours. Using a sterile cotton swab (AKD 10903610549), standardized isolates (0.5 McFarland standard) were evenly streaked on the surface of the MHA medium (Oxoid, CM0337). The oxacillin (1 μg) and cefoxitin (30 μg) antibiotic disks were placed side by side with a distance of 50 mm on MHA that had been inoculated with isolates, and then incubated at 37°C for 24 hours to measure the inhibition zone."
Comment 3:
The concentration of all the oxacillin discs in the manuscript should be changed to 1 μg.
Results:
The results are very clear and understandable. Data were properly interpreted and comprehensive. However, I suggested some important changes and corrections: Comment 1: The colour of S. aureus on mannitol salt agar (MSA) is golden-yellow. I will suggest authors use this all through the manuscript.
○ Sentence: "Based on the results of the disk diffusion test, no S. aureus isolate was to only be resistant to cefoxitin: all S. aureus isolates that were detected to be resistant to cefoxitin were also identified as resistant to oxacillin as shown in Table 3." Comment 2: I suggest that authors should rephrase this sentence to be more understandable.
Comment 3: I suggest that the authors delete the column "mecA detection using PCR" in Table 3 as it is empty and serves no function since the last column is already indicating the total isolates that harboured the mecA gene.
Discussion:
The discussion is good but needs some critical changes in some confusing sentences which I have suggested below: Sentence: ..contamination; this percentage is higher than the research conducted by Wang et al. 27 which isolated 195 milk samples, of which 90 samples (46.15%) were contaminated with S. aureus, and from another study conducted by Jahan et al. 28 who isolated 47 milk samples, of which 12 (25.53%) were contaminated with S. aureus.
Comment 1:
There is a mix-up in the sentence above. The sentence is stating that milk samples were isolated while what was actually isolated was the S. aureus from the milk samples. I will suggest that authors should re-write this section as ": ..contamination; this percentage is higher than the research conducted by Wang et al. "The Gram staining test showed the Gram-positive colonies in the form of cocci and clusters, which were then confirmed by the catalase test and coagulase test" -Change the line and add both biochemical test results (+/-). | 2022-07-02T15:11:27.411Z | 2022-06-30T00:00:00.000 | {
"year": 2022,
"sha1": "c73744f040239cee5247c6a96f491871457b5931",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/11-722/v3/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "536377e5337ead9211828cce79cea29dd3c44ed2",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
119542096 | pes2o/s2orc | v3-fos-license | Erratum: Detection of diffuse TeV gamma-ray emission from the nearby starburst galaxy NGC 253
The CANGAROO-II telescope observed sub-TeV gamma-ray emission from the nearby starburst galaxy NGC 253. The emission region was extended with a radial size of 0.3-0.6 degree. On the contrary, H.E.S.S could not confirm this emission and gave upper limits at the level of the CANGAROO-II flux. In order to resolve this discrepancy, we analyzed new observational results for NGC 253 by CANGAROO-III and also assessed the results by CANGAROO-II. Observation was made with three telescopes of the CANGAROO-III in October 2004. We analyzed three-fold coincidence data by the robust Fisher Discriminant method to discriminate gamma ray events from hadron events. The result by the CANGAROO-III was negative. The upper limit of gamma ray flux was 5.8% Crab at 0.58 TeV for point-source assumption. In addition, the significance of the excess flux of gamma-rays by the CANGAROO-II was lowered to less than 4 sigma after assessing treatment of malfunction of photomultiplier tubes.
Introduction
NGC 253 is a nearby (d = 2.5 Mpc) (de Vaucouleurs, 1978), normal spiral, starburst, and edge-on galaxy. Starburst galaxies are generally expected to have cosmic-ray energy densities about a hundred times larger than that of our Galaxy (Voelk et al., 1989) due to the high rates of massive star formation and supernova explosions in their nuclear regions. The star-formation rates can be estimated from the far-infrared (FIR) luminosities, and the supernova rates can also be inferred based on the assumption of an initial mass function. Since the supernova rate of NGC 253 is estimated to be about 0.05 - 0.2 yr−1 (Mattila and Meikle, 2001; Antonucci and Ulvestad, 1988; van Buren and Greenhouse, 1994), a high cosmic-ray production rate is expected in this galaxy. Although there were no detections of non-thermal X-rays or GeV gamma-rays yet, an extended synchrotron-emitting halo of relativistic electrons was observed (Carilli et al., 1992). The halo extends to a large-scale height, where inverse Compton scattering (ICS) may be a more important process for gamma-ray production than pion decay and bremsstrahlung. The seed photons for ICS are expected to be mainly FIR photons up to a few kpc from the nucleus, and cosmic microwave background radiation at larger distances.
In 2002 CANGAROO-II reported on the detection of diffuse TeV gamma-rays in the direction of NGC 253 (Itoh et al. 2002, 2003b). The estimated size was 0.3 ∼ 0.6 degrees in radius. The emission was later interpreted as halo-like (Itoh et al., 2003a). H.E.S.S., however, claimed null results on them (Aharonian et al. 2005). The upper limits were marginal (actually H.E.S.S.'s upper limits crossed over with CANGAROO-II's fluxes around TeV). The main purpose of this report is to clarify this. H.E.S.S. also discussed calorimetric gamma-ray emission at the very central region of this galaxy in that report. A point-source search at the center of this galaxy is therefore also carried out.
For this purpose, we observed NGC 253 with the CANGAROO-III telescope in October 2004. In this paper we describe the results of this observation with three-telescope coincidence. The responsibility for this part (Sections 2-4) is taken by the authors of Enomoto et al. (2006b). A discussion of the previous CANGAROO-II analysis is also included.
Observation
CANGAROO-III is one of two major imaging atmospheric Cherenkov telescopes located in the southern hemisphere. The CANGAROO-III stereoscopic system consists of four imaging atmospheric Cherenkov telescopes located near Woomera, South Australia (31°S, 137°E). Each telescope has a 10-mφ reflector. Each reflector consists of 114 segmented spherical mirrors (80 cm in diameter with a radius of curvature of 16.4 m) made of FRP (Kawachi et al. 2001) mounted on a parabolic frame (f/d = 0.77, i.e., a focal length of 8 m). The total light collection area is 57.3 m². The first telescope, T1, which was the CANGAROO-II telescope (Itoh et al. 2003b), is not presently in use due to its smaller field of view and higher energy threshold. The second, third, and fourth telescopes (T2, T3, and T4) were used for the observations described here. The camera systems for T2, T3, and T4 are identical and their details are given in Kabuki et al. (2003). The telescopes are located at the east (T1), west (T2), south (T3) and north (T4) corners of a diamond with sides of ∼100 m (Enomoto et al. 2002b).
The observations were carried out in the period from 2004 October 7 to 17 using "wobble mode", in which the pointing position of each telescope was shifted in declination between ±0.5 degree from the center of the galaxy (RA, dec = 11.888°, −25.288°, J2000) every 20 minutes (Daum et al. 1997). Data were recorded for T2, T3 and T4 when more than four photomultiplier (PMT) signals exceeded 7.6 photoelectrons (p.e.) in any telescope. The GPS time stamp was recorded in each telescope dataset. An offline coincidence of time stamps within ±100 µs (Enomoto et al. 2006a) was required for a stereo event. The typical trigger rate for each telescope was 80 Hz, which was reduced to 10 Hz for stereo events with three-fold coincidence. Each night was divided into two or three periods, i.e., ON-OFF, OFF-ON-OFF, or OFF-ON observations. Note that the OFF-source observations were also made in "wobble mode". This was done because we had previously claimed the detection of a diffuse source. ON-source observations were timed to contain the meridian passage of the target. On average the OFF source regions were located with an offset in RA of +30° or −30° from the center of the galaxy. The total observation time was 1179 and 753 min, for ON and OFF observations, respectively.
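As a rough illustration of how such an offline time-stamp coincidence can be formed, the sketch below pairs events from two telescopes whose GPS time stamps agree within ±100 µs using a two-pointer scan; the timestamps, rates, and function names are invented for illustration and are not taken from the CANGAROO data-acquisition software (a three-fold coincidence would simply chain a second pairing).

```python
import numpy as np

def stereo_coincidences(t_a, t_b, window=100e-6):
    """Pair events from two telescopes whose sorted GPS time stamps (seconds)
    differ by less than `window`; returns index pairs (i, j)."""
    pairs, j = [], 0
    for i, ta in enumerate(t_a):
        # advance the second pointer until t_b[j] could still be within the window
        while j < len(t_b) and t_b[j] < ta - window:
            j += 1
        if j < len(t_b) and abs(t_b[j] - ta) <= window:
            pairs.append((i, j))
    return pairs

# toy event streams: ~80 single-telescope triggers, a subset seen by both telescopes
t2 = np.sort(np.random.uniform(0.0, 1.0, 80))
t3 = np.sort(t2[::8] + np.random.uniform(-5e-5, 5e-5, len(t2[::8])))
print(stereo_coincidences(t2, t3))
```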
Next we required the images in all three telescopes to have clusters of at least five adjacent pixels exceeding a 5 p.e. threshold (three-fold coincidence). The event rate was reduced to ∼6 Hz by this criterion. Looking at the time dependence of these rates, we can remove data taken in cloudy conditions. This procedure is the same as the "cloud cut" used in the CANGAROO-II analysis (Enomoto et al. 2002a). We also rejected data taken at elevation angles less than 70 • . In total, 750 min. data survived these cuts for ON and 517 min. for OFF, with a mean elevation angle of 78.6 • .
The light collecting efficiencies, including the reflectivity of the segmented mirrors, the light guides, and the quantum efficiencies of the photomultiplier tubes, were monitored by a muon-ring analysis (Enomoto et al. 2006a). The light yield per unit arc-length is approximately proportional to the light collecting efficiencies. The ratios of these at the observation period with respect to the mirror production times (i.e., deterioration factors) were estimated to be 45, 55, and 73% for T2, T3, and T4, respectively. The measurement errors are considered to be at less than the 5% level. These values were checked by analyzing Crab data obtained in November 2004, as described in Enomoto et al. (2006b). The deteriorations were mostly due to dirt and dust settling on the mirrors. We cleaned the mirrors with water in October 2005, and a partial improvement (a factor of 1.3-1.4) of the light collecting efficiencies was observed.
Analysis
The analysis procedures used were identical to those described in Enomoto et al. (2006a, 2006b), so we omit a detailed discussion here. At first, the Hillas parameters (Hillas 1985) were calculated for the three telescopes' images. The gamma-ray incidence directions were adjusted by minimizing the sum of squared widths (weighted by the photon yield) of the three images seen from the assumed position (fitting parameter). Then the Fisher Discriminant (hereafter FD in short) (Fisher 1936) is calculated. The input parameters are the energy-corrected widths and lengths for T2, T3, and T4.
Since we have FD distributions for OFF-source data and the Monte-Carlo gamma-ray events, we can assume these are background and signal behaviors. We, therefore, can fit the FD distribution of ON with the above emulated signal and real background functions, to derive the number of signal events passing the selection criteria. With this fit, we can determine the gamma-ray excess without any positional subtractions, i.e., appropriate for diffuse radiations. This is a two-parameter fitting and these coefficients can be exactly derived analytically.
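A minimal sketch of such a two-template fit is given below: the ON-source FD histogram is modeled as a·(signal template) + b·(background template) and solved by linear least squares, whose normal equations have the analytic solution mentioned above. The histograms and event numbers are toy values, not the actual CANGAROO-III data, and a real analysis would also propagate Poisson uncertainties.

```python
import numpy as np

def fit_fd_templates(on_hist, sig_template, bkg_template):
    """Fit ON = a*signal + b*background by linear least squares; returns (a, b)."""
    A = np.column_stack([sig_template, bkg_template])
    coef, *_ = np.linalg.lstsq(A, on_hist, rcond=None)
    return coef

# toy templates normalised to unit area; ON built from ~50 signal and ~950 background events
bins = np.linspace(-5.0, 5.0, 41)
x = 0.5 * (bins[:-1] + bins[1:])
sig = np.exp(-0.5 * ((x - 1.5) / 0.8) ** 2); sig /= sig.sum()
bkg = np.exp(-0.5 * (x / 2.0) ** 2);          bkg /= bkg.sum()
on = np.random.poisson(50 * sig + 950 * bkg)

a, b = fit_fd_templates(on, sig, bkg)
print("fitted number of signal events ≈", a * sig.sum())
```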
This method was checked by analysis of Crab nebula data taken in November 2004. The wobble-mode observation was also used. The analyzable data corresponded to 316.4 min. The flux is 1.2±0.3 times the standard Crab flux with the power-law consistent with the standard index of −2.5.
Results
The signal function for FD is shown by the black histogram in Fig. 1-c). That for the background was made from the region θ² < 0.5 degree² in the OFF data (the green histograms in Fig. 1-a) and b)). As described in the previous section, we carried out a two-parameter fit; one parameter is the vertical normalization of the background shape and the other that of the signal. The best-fit results are shown in Fig. 1-a) and b). The black data points with error bars are the ON data. The background-subtracted signals are shown by the blue points. The red histograms are the best-fitted yields for the signals. The entries within θ² < 0.05 degree² are plotted in Fig. 1-a), and Fig. 1-b) is for θ² < 0.25 degree², i.e., a) for the point-source assumption and b) for the 0.5-degree diffuse assumption, respectively. In both regions, we did not see any statistically significant signals. The threshold of this analysis was estimated to be 0.58 TeV.
Then we study spatial distributions of gamma-ray like events. At first we select these by |FD| < 1 ( see Fig 2-c). The black points with error bars in a) and b) are the ON data. The green histogram in a) is the OFF data with the normalization based on ON/OFF observation times. These two agree well, i.e., there is no signal anywhere in this plotting range. The red points in a) and b) are the background subtracted data. Since the statistics of the OFF-source run is limited, the errors in the background-subtracted data are dominated by this.
For the point-source assumption, we can use a "wobble" background analysis. The signal region for it is θ² < 0.05 degree²; therefore, we obtain six background points. The sum of them with a normalization factor of 1/6 is shown by the green histogram in Fig. 2-b). Now the error due to the subtraction becomes small; however, we again cannot see any signal excess.
We made FD distributions for θ² slices and carried out the same fitting procedure as in the case of Fig. 1. The excesses obtained are plotted in Fig. 2-c) (the black points). The red histogram is a 2σ upper limit (37.5 events) for the signal under the point-source assumption. Actually, the χ²-minimum of this fit has a negative excess. We, therefore, constrained the excess to be positive in deriving the upper limit. The result is 5.8% Crab at 0.58 TeV, a factor worse than H.E.S.S.'s upper limit. For reference, the upper limits obtained from the red histograms in Fig. 2 apply under the assumption of the point source and are marginal under that of 0.5-degree diffuse emission.
Fig. 1. Fisher Discriminant (FD) distributions: a) for θ² < 0.05 degree² (point-source assumption), b) θ² < 0.25 degree² (0.5-degree diffuse), and c) the Monte-Carlo gamma-ray events. The black data points with error bars were obtained from the ON source runs. The green histograms were made from the OFF source runs. Note that the vertical normalization of each histogram is a result of the fitting procedure described in the text. The blue points are the background-subtracted data, and the red histograms are the best-fitted signals.
We also searched for signals in a broad range, such as 3.8 × 3.8 degree². In total, 316 FD distributions were made with a spatial bin size of 0.2 × 0.2 degree². The signal function is the same as in Fig. 1-c). Each background function was made from the OFF-source runs with a bin size of 0.6 × 0.6 degree² with the same center position as the ON data points. The excess in the region within a 0.5-degree circle is consistent with zero, and even in the surrounding region we cannot find any excess. The same kind of analysis was repeated with five different energy thresholds estimated from the total number of photoelectrons. Here, H.E.S.S. showed integral flux upper limits, and we follow this in order to compare with them. The upper limit of the integral flux versus energy is obtained and is shown in Fig. 4. The red line is the 2σ upper limit for the point-source assumption. The blue one is that for the 0.5-degree diffuse case. The CANGAROO-II data points were obtained from Table 6 of Itoh et al. (2003b). Since they are differential, we multiplied by E/(γ − 1) on a bin-by-bin basis (the black points with error bars). The power-law index γ was assumed to be 3.85, which was the best-fitted value in the same reference. They are slightly harder than those in Fig. 2 of Aharonian et al. (2005), for reasons we do not know. The black line is their upper limit for the point-source assumption and the green that for 0.5-degree diffuse.
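For a pure power law, this bin-by-bin conversion from a differential flux point to an integral flux has a simple closed form: integrating dN/dE ∝ E^(−γ) above E gives F(>E) = E·(dN/dE)/(γ − 1). The short sketch below evaluates it with illustrative numbers only, not the published CANGAROO-II values.

```python
def integral_flux(E, dNdE, gamma=3.85):
    """Integral flux above E for a pure power law dN/dE ∝ E**-gamma:
    F(>E) = E * (dN/dE) / (gamma - 1)."""
    return E * dNdE / (gamma - 1.0)

# illustrative point: E in TeV, dN/dE in cm^-2 s^-1 TeV^-1
print(integral_flux(E=1.0, dNdE=2.0e-12, gamma=3.85))  # -> ~7.0e-13 cm^-2 s^-1
```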
Discussion
Our upper limits are 2∼3 times higher than those obtained by H.E.S.S. (Aharonian et al. 2005). These factors can be understood by the blur spot sizes of the segmented mirrors (0.14, 0.12, and 0.09 degrees for T2, T3, and T4, respectively). The effective area of three telescopes (when three-fold coincidence is required) is smaller than that of a single-telescope measurement, i.e., the threshold is higher. Another important point is that CANGAROO-II carried out a multiple-year (and multiple-month per year) observation, while this is a single-year and single-month (or rather slightly longer than a single week) observation. Therefore, the previous CANGAROO-II fluxes (Itoh et al. 2003b) need to be re-examined directly. Before doing so, we need to consider the fact that the CANGAROO-II data and its analysis software are still available. We checked the previous CANGAROO-II analysis in detail. The detailed description can be found in Itoh et al. (2003b). We found an improper part in the description written in Section 3.6 of Itoh et al. (2003b), that is, the procedure to remove hot channels. In the previous CANGAROO-II analysis, the deformation of the α spectrum appeared in the OFF data (non-flat α distribution) (image orientation angle: Hillas parameter (Hillas 1985)). Generally, hot pixels deform the α spectrum. We carried out the following procedure to find those bad pixels:
- A hot "box" scan for recovering flatness was carried out, where a "box" is a unit of sixteen (four by four) neighboring photomultiplier tubes (as shown in Fig. 1 of Itoh et al. 2003b).
- A further scan inside these sixteen channels was done to finally find the field-deforming pixels.
Note that this was not applied to RX J1713.7-3946 (Enomoto et al. 2002a), the Galactic Center (Tsuchiya et al. 2004), nor RX J0852.0-4622 (Katagiri et al. 2005). For RX J1713.7-3946, we removed hot pixels due to small discharges triggered by bright star passages. For the Galactic Center and RX J0852.0-4622, we selected them based on the χ² calculated from the pixel-hit rate and the deviation of each ADC spectrum from the average one. These three observations had bright stars in the field of view (FOV). On the other hand, the FOV of the NGC 253 observation did not contain any bright ones, i.e., it was a relatively dark field. Although there were no explicit high hit-rate pixels, the deformation of the α spectrum appeared in the OFF data. This is why we adopted the above procedure. These rejections of masked pixels were applied commonly to the ON and OFF runs. We, therefore, thought it was unbiased. We, however, found that there is a big discrepancy in the number of excess events before and after this procedure. Their numbers were 700 and 2000, respectively. The excess of 2000 events, which was 11σ, is now reduced to less than 4σ, which is below the standard for claiming a positive signal. Assuming a 2σ upper limit, it is now clear that at most half the signal level is allowed compared to the previous flux level (Itoh et al. 2003b), which is lower than the upper limit by H.E.S.S. under the extended-source assumption. In this case, the new expected yield for this observation would be approximately the red histogram in Fig. 2-c) under the point-source assumption.
To summarize the present situation, we have nothing with which to contradict H.E.S.S.'s observation, i.e., for diffuse radiation of order 0.5 degree the emission should be less than 6% Crab, and for the point source it is less than 2% Crab at 300 GeV. However, the physics interest in this astronomical object is not lost. In fact, H.E.S.S. discussed the possibility of calorimetric gamma-ray emission in the starburst region (Aharonian et al. 2005). Also, the radio halo should originate from high-energy electrons (Carilli et al. 1992). Observations with fine spatial and energy resolution, high sensitivity, and a wide energy range are still awaited for both the point and diffuse sources.
Conclusions
We observed the nearby starburst galaxy NGC 253 in October 2004. TeV gamma-rays were searched for in the data obtained by three telescopes. No statistically significant signals were obtained under either the point-source or the diffuse-source assumption. Our upper limits were marginally inconsistent with the previous CANGAROO-II observation. We, therefore, further investigated the previous analysis and found an improper procedure in the hot-channel rejection algorithm. After removing that procedure, the previous CANGAROO-II flux was reduced to less than a half. We concluded that we cannot claim any evidence for gamma-ray emission from NGC 253. | 2018-10-23T00:29:37.701Z | 2006-10-10T00:00:00.000 | {
"year": 2006,
"sha1": "f27f0e839cb1bd30d329793ae1dd78d6cccecd57",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2007/04/aa6244-06.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f27f0e839cb1bd30d329793ae1dd78d6cccecd57",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
38976729 | pes2o/s2orc | v3-fos-license | Role of SVP in the control of flowering time by ambient temperature in Arabidopsis
Plants must perceive and rapidly respond to changes in ambient temperature for their successful reproduction. Here we demonstrate that Arabidopsis SHORT VEGETATIVE PHASE (SVP) plays an important role in the response of plants to ambient temperature changes. The loss of SVP function elicited insensitivity to ambient temperature changes. SVP mediates the temperature-dependent functions of FCA and FVE within the thermosensory pathway. SVP controls flowering time by negatively regulating the expression of a floral integrator, FLOWERING LOCUS T (FT), via direct binding to the CArG motifs in the FT sequence. We propose that this is one of the molecular mechanisms that modulate flowering time under fluctuating temperature conditions.
Plants are sessile organisms and are, consequently, exposed to a wide variety of environmental stresses, both abiotic and biotic, exerted by their surroundings. The most common of these is temperature. Within the range of temperatures tolerable to plants, the response to low temperature, particularly near-freezing temperature, is well understood. Plants have evolved a number of adaptive mechanisms to meet the challenge of low temperature. In Arabidopsis, flowering is accelerated by prolonged exposure to cold, a process called vernalization. The epigenetic silencing of the FLOWERING LOCUS C (FLC) (Michaels and Amasino 1999;Sheldon et al. 1999) is central to the vernalization process (Sung and Amasino 2005), and this silencing has been attributed to the activities of the VERNALIZATION1 (VRN1), VERNALIZATION2 (VRN2), and VERNALIZATION INSENSITIVE3 (VIN3) genes (Gendall et al. 2001;Levy et al. 2002;Sung and Amasino 2004). Cold acclimation is another well-characterized response to low temperature (Guy 1990). Plants become tolerant to freezing temperatures by being previously exposed to short periods of low but nonfreezing temperatures. Analyses of mutant plants have identified C-Repeat-binding factor (CBF)-dependent and CBF-independent signaling pathways in cold acclimation (Sharma et al. 2005), suggesting that plants use distinct mechanisms to respond to low temperature.
There is increasing concern about the potential impact of global temperature changes, which significantly affect ambient temperature, on plant development. Several lines of evidence suggest that the recently observed alterations in the flowering times of many plant species and the increase in plant respiration rates are closely associated with these changes in ambient temperature (Fitter and Fitter 2002;Atkin and Tjoelker 2003). Although a great deal of progress has been made in our understanding of the regulation of plant development by low temperature, less is currently known about the molecular mechanisms underlying the responses of plants to changes in ambient temperature (Coupland and Prat Monguio 2005;Samach and Wigge 2005). Here, we show that the SHORT VEGETATIVE PHASE (SVP) gene mediates ambient temperature signaling in Arabidopsis and that the SVP-mediated control of FLOWERING LOCUS T (FT) expression is one of the molecular mechanisms evolved by plants to modulate the timing of the developmental transition to flowering phase in response to changes in the ambient temperature.
Results and Discussion
As a first step to determining the mechanism underlying the perception and transduction of ambient temperature signaling in plants, we assessed mutants in known flowering time genes for their insensitivity to changes in ambient growth temperature. Of the flowering time mutants tested, one with a lesion in svp was indeed insensitive to such changes. The flowering of the majority of these flowering time mutants was noticeably delayed at 16°C, with flowering time ratios (16°C/23°C) ranging from 1.1 to 2.0 ( Fig. 1A), the exception being ld-1. However, svp-31 and svp-32 mutants, the T-DNA alleles of SVP (Supplementary Fig. 1; Hartmann et al. 2000), manifested almost identical flowering times at 23°C and 16°C (Fig. 1A). svp mutants were early flowering, especially at 16°C, suggesting that a reduction in SVP activity significantly decreased plant response to lower temperature and that the loss of SVP activity would result in the loss of the effects of low temperature. In contrast, SVP overexpressor plants were late flowering, especially at 23°C, suggesting that overexpression of SVP can mimic the effect of low temperature. Reduced FT expression was likely responsible for this late flowering phenotype of 35SϻSVP plants ( Supplementary Fig. 2). The weak temperature response seen in 35SϻSVP plants can be explained by the differential expression of FT, such that FT expression at 23°C was higher than that at 16°C in 35SϻSVP plants. SVP probably performs a nonredundant role in ambient temperature sensing, as loss of the function of AGL24 (Yu et al. 2002), the closest homolog of SVP, did not induce temperature insensitivity.
Characterization of the pattern of SVP expression at different temperatures in wild-type plants by real-time PCR analyses revealed that SVP expression slightly in-[Keywords: Ambient temperature; SVP; floral repressor; FT; the thermosensory pathway] creased in the leaf at 16°C (Fig. 1B). In contrast, FT expression was strongly repressed in the leaf at l6°C. Histochemical -glucuronidase (GUS) analysis detected both SVP and FT expression throughout the expanded leaves, but SVP expression was up-regulated and FT expression was down-regulated ( Fig. 1C). Since this upregulation of SVP may not be significant in itself in explaining this dramatic down-regulation of FT expression, it is possible that the post-transcriptional regulation of SVP or altered protein-protein interaction of SVP at the lower temperature may also be responsible for the reduction in FT transcription. Considering that SVP acts as a floral repressor (Hartmann et al. 2000), these data suggested that additional flower-inhibitory factors exist in the leaf at lower temperature. Subcellular localization analysis showed that the SVP-green fluorescence protein (GFP) fusion protein localized in the nucleus at both 16°C and 23°C (Fig. 1D). Taken together, these results indicate that SVP expression is weakly temperature dependent, similar to the thermosensory genes of other species (Johansson et al. 2002).
Ambient temperature is perceived via a genetic pathway (thermosensory pathway) that requires both FCA and FVE in Arabidopsis (Blázquez et al. 2003). An analysis of the genetic interaction of svp mutants with fca and fve mutants was conducted to ascertain whether or not SVP operates within the same genetic pathway as FCA and FVE. The late flowering phenotypes observed in the fca-9 and fve-3 mutants under long-day conditions were largely masked by the loss of SVP function ( Fig. 2A), demonstrating that svp is epistatic to the fca and fve mutants. In addition, the temperature insensitivity induced by the fca and fve mutations persisted even in the absence of SVP function, which suggests that SVP functions downstream from FCA and FVE within the thermosensory pathway and that SVP mediates temperature signaling. Consistent with this view, SVP expression was elevated in the fca and fve mutants (Fig. 2B), but not in other autonomous pathway mutants ( Supplementary Fig. 3), and the flk and fpa mutants were temperature sensitive (Fig. 1A). SVP expression, however, was regulated neither by vernalization nor by CONSTANS (CO) (Samach et al. 2000), a central regulator of the long-day pathway. The observation that both the svp-32 mutants and wild-type plants responded similarly to gibberellin (GA) treatment or to differing light conditions (Supplementary Fig. 4) supports the premise that SVP functions primarily within the thermosensory pathway.
The genetic interaction of svp mutants with flc mutants was assessed in an attempt to determine whether or not SVP interacts with FLC, since FLC is an important regulator that mediates vernalization effects in the autonomous pathway (Michaels and Amasino 1999), and both FLC and SVP function downstream from FCA and FVE ( Fig. 2A,B). The results indicate that SVP acts independently of FLC at the transcriptional level within the thermosensory pathway: SVP expression was unchanged in the presence of functional alleles of FRI or FLC (Fig. 2C) and FLC expression remained unaffected by increases or reductions in SVP activity (Fig. 2D). Based on these results, we propose that SVP is very likely a thusfar unidentified repressor that mediates the temperature- dependent role of FCA and FVE (Blázquez et al. 2003). SVP appears to function, at least in part, downstream from FLC by modulating flowering time in response to ambient temperatures. The flowering of fri flc, FRI flc, and FRI FLC mutants was accelerated by the svp-32 mutation (Fig. 2E). Conversely, the temperature responsiveness exhibited by the fri flc and FRI flc mutants disappeared in the absence of SVP function, thereby suggesting that SVP exerts its effects principally within the thermosensory pathway. Interestingly, at flowering, FRI FLC plants had a similar number of leaves at 23°C and 16°C (64 vs. 67 leaves), which was also found in fca and fve mutants in which FLC levels were elevated even in the absence of functional FRI. A possible explanation of this flowering time phenotype of FRI FLC plants is that the floral repressive activity of FLC may be highly elevated in FRI FLC mutants and, consequently, further floral repression at 16°C may be masked. Consistent with this premise, temperature responses were restored in fve flc and fca flc double mutants to a level similar to that shown by flc single mutants (Blázquez et al. 2003;Balasubramanian et al. 2006). Of particular interest is that the severe late flowering of FRI FLC plants was largely suppressed by the svp-32 mutation, suggesting that FLC requires SVP to inhibit flowering. Considering that the MADS-box proteins are known to interact physically in a protein complex (Riechmann and Meyerowitz 1997), a possible scenario to explain this suppression by the svp-32 mutation is that SVP and FLC proteins may interact in a complex during temperature signaling. This proposal is supported by recent findings that FLC is a component of a multimeric protein complex in vivo and that SVP interacts with several MADSbox proteins (de Folter et al. 2005;Helliwell et al. 2006).
The conclusion that SVP functions as a floral repressor (Hartmann et al. 2000) raises an important question: On which flowering time gene does SVP exert its negative effects in the transduction of ambient temperature signaling? An analysis of the expression levels of known flowering time genes in the svp mutants revealed that the expression levels of FT (Kardailsky et al. 1999;Kobayashi et al. 1999), a floral integrator, were substantially elevated in the svp-32 mutants at both 23°C and 16°C (Fig. 3A). A similar up-regulation of FT in the svp-32 mutants was observed at a series of defined growth stages ( Supplementary Fig. 5; Boyes et al. 2001). These observations indicated that the thermosensory signaling pathway functions, at least in part, via FT (Blázquez et al. 2003;Halliday et al. 2003). A reporter assay, carried out to confirm the negative regulation of FT expression effected by SVP, revealed profound ectopic pFTϻGUS expression in both the leaves and vascular root tissues of the svp-32 mutants (Fig. 3B). This suggests that SVP is required for the stable repression of FT in the ground tissues of the leaves of wild-type plants. Considering that FT is the major output of CO (Schmid et al. 2003;Wigge et al. 2005;Yoo et al. 2005) and that FT mRNA is an important component of the long-distance signaling mechanism that triggers flowering (Huang et al. 2005), the early flowering phenotypes observed in the svp-32 mutants can be explained as follows: The absence of SVP activity induces the accumulation of FT mRNA in the leaf transportable to the shoot apex, thereby triggering floral development. Consistent with a role of FT downstream from SVP, the loss of FT function partially suppressed the early flowering of the svp-32 mutants, the constitutive expression of FT masked the phenotype in the svp-32 mutants (Fig. 3C), and FT expression was significantly reduced in 35SϻSVP plants ( Supplementary Fig. 2). Importantly, svp-32 ft-10 double mutants showed a weak temperature response, as did ft-10 mutants (flowering time ratio = 1.6 vs. 1.5, respectively), although svp-32 single mutants showed temperature insensitivity. Similar phenotypic masking by temperature-sensitive mutants has been observed in fca flc and fve flc mutants (Blázquez et al. 2003;Balasubramanian et al. 2006). One possible scenario explaining why svp-32 ft-10 mutants were more responsive than svp-32 single mutants is that svp-32 mutants display a temperature-insensitive phenotype as the result of increased FT activity, the floralpromoting effects of which are more profound at 16°C. When FT function is absent in the double mutants, the floral-promoting effect by FT at 16°C is not present and, therefore, temperature sensitivity may be restored to a level similar to that found in ft-10 single mutants. The observation that svp-32 ft-10 mutants were-albeit weakly-temperature sensitive indicates that the ambient temperature signaling mechanism of SVP requires FT and an additional downstream target(s). One possible target candidate is SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1), since the soc1-2 mutation additively reduced the temperature sensitivity of ft-10 mutants (flowering time ratio of ft-10 soc1-2 double mutants = 1.2) (Fig. 3C), although soc1-2 single mutants responded to temperature changes. Consistent with this redundant role of FT and SOC1, neither the ft-10 nor soc1-2 single mutations completely suppressed the early flowering phenotypes of the svp-32 plants (Fig. 3C). 
Rather, the early flowering of the svp-32 mutants is masked, in large part, by the ft soc1 double mutation (T. Mizoguchi, pers. comm.). Nevertheless, we cannot exclude the possibility that TWIN SISTER OF FT (TSF) and FD ) may be also the target of SVP.
SVP is a member of the MADS-box proteins, which function as transcriptional regulators via their DNA- binding motifs (Riechmann and Meyerowitz 1997). As such, it appears likely that the negative regulation of FT expression by the SVP protein can be achieved via direct binding to the FT sequence. This hypothesis was bolstered by the findings that the 1.8-kb promoter region of FT harbors six variants of CArG motifs (vCArG) (Fig. 4A; Tang and Perry 2003), the consensus binding sequences of the MADS-box proteins, and that the first intron of FT harbors a CArG motif to which FLC proteins directly bind (Helliwell et al. 2006;Searle et al. 2006). Chromatin immunoprecipitation (ChIP) assays using Arabidopsis protoplasts were carried out to evaluate this hypothesis. Using chromatin immunoprecipitated with HA antibodies, we detected amplified products from fragments harboring vCArG III/IV, vCArG V, and CArG VII (Fig. 4A), indicating that SVP and FLC proteins bind to these motifs in vivo. The vCArG III/IV and vCArG V motifs were more efficiently precipitated by SVP-HA. The CArG VII motif, which is present in the first intron of FT, was strongly enriched by FLC-HA proteins, which is consistent with previous findings (Helliwell et al. 2006;Searle et al. 2006). This motif was also precipitated by SVP-HA proteins, but SVP's binding affinity appeared to be weaker than that of FLC. It therefore appears likely that SVP preferentially binds to the vCArG motifs of the FT promoter and that FLC preferentially binds to the CArG VII of the first intron of FT. As the vCArG III/IV and V motifs were observed to bind efficiently, we verified the direct binding of the SVP proteins to these motifs in vivo by conducting a transient expression assay in protoplasts transfected with SVP-HA proteins and FT-promoterdriven luciferase (LUC) reporters (Fig. 4B). An abundance of SVP protein (35SϻSVP-HA) effected a reduction of FTϻLUC activity. This reduction disappeared when SVP protein was used without its MADS domain (35SϻSVP ⌬M-HA), thereby indicating that the reduction in luciferase activity was induced by the binding of SVP to the FT promoter via the MADS domain. A subsequent assay aimed at assessing the ability of SVP-HA to repress the activity of an FT promoter harboring a mutation in the vCArG motif revealed that SVP-HA failed to reduce the expression of FTϻLUC harboring mutations in vCArG III (m3FTϻLUC). This result suggests that vCArG III is required for the SVP-mediated negative regulation of FT expression. Coupled with the mapping of the SVP-binding site in the FT promoter, our findings support our hypothesis that SVP binds directly to CArG motifs, thereby regulating FT expression to modulate flowering time in response to ambient temperature changes.
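As a side note on how such motifs can be located computationally, the sketch below scans a DNA string for the canonical CArG-box consensus CC(A/T)6GG with a regular expression. The example sequence is invented (it is not the real FT promoter), the vCArG variants discussed above deviate from this strict consensus, and this scan is not part of the authors' experimental protocol.

```python
import re

CARG_CANONICAL = re.compile(r"CC[AT]{6}GG")

def find_carg_boxes(seq):
    """Return (position, motif) pairs for canonical CArG boxes CC(A/T)6GG.
    Variant motifs (vCArG) would require allowing mismatches, e.g. via
    alternations or a position-weight matrix."""
    return [(m.start(), m.group()) for m in CARG_CANONICAL.finditer(seq.upper())]

promoter = "ATGCCAATTTTAGGCTTACCTTTAAAGGCA"   # toy sequence
print(find_carg_boxes(promoter))             # -> [(18, 'CCTTTAAAGG')]
```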
In conclusion, based on the results reported here, ambient temperature signaling in Arabidopsis is mediated by SVP, which functions within the thermosensory pathway, but only partially within the FT pathway (Blázquez et al. 2003;Wigge et al. 2005). SVP represses FT expression via direct binding to the vCArG III motif in the FT promoter. The SVP-mediated control of FT gene expression (Fig. 4) may represent a mechanism used by the plant to adjust the timing of flower development under fluctuating temperature conditions, although we cannot dismiss the possibility that altered interactions between MADS-box proteins, including SVP, FLC, and FLM (de Folter et al. 2005;Balasubramanian et al. 2006;Helliwell et al. 2006;Searle et al. 2006), effect the adjustment of flowering times at different ambient temperatures. We propose that the genetic evidence reported here is a valuable supplement to current knowledge on the manner in which plants integrate environmental signals to modulate development.
Materials and methods
Plant materials, growth conditions, and measurement of flowering time All mutations used in this study were in the Columbia (Col) background, unless otherwise noted. svp-31 (SALK_026551) and svp-32 (SALK_ 072930), both T-DNA insertion lines of SVP, were obtained from the Arabidopsis Biological Resource Center (ABRC) (Alonso et al. 2003). To confirm the T-DNA insertion sites of these alleles, we sequenced the PCR products amplified using left border primers and gene-specific primers. SVP overexpressor plants obtained from H. Sommer (Max Planck Institut, Köln, Germany) have been described previously (Masiero et al. 2004). The mutant lines used in this study are described in Supplementary Table S1. The plants were grown in soil or MS medium at 23°C or 16°C under long-day (LD) conditions (16 h light/8 h dark) with light provided at an intensity of 120 µmol m −2 sec −1 . The homozygosity of the double mutants was verified via PCR genotyping. The details of the genotyping procedures are available on request. The flowering times of the plants are expressed as the total number of primary leaves of at least 12 plants.
Expression analysis
Expression levels of the flowering time genes were determined via semiquantitative reverse transcriptase-meditated PCR or real-time PCR. To- Figure 4. Binding of SVP protein to the vCArG III in the FT promoter. (A) A ChIP assay using protoplasts transfected with SVP-HA and FLC-HA constructs. The location of six vCArG motifs (vCArG I to vCArG VI) identified in a 1.8-kb FT promoter and the different fragments analyzed by PCR are represented. A CArG motif to which FLC is known to bind within the first intron of FT (Helliwell et al. 2006;Searle et al. 2006) is designated as CArG VII. A fourfold dilution series of the input DNA was used as a semiquantitative standard. Relative enrichment indicates the amplified signal value normalized against that of input DNA. The value of enrichment in vCArG I/II was set to 1 for SVP-HA and FLC-HA. Similar results were obtained from five independent experiments. (Input) Total input chromatin DNA; (HA) DNA selected using HA antibodies; (Myc) DNA selected using Myc antibodies. (B) The effects of SVP-HA protein on the FT promoter activities. A schematic representation of the reporters and the effectors used in this assay is shown. vCArG motifs are shaded in gray and mutations introduced in vCArG motifs are indicated in lowercase. m3FTϻLUC indicates the FTϻLUC construct harboring a mutated vCArG III. Luciferase activities were normalized by GUS activities. This experiment was repeated five times, with similar results. tal RNA was extracted using Trizol reagent (Invitrogen), and 1 µg of total RNA was used to synthesize the complementary DNA. The primer sequences and amplification conditions are available on request. The realtime PCR analysis was performed using an ABI PRISM 7900HT sequence detection system (Applied Biosystems), and expression levels were normalized against that of tubulin. For the histochemical GUS analysis, we generated a SVPϻGUS translational fusion construct. The 4.9-kb SVP genomic region was amplified using JH2929 (5Ј-GTGGTCGACACTTT TTATTTTACTCTGG-3Ј) and JH2985 (5Ј-GGATCCGCACCACCATA CGGTAAGCTGC-3Ј), and then fused with the GUS reporter gene. FTϻGUS plants (Takada and Goto 2003) were obtained from K. Goto (Research Institute for Biological Sciences, Okayama, Japan). SVP cDNA-GFP chimeric constructs were used as a reporter to examine the localization pattern of SVP. To generate the 35S promoter-driven SVP cDNA-GFP construct, the GFP sequence was in-frame-fused to the C-terminal region of a 35SϻSVP chimeric plasmid. A particle bombardment system (PDS-1000/He; Bio-Rad) was utilized for the delivery of DNA-coated tungsten particles into onion epidermal cells. After 24 h of incubation at 23°C or 16°C, the subcellular localization pattern was observed under a fluorescence microscope (Carl Zeiss).
ChIP assay
ChIP assays were conducted as described (Tang and Perry 2003) with minor modifications. The Arabidopsis protoplasts were transfected with either SVP cDNA fused to HA tags or FLC cDNA fused to HA tags and then incubated for 24 h at room temperature. The expressions of the SVP-HA and FLC-HA proteins were determined by protein blots using extracts from the protoplasts. After formaldehyde fixation, the chromatin of the protoplasts was isolated and sheared via sonication. Mouse anti-HA antibodies (Santa Cruz Biotechnology) or anti-Myc 9B11 antibodies (Cell Signaling Technology) were used to immunoprecipitate the genomic fragments. Five sequence fragments spanning six vCArG motifs within the promoter and a CArG motif in the first intron of FT were amplified from the immunoprecipitated genomic DNA. PCR products were visualized after 35 cycles using DNA purified from chromatin immunoprecipitated with antibodies against HA or Myc. Nonselected input DNA and Myc antibody-selected DNA were used as PCR templates for the positive and negative controls, respectively. Quantitation of the enrichment of CArG motifs by the SVP-HA and FLC-HA proteins were performed on PhosphorImager plates (Fujifilm BAS 2500; Fuji). The primers used for the ChIP assays are described in Supplementary Table S2. Detailed descriptions of the protocols of these experiments are available on request.
Luciferase reporter assay
To generate the FTϻLUC construct, we amplified 1.8 kb of the FT promoter fragment using JH3096 (5Ј-TGAACACTAACATGATTGAATGA CA-3Ј) and JH2865 (5Ј-GATCTTGAACAAACAGGTGGT-3Ј) and fused this to luciferase. The luciferase reporter constructs harboring the mutated vCArG motifs within the FT promoter were used as reporters to examine the effects of the vCArG motifs on the specific binding of SVP to the FT promoter. Site-directed mutagenesis was utilized to generate the FTϻLUC constructs harboring the mutated vCArG motifs, using the QuickChange II XL Site-Directed Mutagenesis Kit (Stratagene), in accordance with the manufacturer's instructions. The primers used in this mutagenesis protocol are shown in Supplementary Table S3. Mutations introduced into the vCArG motifs in these constructs were verified via sequencing. SVP-with or without its MADS domain (35SϻSVP-HA and 35SϻSVP ⌬M-HA, respectively)-was used as an effector. A reporter and an effector were cotransfected into the protoplasts. The 35SϻGUS construct was used as an internal control. Luciferase activities were normalized by GUS activities. Detailed descriptions of the protocols are available upon request. | 2018-04-03T04:25:17.718Z | 2007-02-15T00:00:00.000 | {
"year": 2007,
"sha1": "834468492a40fac6365b6dbd366c8ceffda2d981",
"oa_license": null,
"oa_url": "http://genesdev.cshlp.org/content/21/4/397.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b4b0c5675a8df035f5e9811153d11ad81bebabe3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
133972384 | pes2o/s2orc | v3-fos-license | Diffraction Characteristics of Small Fault ahead of tunnel face in coal roadway
ABSTRACT
The small fault ahead of the tunnel face in a coal roadway is an important hidden factor in coal and gas outburst accidents. The study of small fault prediction has important practical significance and is an urgent demand of coal mine safety production. The diffraction of the breakpoint can be used to identify the fault. However, unlike surface seismic exploration, the diffraction arrives with approximately horizontal incidence when advanced detection is carried out in the roadway. The common advanced detection systems are mainly designed with reference to traffic tunnels, without considering the influence of the low-velocity coal seam. Considering the influence of the acoustic wave of the roadway cavity and the channel wave of the coal seam, an advanced detection model of a small fault ahead of the tunnel face is established. A diffraction advanced observation system, in which the sources are located in front of the tunnel face, is constructed, and a numerical calculation with the high-order staggered-grid finite difference method is carried out. The simulation results show that, compared with the data collected by the reflection observation system, the acoustic wave is suppressed in the seismic records acquired by the diffraction observation system. The diffracted P-wave of the breakpoint on component X is clear, with strong energy and a short wave group. Multiple diffractions of the breakpoint are not found, but the multiple diffraction of the tunnel face endpoint is obvious. The difference between the breakpoint diffraction and the multiple diffractions of the endpoint is clear, and the diffracted P-wave of the breakpoint is easy to identify. The multiple reflected channel waves between the fault and the tunnel face are very obvious, and the reflected channel wave of the small fault is hard to identify. Migration results show that the imaging resolution of the diffracted P-wave of the small fault is higher than that of the reflected channel wave, and the breakpoint location of the imaging is consistent with the actual model.
The small fault in the coal roadway of a tunnel is an important hidden factor in gas and coal mine outburst accidents. The study of small fault prediction has important practical significance: it is an urgent demand of safety in coal production. The diffraction of the breakpoint can be used to identify the fault. However, unlike surface seismic exploration, the diffraction arrives with approximately horizontal incidence when advanced detection is carried out in the roadway. The common advanced detection system is mainly used with reference to traffic tunnels, without considering the influence of the low-velocity coal seam. By considering the response of the acoustic wave in the roadway cavity and the channel wave of the coal seam, the advanced detection model of a small fault ahead of the tunnel face is established. The diffraction advanced observation system, in which the sources are located in front of the tunnel face, was constructed, and the high-order staggered-grid finite difference calculation was carried out. The model results show that, unlike the data collected with the reflection observation system, the suppression of the acoustic wave can be seen in the seismic records acquired with the diffraction observation system. The diffracted P-wave of the breakpoint on component X is clear, with strong energy and a short wave group. Multiple diffractions of the breakpoint were not found, but the multiple diffraction of the endpoint of the tunnel face is evident. The difference between the breakpoint diffraction and the multiple diffractions of the endpoint is clear, and the diffracted P-wave of the breakpoint is easy to identify. The migration results show that the imaging resolution of the diffracted P-wave of the small fault is higher than that of the reflected channel wave, and the breakpoint location in the image is consistent with the actual model.
Introduction
In China, 451 deaths were caused by coal mine gas accidents from 2012 to 2016 (Liu et al., 2016), so the situation of coal mine gas prevention and control is very grim, and gas control is the most important task of coal mine safety production. Most coal and gas outburst accidents at the tunnel face of a coal roadway occur near geological structures (Shepherd et al., 1981; Lama et al., 1998), especially small faults (Gao et al., 2015). This paper focuses on the small fault ahead of the tunnel face, which is an important hidden factor of coal and gas outburst accidents. It is of great practical significance to the prevention and control of gas accidents.
At present, there are a lot of advanced geophysical exploration methods adopted in the roadway, which can be divided into three classes: seismic class, electromagnetic class, and other class (Roslee et al., 2017).At present, the seismic wave method is less affected by the detection environment, which is most suitable for the prediction of geological structure (Lüth et al., 2006;Jetschny et al., 2011).The seismic wave method of advanced detection mainly includes the reflected wave method, surface wave method, scattered wave method, channel wave method, diffraction method and so on (Wang et al., 2016).
Advanced detection using the reflected wave method started early in the field of traffic tunnels. According to the order of release, seismic advanced detection technologies include HSP, TVSP, TSP, TRT, ISIS, TGP, TSD, USP, TSWD, etc. (Inazaki et al., 1999; Otto et al., 2002; Lüth et al., 2005; Jetschny et al., 2010). Compared with advanced tunnel prediction, research on the reflected wave in the coal roadway is scarce, including RTSP, MSP, etc. (Wang et al., 2015).
In the aspect of advanced detection of surface wave and scattered wave in underground coal mine, the advanced detection technique of Rayleigh surface wave was carried out in Fangezhuang Mine (Cheng et al., 2014).Zhao et al. (2006) obtained higher image positioning accuracy based on inverse scattered imaging theory.Cheng et al. (2013) studied the seismic scattered wave imaging in coal roadway by using numerical simulation method.
In the aspect of reflected channel wave, Essen et al. (2007) analyzed the response characteristics of Rayleigh channel wave of split coal through numerical simulation; Yang et al. (2012) carried out numerical simulation research with seismic wave field of small structure, and then pointed out that the Rayleigh surface wave generated from the reflection of Rayleigh channel wave can be used as the characteristic wave of the advanced detection of small structure.Lu et al. (2013) obtained the reflected channel wave by using tunnel boring machine as the source to carry out the advanced detection survey with a vertical fault with fault throw of 8m.
In the aspect of the diffraction, Yang et al. (2010) analyzed the wavefield of the advanced detection model of the geological interface, and then pointed out that the diffraction was the powerful wave to identify the interface.Deng (2012) pointed out that the diffraction of breakpoint due to the large-scale fault was the effective wave to detect the position of the coal seam.However, the studies on advanced detection in the underground coal roadway do not include the study of diffraction characteristics of small fault; therefore, the characteristics of diffraction wavefield of small fault are studied in this paper through the numerical simulation, so as to solve the fine imaging of small fault ahead of tunnel face.
Principle
When a seismic wave meets coal seam breakpoints, these discontinuity points can be regarded as the new sources, which can produce a kind of new disturbance to propagate around the elastic medium (Lindang et al., 2017;Kamsani et al., 2017).
This wave of disturbance is called diffraction.In essence, the physical premise of diffraction generation is the wave impedance difference and minor geological structure.The diffraction follows a concept of full wavelength, and its propagation is in agreement with the "Huygens Fresnel" principle.
The relation between the incident wave and the diffracted wave is one-to-many, while the relation between the incident wave and the reflected wave is one-to-one, as shown in Figure 1. The reflected wave is generated when the seismic wave encounters a medium of sudden change during propagation, which is the change of travel time and amplitude caused by large-scale heterogeneity (Lai et al., 2017).
High-order Staggered-grid Finite Difference
In general, the structure of a coal seam is layered. The coal seam and its layered roof and floor can be regarded as horizontally isotropic media (Lai et al., 2017). The equation of the acoustic wave propagating in two-dimensional isotropic media is as follows:

$$\frac{\partial^{2}P}{\partial t^{2}}=v^{2}\left(\frac{\partial^{2}P}{\partial x^{2}}+\frac{\partial^{2}P}{\partial z^{2}}\right) \qquad (1)$$

where P = P(x, z, t) represents the acoustic wave field and v = v(x, z) represents the velocity field.
The difference scheme with second-order accuracy in time and 2N-th order accuracy in space is as follows:

$$P_{i,j}^{k+1}=2P_{i,j}^{k}-P_{i,j}^{k-1}+\left(\frac{v\Delta t}{\Delta x}\right)^{2}\sum_{l=1}^{N}c_{l}\left(P_{i+l,j}^{k}-2P_{i,j}^{k}+P_{i-l,j}^{k}\right)+\left(\frac{v\Delta t}{\Delta z}\right)^{2}\sum_{l=1}^{N}c_{l}\left(P_{i,j+l}^{k}-2P_{i,j}^{k}+P_{i,j-l}^{k}\right) \qquad (2)$$

where c_l are the 2N-th order finite-difference coefficients. The source is a spike pulse, and a Ricker wavelet is used as the spike-pulse source; the time-domain formula of the Ricker wavelet is as follows:

$$f(t)=A\left[1-2\pi^{2}f_{0}^{2}(t-t_{0})^{2}\right]e^{-\pi^{2}f_{0}^{2}(t-t_{0})^{2}} \qquad (3)$$

where A is the amplitude, f0 is the dominant frequency of the wavelet, and t0 is the delay time.
When forward modeling with 2N-th order difference accuracy on a regular grid is carried out, the stability condition is as follows:

$$\Delta t \le c(l)\,\frac{\min(\Delta x,\Delta z)}{v_{\max}} \qquad (4)$$

where c(l) depends on the order of the difference scheme. Taking into account the requirements of computational efficiency and simulation accuracy, a scheme of second order in time and tenth order in space is adopted in this paper, and c(l) is 0.541. The PML absorbing boundary is adopted as the model boundary to reduce the disturbance caused by the artificial boundary. The receivers are located in the range X = 91 ~ 169 m with trace intervals of 2 m. The depth of the receivers is 2 m in the roof and floor. The two components X and Z are used for geophone recording; the X direction denotes the tunneling direction, and the Z direction denotes the direction perpendicular to the floor (Roslee et al., 2017). The coordinate of the source is (134, 149), and the specific medium parameters of the model are shown in Table 2. Figure 4 shows the seismic data of components X and Z from 0~80 ms, while Figure 5 shows the wavefield snapshots of components X and Z at 16 ms, 40 ms, and 57 ms. According to the seismic records and the wavefield snapshots of the model, with reference to the kinematic and dynamic characteristics of the various types of seismic waves, the following waves can be recognized from the complex seismic wavefield:
A. Direct P-wave, which propagates circularly from the source into the surrounding rock.
B. Diffracted P-wave, formed by the diffraction of the direct P-wave when it reaches the breakpoint of the fault.
C. 1# Reflected channel wave, formed by the reflection of the direct channel wave when it encounters the fault.
D. 2# Reflected channel wave, formed by the reflection of the 1# reflected channel wave when it encounters the tunnel face.
E. 3# Reflected channel wave, formed by the reflection of the 2# reflected channel wave when it encounters the fault.
F. Roadway acoustic wave, formed by the direct P-wave when it propagates into the cavity of the roadway.
G. Diffracted P-waves at the endpoint of the tunnel face, which are the multiple diffractions generated when the direct P-wave encounters the endpoint (Kamsani et al., 2017).
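To make the scheme of Eqs. (1)-(4) concrete, the sketch below time-steps the 2-D acoustic equation with a Ricker source on a model containing a low-velocity coal layer. It is deliberately simplified to second order in space with rigid edges, whereas the paper uses a tenth-order scheme with PML absorbing boundaries; the grid size, frequency, and geometry are illustrative only.

```python
import numpy as np

def ricker(t, f0, t0, A=1.0):
    """Ricker wavelet of Eq. (3)."""
    arg = (np.pi * f0 * (t - t0)) ** 2
    return A * (1.0 - 2.0 * arg) * np.exp(-arg)

def acoustic_2d(v, dx, dt, nt, src_ij, f0=100.0):
    """Explicit time stepping of Eq. (1); second order in space and time."""
    assert dt <= dx / (np.sqrt(2.0) * v.max()), "CFL stability condition violated"
    nz, nx = v.shape
    p_prev = np.zeros((nz, nx)); p = np.zeros((nz, nx))
    for it in range(nt):
        lap = np.zeros_like(p)
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                           - 4.0 * p[1:-1, 1:-1]) / dx ** 2
        p_next = 2.0 * p - p_prev + (v * dt) ** 2 * lap
        # inject the Ricker source at a single grid point
        p_next[src_ij] += (v[src_ij] * dt) ** 2 * ricker(it * dt, f0, 1.0 / f0)
        p_prev, p = p, p_next
    return p

# 3500 m/s rock with a 10 m thick, 2000 m/s coal layer; 1 m grid, source inside the seam
vel = np.full((200, 300), 3500.0)
vel[95:105, :] = 2000.0
snapshot = acoustic_2d(vel, dx=1.0, dt=2e-4, nt=400, src_ij=(100, 150))
```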
Wavefield Characteristics Analysis
The fundamental difference between the B diffracted P-wave and the G diffracted P-wave is as follows: the B diffracted P-wave can be observed both from the receivers ahead of the tunnel face and from those behind it, while the G diffraction can only be observed from the receivers behind the tunnel face. Diffraction B can therefore be clearly identified on the basis of this difference.
It can be seen from Figure 4 that the resolution of the breakpoint diffraction B is highest in the seismic data of component X, and the diffraction is recorded by the receivers in the floor. The waveform of diffraction B is clear, with strong energy and a short wave group. The reflected channel wave of the fault is also obvious in the figure, but its wave group is wider than that of the diffraction, which leads to lower resolution when the channel wave is used.
Because channel waves exist in the coal roadway and the roadway axis is parallel to the horizontal coal seam, the common reflection-based observation system for tunnel detection is poorly suited to detecting a fault ahead of the face. For the reflection observation system commonly used in traffic tunnels (Liu et al., 2016), the sources are arranged behind the tunnel face, which yields the records shown in Figure 6 (Kamsani et al., 2017). From the figure it is found that the roadway acoustic wave is especially strong in the seismic data and the diffraction of the breakpoint cannot be identified.
It can be seen from Figure 7 that the lateral variation range of the fault location obtained from the reflected channel wave is about 20 m; the distribution range of the channel-wave migration image is large, so it is difficult to determine the position of a small fault. In contrast, the lateral variation range of the breakpoint obtained from the diffraction is about 2 m; the wave group of the diffraction is short and of high resolution, the imaged breakpoint position of the small fault agrees with the actual model, and the direction of the breakpoint in full space can easily be computed by polarization migration (Wang et al., 2016).
Conclusion
In this paper, considering the influence of the roadway cavity and the low-velocity coal seam, an advanced detection model of a small fault in front of the heading face is established, and the numerical simulation is carried out with a diffraction advanced observation system. Through comparative analysis between the diffraction and reflection observation systems, it is found that the component-X seismic data recorded with the diffraction observation system contain no disturbing roadway acoustic wave and show an obvious diffracted P-wave with a short wave group. Compared with the multiple diffractions of the endpoint, the diffracted P-wave of the fault is clear and easy to recognize and extract, because the diffraction of the small fault can also be observed from the receivers ahead of the tunnel face, whereas the multiple diffractions of the endpoint cannot. The reflected channel wave of the small fault is difficult to identify because there are multiple reflected channel waves between the fault and the tunnel face. Therefore, the diffraction at the coal-seam breakpoint is the natural effective wave for detecting a small fault, and the migration results show that the imaging resolution of the diffraction is high and the imaged breakpoint location is consistent with the actual model.
Advanced Imaging of Small Fault
Based on the above analysis, the diffracted P-wave and the reflected channel wave are both effective waves for the advanced detection of a small fault in a coal roadway. Based on the principle of prestack diffraction migration, a comparative study is carried out between the diffraction of the breakpoint and the channel wave (Roslee et al., 2017; Lindang et al., 2017; Kamsani et al., 2017; Lai et al., 2017). Figure 7 shows the imaging results of the B diffracted P-wave and the C 1# reflected channel wave of component X received on the floor. The starting position of the X-axis is 91 m.
Figure 1. Diagram of reflection and diffraction.
Figure 3 shows the advanced detection model of a small fault ahead of the tunnel face, with a fault throw of 10 m. The dip angle of the fault is 45 degrees and the model size is 300 m × 300 m. The grid spacing is set to 0.25 m × 0.25 m, and the sampling interval is Δt = 0.1 ms. Taking into account the source characteristics of a field survey, the seismic frequency is set to 400 Hz. The tunnel face is located at X = 130 m and Z = 149–152 m, while the receivers are located in the roadway roof and floor. The number of receivers is 40 in the roof and the floor.
Figure 4. Seismic data of the diffraction advanced observation system: (a) Z-component receivers in the roof; (b) Z-component receivers in the floor.
Figure 5. Seismic wavefield snapshots: (a) X-component; (b) Z-component.
This research has been supported by National Natural Science Foundation Projects (Nos. 41604082, 51323004, 41474122), the Joint Funding Project of the National Natural Science Foundation and ShenHua Group Corporation Ltd (No. U1261202), and the Fundamental Research Funds for the Central Universities (2014XT02). It is also a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Figure 6. Seismic data of the reflection observation system: (a) X-component receivers in the roof; (b) X-component receivers in the floor.
(a) Reflected channel wave | 2018-12-09T00:56:52.864Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "52de9094faee1985cc6c45c3c7210b202d799070",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15446/esrj.v21n2.64938",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "52de9094faee1985cc6c45c3c7210b202d799070",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
211197976 | pes2o/s2orc | v3-fos-license | New class of G-Wolfe-type symmetric duality model and duality relations under $G_{f}$-bonvexity over arbitrary cones
*Correspondence: lakshminarayanmishra04@gmail.com. 4 Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology (VIT) University, Vellore, India. Full list of author information is available at the end of the article.
Abstract
This paper is devoted to theoretical aspects in nonlinear optimization, in particular, duality relations for some mathematical programming problems. In this paper, we introduce a new generalized class of second-order multiobjective symmetric G-Wolfe-type model over arbitrary cones and establish duality results under G_f-bonvexity/G_f-pseudobonvexity assumptions. We construct nontrivial numerical examples which are G_f-bonvex/G_f-pseudobonvex but neither η-bonvex/η-pseudobonvex nor η-invex/η-pseudoinvex.
Introduction
It is an undeniable fact that all of us are optimizers as we all make decisions for the sole purpose of maximizing our quality of life, productivity in time, and our welfare in some way or another. Since this is an ongoing struggle for creating the best possible among many inferior designs and is always the core requirement of human life, this fact yields the development of a massive number of techniques in this area, starting from the early ages of civilization until now. The efforts and lives behind this aim dedicated by many brilliant philosophers, mathematicians, scientists, and engineers have brought a high level of civilization we enjoy today. The decision process is relatively easier when there is a single criterion or object in mind. The process gets complicated when we have to make decisions in the presence of more than one criteria to judge the decisions. In such circumstances a single decision that optimizes all the criteria simultaneously may not exist. For handling such type of situations, we use multiobjective programming, also known as multiattribute optimization, which is the process of simultaneously optimizing two or more conflicting objectives subject to certain constraints. Multiobjective optimization problems can be found in various fields such as product and process design, finance, aircraft design, the oil and gas industry, automobile design, and other where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
This paper is organized as follows. In Sect. 2, we give some preliminaries and definitions used in this paper and also a nontrivial example of such functions. In Sect. 3, we formulate second-order multiobjective symmetric G-Wolfe-type dual programs over arbitrary cones. We prove weak, strong, and converse duality theorems by using G_f-bonvexity/G_f-pseudobonvexity assumptions over arbitrary cones. Finally, we construct nontrivial numerical examples that are G_f-bonvex/G_f-pseudobonvex but neither η-bonvex/η-pseudobonvex nor η-invex/η-pseudoinvex functions.
Preliminaries and definitions
Let f = (f_1, f_2, f_3, ..., f_k) : X → R^k be a vector-valued differentiable function defined on a nonempty open set X ⊆ R^n, and let I_{f_i}(X), i = 1, ..., k, be the range of f_i, that is, the image of X under f_i. Let G_f = (G_{f_1}, G_{f_2}, ..., G_{f_k}) : R → R^k be a differentiable function such that every component G_{f_i} : I_{f_i}(X) → R is strictly increasing on the range I_{f_i}(X), i = 1, 2, 3, ..., k.
Definition 2.1
The positive polar cone S* of a cone S ⊆ R^s is defined by S* = {z ∈ R^s : x^T z ≥ 0 for all x ∈ S}.
Consider the following vector minimization problem (MP), with feasible set S_0.
Definition 2.2
ȳ ∈ S_0 is an efficient solution of (MP) if there exists no other y ∈ S_0 such that f_r(y) < f_r(ȳ) for some r = 1, 2, 3, ..., k and f_i(y) ≤ f_i(ȳ) for all i = 1, 2, 3, ..., k.
Definition 2.3
If there exists a function η : S × S → R^n such that f_i(y) − f_i(v) ≥ [∇f_i(v)]^T η(y, v) for all y ∈ S and i = 1, 2, ..., k, then f is called invex at v ∈ S with respect to η.
Definition 2.4
If there exist G_{f_i} : I_{f_i}(S) → R and η : S × S → R^n such that for all y ∈ S, then f_i is called G_{f_i}-pseudoinvex at u ∈ S with respect to η.
Definition 2.5
If there exist G_{f_i} : I_{f_i}(S) → R and η : S × S → R^n such that for all y ∈ S and p ∈ R^n,
Definition 2.6
If there exist functions G_f and η : S × S → R^n such that for all y ∈ S and p ∈ R^n, then f is called G_f-bonvex at v ∈ S with respect to η. As an example, consider f with f_1(y) = y^10, f_2(y) = arcsin y, f_3(y) = arctan y, f_4(y) = arccot y, and let η : [-1, 1] × [-1, 1] → R be given accordingly. To show that f is G_f-bonvex at v = 0 with respect to η, we have to verify the defining inequality. Putting the values of f_i and G_{f_i}, i = 1, 2, 3, 4, into the last expression and simplifying, the claim follows (see Fig. 2).
Therefore f_3 is not η-bonvex at v = 0 with respect to p. Hence f is not η-bonvex at v = 0 with respect to p. Next, consider f_1(y) = (e^{2y} − 1)/e^y and f_2(y) = y^3. To show that f is G_f-pseudobonvex at v = 0 with respect to η, we have to show that, for i = 1, 2, the corresponding expression φ_i is nonnegative. Substituting the values of η and f_1 at the point v = 0, we get φ_1 ≥ 0 for all y ∈ [-2, 2] and p.
Second-order multiobjective G-Wolfe-type symmetric dual program
Consider the following pair of second-order multiobjective G-Wolfe-type dual programs over arbitrary cones.
Let Y 0 and Z 0 be the sets of feasible solutions of (GWP) and (GWD), respectively.
Then the following inequalities cannot hold together: R_i(y, z, λ, p) ≤ S_i(v, w, λ, q) for all i ∈ K, and R_r(y, z, λ, p) < S_r(v, w, λ, q) for at least one r ∈ K.
Proof The proof follows along the lines of Theorem 3.2.
Concluding remarks
In this paper, we have formulated a second-order symmetric G-Wolfe-type dual problem for a nonlinear multiobjective optimization problem with cone constraints. A number of duality relations are further established under G_f-bonvexity/G_f-pseudobonvexity assumptions on the function f. We have discussed various numerical examples to show the existence of G_f-bonvex/G_f-pseudobonvex functions. The question arises whether the duality results developed in this paper hold for G-Wolfe- or mixed-type higher-order multiobjective optimization problems. This may be a future direction for researchers working in this area.
"year": 2020,
"sha1": "42325185b6f94e942a986ff881a4057fd52c93d6",
"oa_license": "CCBY",
"oa_url": "https://journalofinequalitiesandapplications.springeropen.com/track/pdf/10.1186/s13660-019-2279-0.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "805d922fa596f929926023820b04625ce275af3b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
212653121 | pes2o/s2orc | v3-fos-license | The role of working memory in children's ability for prosodic discrimination
Previous research established that young children are sensitive to prosodic cues discriminating between syntactic structures of otherwise similarly sounding sentences in a language unknown to them. In this study, we explore the role of working memory that children might deploy for the purpose of the sentence-level prosodic discrimination. Nine-year old Slovenian monolingual and bilingual children (N = 70) were tested on a same-different prosodic discrimination task in a language unknown to them (French) and on the working memory measures in the form of forward and backward digit span and non-word repetition tasks. The results suggest that both the storage and processing components of the working memory are involved in the prosodic discrimination task.
Introduction
Prosody, or sentence-level intonational contour, plays a major role in people's language comprehension ability, providing acoustic cues for identifying syntactic phrases or constituents. The underlying assumption is that there is a sufficiently close match between prosodic and syntactic constituency in the world's languages [1][2]. Development of prosodic ability at large is a key factor in children mastering the syntax of their native language [3][4], and is of key importance in learning the syntax of a second or foreign language [5][6][7].
Evidence suggests that infants are sensitive to phrasal prosody in their mother tongue already prenatally and are able to recognize progressively smaller prosodic chunks starting at 4 months old onwards (cf. [8][9][10], among others). Bilingualism and music training are known to sharpen perceptual ability to sounds [11][12][13]. This also includes better success in the prosodic domain, such as easier recognition and discrimination of prosodic patterns as well as rhythm in language or music [14][15][16][17][18][19]. Connections between children's prosodic ability and syntactic processing were also explored. [20] tested 3,5-4,5 years old French-speaking children using sentential preambles culminating in a noun/verb homophone. Children in that study successfully exploited prosodic information in the stimuli to assign the appropriate syntactic category to the target word. Even younger, 18-month old children were shown to use sentential prosody to facilitate word learning [21]. More recently, [22] found that Slovenian-speaking elementary school children make use of a number of prosodic cues to successfully differentiate pairs of short syntactically well-formed sentences in a language unknown to them. Sentences in the pairs only differed in intonational contour. That study compared prosodic discrimination in monolingual and bilingual children as well as children who received several years of musical training. The latter two groups performed better than monolingual controls indicating a greater perceptual sensitivity associated with each type of experience. However, even though it is becoming evident that children's remarkable prosodic sensitivity may be affected by linguistic (e.g. bilingualism) as well as non-linguistic (e.g. music) factors, the above studies largely leave open the issue concerning possible source(s) of this ability. As noted above, prosodic 'bootstrapping' is essential for acquiring the syntactic rules of native as well as second language (s). Therefore, clarifying the nature of the relevant cognitive mechanisms and their role in prosodic discrimination at the sentence level is of significant importance in the general context of the language acquisition task. One possibility is that comparing two nearly identical sentences on the basis of their prosodic signatures involves working memory, in particular, phonological short-term memory (p-STM) including a phonological storage and a subvocal rehearsal process [23]. p-STM is associated with many aspects of speech perception including greater success in learning foreign vocabulary and grammar from novel input (e.g. [24][25]), and acquisition of cross-linguistic phonological regularities [26]. In the context of a prosodic discrimination task comparing the input from two similarly sounding sentences, both sentences need to be actively maintained in the phonological buffer. Better success in discrimination can thus directly or indirectly be associated with a greater p-STM capacity in the general context of speech perception (cf. also [27]).
Depending on a particular theory behind same/different decision making, it is conceivable that a processing component of the working memory is engaged in this process as well (cf. [23], [28]). One type of such theories, dubbed 'accumulator' models, is based on the idea that (possibly noisy) evidence in the form of different features or dimensions is sampled from the stimuli and accumulated over time, terminating at a threshold associated with either a "same" or "different" response (see [29] for review). Accumulation takes place in a particular type of serial or parallel information-processing architecture which may or may not involve different schemes for producing each of the two response types [30][31][32]. This also implies segmentation of the input as a key component of the real time analysis based, as in our case, on prosodic cues (e.g. [33][34]), as well as a possible chunking process (see Discussion). In a prosodic discrimination task, segmented chunks of prosodic/syntactic structure of each sentence in a pair may conceivably be subjected to this kind of evidence accumulating and resolution. Therefore, processing considerations could be applicable as in any information-processing system affecting accuracy in this task. Notably, both the temporary storage and processing components of the working memory are deployed in regular sentence processing (see [35][36][37][38], among others).
The enhanced performance on the sentence discrimination task manifested by bilingual and musically trained individuals also suggests a non-trivial role of the working memory and its processing component in particular. Both population groups perform better at executive control measures than respective controls [39][40][41][42]. Bilingual experience was argued to enhance working memory-related attentional processes, if only under specific experimental conditions (for review, see [43]). Similarly, musical training was shown to enhance verbal memory for spoken words [41].
The present study explores the role of both phonological storage and processing components of the working memory for the purpose of prosodic discrimination of natural language sentences in nine-year old children. Our interest in this particular age group is two-fold. First, short-term memory at large is subject to a developmental trajectory, reaching a developmental plateau by around 8 years [44], while central executive and phonological loop processes including subvocal rehearsal seem to be at place around age 7 [45][46]. Thus, testing children around the onset of their full working memory capacity tests the boundary limits of the hypothesized relationship and also its developmental perspective. This study also contributes to exploration of perceptual abilities correlated with working memory in children close to or within the critical period for the first language acquisition, as related to brain plasticity and age (cf. [47]).
Experiment
We asked whether young children's performance on a prosodic discrimination task correlates with standard working memory measures. We used a same-different discrimination task for the participants to differentiate between pairs of phonemically identical but prosodically different pairs of syntactically well-formed sentences. In addition, participants were tested on: i) forward digit span (FDS); ii) backward digit span (BDS) and iii) non-word repetition (NWR). The FDS and NWR tasks are commonly used to test the storage component of the working memory [44,48]. The BDS task imposes a substantial processing load on the participant, hence is considered to involve the processing component of the working memory as well [49]. As previous research revealed a non-trivial role of bilingualism in prosodic discrimination task, a complementary goal of the present study was to replicate the bilingualism effect in the prosodic discrimination ability reported in [22].
Participants
Seventy Slovenian-speaking monolingual [N = 35, M age = 9.17, SD = 0.39] and closely age-matched bilingual [N = 35, M age = 9.15, SD = 0.34; t(68) = 0.16, p = 0.87] children from elementary schools in Gorizia and Nova Gorica, two towns on the respective sides of the Italian-Slovenian border, were tested. Children's bilingual status was determined on the basis of a parent questionnaire. Seventeen (49%) of the bilingual children were exposed to both languages from birth, fifteen others (42%) by age 2, and three were exposed to the second language between the ages of 3 and 6. One of the languages spoken by the bilinguals was always Slovenian; the other language was predominantly Italian, or, alternatively, Bosnian/Croatian/Serbian, English, Friulian, German, Macedonian, Portuguese, Russian or Spanish. None of the tested participants had previous exposure to French. Three and seven monolingual children (8% and 20%) had up to one and two years of systematic musical instrument training, respectively. The participants had normal or corrected-to-normal vision and no history of hearing disorders. Two children from the tested pool had a minor articulatory deficiency (mispronouncing the "r" sound), which was taken into account when decoding their results from the NWR task. Another two children's (one monolingual, one bilingual) results from the discrimination task were not included in the correlation analyses: one child's data was lost due to an experimenter error; the other's because of a near-chance performance on phonemically different controls (a 50% accuracy threshold was assumed). The experimental protocol was approved by the Ethics Committee of the University of Nova Gorica (ref. no. 24-1/2017) and was carried out in accordance with the relevant guidelines and regulations. Informed consent was obtained from the participants' legal guardian/s.
Design and materials
Discrimination task. The stimuli were a subset of materials used in [22] and included pairs of short sentences in French, the language unknown to participants and notably distinct in its prosodic properties from the participants' native language, Slovenian (e.g. in the lack of lexical stress). Sentence pairs were constructed in such a way so as to exploit the high degree of cross-categorical lexical homonymity in French: words that differ in syntactic category, e.g. noun and verb, can have the same phonological and/or phonetic makeup, e.g., ferme for "farm" or "closes." Specifically, the following cross-categorical ambiguities were exploited: a) between nouns and verbs, b) between nouns and adjectives and c) between definite determiners and phonemically identical weak pronouns or clitics (le, la, les). To illustrate, the sentences [ NP Le jeune garde][ VP la voit] "The young guard sees her" and [ NP Le jeune][ VP garde la voie] "The young man guards the road" are lexically and syntactically different, but sound very similar because there are little or no phonological (segmental or word-level) difference. The only potentially detectable difference is prosodic (supra-segmental): each sentence is pronounced with a different prosodic contour involving a constellation of prosodic cues (see below). As the bracketing in the above pairs of examples indicates, there are two prosodic groupings in each sentence corresponding to respective syntactic constituents: a noun phrase (NP) and a verbal phrase (VP) separated by a prosodic boundary. In one condition, termed noun condition, a long NP is followed by a short VP (the noun condition), in the other condition, termed verb condition, a short NP is followed by a long VP (the verb condition). Prosodic cues that are likely to influence and facilitate discrimination in a given pair of target sentences include: i) phrase-final segment lengthening at the prosodic/syntactic boundary between NP and VP [50]; ii) phrase-initial articulatory strengthening, whereby the onset of the first post-boundary syllable (e.g., /g/ in garde in the above examples) is lengthened in the verb condition compared to the noun condition [51]; iii) a silent pause between the prosodic NP and VP units; iv) a pitch rise whenever a (phrase-final) word precedes a prosodic boundary, as opposed to the condition in which it does not (e.g. jeune in the noun condition compared to jeune in the verb condition above), consistently with the general pattern of the rising pitch contour toward the end of prosodic units in French. These boundary cues are typical for French, but in other combinations may also occur in other languages, suggesting that children's sensitivity in this case may be a function of general acoustic salience [1,2,52]. The prosodic signatures of a sample target sentence pair are illustrated in Fig 1. 
We used 16 phonemically identical-prosodically different stimulus pairs as target pairs, 16 pairs of phonemically and prosodically different sentences, 16 pairs of phonemically and prosodically identical sentences serving as controls, and 16 more pairs of identical filler sentence pairs, to ensure that "different" answers do not predominate creating a habituation among subjects (note that the suprasegmental nature of prosody which feeds on the segmental component makes it highly non-trivial to utilize a logically possible condition with phonemically different but prosodically identical sequences; we therefore used a simpler design manipulating the prosodic factor over the same phonemic units).
Acoustic analyses of duration and pitch on the phonemic segments in the area surrounding the prosodic boundary were conducted (see Fig 1). Total duration of sentences in each condition did not differ significantly (M noun = 2248 ms, SD noun = 32 vs. M verb = 2263 ms, SD verb = 20; t(30) = 0.25, p > 0.10). The time until the onset of the prosodic boundary demarcating the noun and verb phrases differed across the conditions (M noun = 1129 ms, SD noun = 157 vs. M verb = 684 ms, SD verb = 136; t(30) = 8.57, p < 0.001). The segment preceding the prosodic phrase boundary in the noun condition (e.g. approche in Fig 1) was lengthened by 18% compared to the verb condition (M noun = 382 ms, SD noun = 97 vs. M verb = 323 ms, SD verb = 98; t(30) = 1.82, p = 0.03). The segment immediately preceding the prosodic phrase boundary in the verb condition (e.g. ne) was lengthened by 52% compared to the noun condition (M verb = 417 ms, SD verb = 104 vs. M noun = 274 ms, SD noun = 81; t(30) = 6.41, p < 0.001). These observed patterns are consistent with the previous literature on French intonation [50]. The prosodic boundaries in our stimuli are also marked by a (systematically present) silent pause between the noun and verb phrases, whose duration was comparable across both conditions (M noun = 472 ms, SD noun = 162 vs. M verb = 520 ms, SD verb = 124; t(30) = 0.92, p > 0.10).
For each pair of stimulus sentences, we also compared maximum F0s on the words to be found on different sides of a prosodic boundary in the verb condition, but on the same side in the noun condition, e.g. l'acharnée and approche in Fig 1, henceforth referred to as Word 1 and Word 2. This analysis revealed a significant rise of pitch when the word immediately precedes a prosodic boundary (Word 1: F0 max (noun condition) = 140Hz, F0 max (verb condition) = 156Hz, t(30) = 1.73, p = 0.04; Word 2: F0 max (noun condition) = 149Hz, F0 max (verb condition) = 128Hz, t(30) = 3.12, p = 0.001). The average percentage of rise between the F0 minima and F0 maxima located on Word 1 is greater in the verb condition (83.15%) than in the noun condition (40.68%). These patterns are consistent with the general tendency of French for a rising pitch contour towards the end of prosodic units.
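For readers who wish to reproduce comparisons of this kind (segment durations or F0 maxima across the noun and verb conditions), a standard two-sample t-test suffices. The Python sketch below uses randomly generated stand-in arrays rather than the actual acoustic measurements, and is intended only to show the form of the computation.

```python
import numpy as np
from scipy import stats

# Hypothetical per-item durations (16 items per condition), in ms; the real
# values are the acoustic measurements summarized in the text above.
rng = np.random.default_rng(0)
dur_noun = rng.normal(382, 97, size=16)
dur_verb = rng.normal(323, 98, size=16)

res = stats.ttest_ind(dur_noun, dur_verb)            # df = 16 + 16 - 2 = 30
lengthening = (dur_noun.mean() - dur_verb.mean()) / dur_verb.mean() * 100
print(f"t(30) = {res.statistic:.2f}, p = {res.pvalue:.3f}, "
      f"lengthening = {lengthening:.0f}%")
```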
Overall there were 64 stimulus pairs. Within target pairs, the order of presentation between the two syntactic structures was counterbalanced, with 8 stimuli beginning with the noun condition, and 8 with the verb condition. The sentences were also balanced by length which amounted to 7 ± 2 syllables. The stimuli were pre-recorded by a male native French speaker with an interval of 5 seconds between the sentences within each pair. Speech was digitized to a computer with a sampling rate of 44.1 KHz and a 16-bit sampling depth, onto two stereo channels.
Digit span tasks. We used the visual, rather than verbal, version of the tasks in order to avoid potential confounds connected with participants' bilingualism and dominant language. It was assumed that the children were able to count to 10, in line with their current level of school education. In the FDS, children were asked to repeat sequences of digits in the same order, whereas in the BDS, they repeated them in reverse order. Non-word repetition task. This task was modeled on Children's test of Nonword repetition (CNRep, [48]) which we adapted for the Slovenian language. Forty non-words were constructed while carefully controlling for the Slovenian phonotactic restrictions including restrictions on consonant clusters within a syllable and on stress patterns for words of similar length and syllable complexity, as well as a reasonably balanced distribution of the standard language's vocalic repertoire [53][54][55]. Only phonotactically admissible CVC, CCVC, and CVCC consonantal clusters within a syllable were used. Stress was placed in locations typically
occurring in actual Slovenian words with a similar syllable structure. This way we ensured that segment (phoneme) sequences within each non-word were phonotactically and prosodically legal. For consistency, all non-words began with a consonant. The length of non-words varied between 2-5 syllables long and was balanced across the stimuli (10 nonwords of each length). The stimuli were pre-recorded by a female native speaker of Slovenian and digitized into .wav files using Praat [56].
Procedure
Children were tested in a quiet room on school premises. The experiment was administered in two separate sessions, with an interval of 1-3 days between the sessions for each child. One session included the prosodic discrimination task, the other included the FDS, BDS and NWR tasks. Approximately half of the children were tested first on the discrimination task followed by the working memory tasks, whereas the other half were tested in the reversed order. Each session lasted between 15-20 minutes in total. For their contribution, the participants were rewarded a pen or a sticker.
The discrimination task. Participants heard pairs of stimulus sentences played binaurally at a comfortable listening level and were asked to determine whether these sentences sound the same or different. Stimuli were presented in a pseudo-randomized order different for each child, with no prior familiarization stage. There was a self-timed break after 32 stimulus pairs. During a practice trial, participants listened to six exemplars of each experimental condition through computer-internal loudspeakers and were given feedback by the experimenter.
The digit span tasks. Digits in each sequence appeared on the screen for 1000 milliseconds one after another with no inter-stimulus interval. There were four trials per block. Each task started with sequences of three (randomly chosen) digits, and the sequences got progressively longer by one digit per block. When the child repeated the first three trials within a block of four correctly, the task automatically continued with the next block. In case of a non-contiguous succession of correct answers, all four sequences were presented. The task began with a few practice trials followed by the main experiment. The experiment stopped after two or more (≥50%) incorrect trials within one block.
The non-word repetition task. Non-words were played back to participants via the headphones at a comfortable listening level, with a time interval of 5 seconds between the stimuli, during which the participant had to repeat the stimulus. Stimuli were presented to the participants in the pseudo-randomized order. Participants' answers to each stimulus were recorded for later analysis.
Distribution of accuracy scores on phonemically identical-prosodically different trials per participant across the bilingualism factor is shown in Fig 2.
The digit span tasks
A scoring procedure was applied based on the number of correctly scored sequences (cf. [58]). A trial was scored with 1 point if a sequence was recalled correctly, and with 0 points if participants recalled one or more digits in a sequence incorrectly or if they omitted one or more digits. Thus, for instance, if the task stopped at the level of 5-digit sequence, a participant's score could vary between 6 and 8 (3 at the 3-digit level + 3 at the 4-digit level + 0-2 at the 5-digit level).
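The scoring rule just described can be expressed compactly in code. The following Python sketch is only an illustration of the rule (one point per fully correct sequence, with testing stopped once half of a four-trial block is failed); the response data in it are hypothetical.

```python
def score_digit_span(blocks):
    """blocks: list of blocks, each a list of (presented, recalled) digit tuples.
    A trial scores 1 only if the recalled sequence matches the presented one
    exactly; scoring stops once a block contains two or more incorrect trials."""
    total = 0
    for block in blocks:
        correct = [presented == recalled for presented, recalled in block]
        total += sum(correct)
        if correct.count(False) >= 2:   # >= 50% of a 4-trial block incorrect
            break
    return total

# Hypothetical participant: passes the 3- and 4-digit blocks (short-circuited
# after three correct trials), then fails at the 5-digit level.
blocks = [
    [((1, 2, 3), (1, 2, 3))] * 3,
    [((4, 7, 1, 9), (4, 7, 1, 9))] * 3,
    [((2, 8, 5, 3, 6), (2, 8, 5, 3))] * 2 + [((1, 4, 9, 2, 7), (1, 4, 9, 2, 7))] * 2,
]
print(score_digit_span(blocks))   # 3 + 3 + 2 = 8
```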
Non-word repetition task
Participants' answers in the recordings were manually coded for correctness of repetition by a native Slovenian speaker (one of the experimenters) who was aware of the task. A random 10% of the data were additionally coded by another Slovenian speaker who was not aware of the purpose of the task. Agreement between the raters was 94.3% and Cohen's unweighted kappa was 0.88 indicating a high degree of inter-rater reliability.
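Inter-rater agreement of the kind reported here can be computed, for example, with scikit-learn. In the sketch below the two rating vectors are dummy placeholders standing in for the coders' correct/incorrect judgments on the double-coded subsample.

```python
from sklearn.metrics import cohen_kappa_score

# Dummy binary codings (1 = repetition judged correct) for the double-coded
# subsample; the real codings are those of the two raters described above.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)   # unweighted kappa, as in the text
print(f"raw agreement = {agreement:.3f}, kappa = {kappa:.2f}")
```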
The overall success on all items was 48% (SD = 15). Types of repetition errors that lead to registering a response as incorrect included: i) consonantal and vocalic omissions; ii) assimilation to a preceding or following consonant; iii) metathesis; iv) consonantal and vocalic insertions; v) substitutions; vi) incomplete productions. The overall accuracy across the four syllable lengths is summarized in Fig 3. Bilingual and monolingual participants did not differ in their overall repetition accuracy (t(59) = 0.70, p = 0.48). Pearson's product-moment coefficient indicated a significant negative correlation between repetition accuracy and the length of non-words, as shown in Fig 4. The latter finding corroborates the results reported in previous studies of non-word repetition in various types of populations and is consistent with the models of working memory that involve a p-STM component with a rehearsal procedure, (cf. [59][60]).
Correlation analyses and models
Focusing on the subset of monolingual and bilingual children's responses on the target (phonemically same, prosodically different) pairs, significant positive correlations were observed between the success ratios in the discrimination task and each of the three other measures. In addition, correlations were also observed between the non-word repetition and each of the FDS and the BDS tasks, as well as between the FDS and BDS tasks themselves. Table 1 and Figs 5-7 show correlations in the raw and aggregated participant data, respectively (aggregation is performed by taking the mean over accuracy in discrimination task and success in each of working memory measures).
Since performance on each of the three WM tasks correlated with discrimination accuracy, we were also interested to know to what extent it predicts individual same-different responses in the discrimination task. To that end, mixed effects logistic regression models were constructed using the lme4 package in R version 3.5.3 [61][62], with accuracy on the discrimination task as the (categorical) binary dependent variable. Participant and item were treated as random factors with intercepts, with bilingualism status as a random slope. We used a stepwise upward model selection procedure whereby fixed factors were progressively added in such a way that the Akaike Information Criterion (AIC) was minimized, indicating better fit. Three of the four fixed factors of interest (bilingualism and performance on the FDS and BDS tasks) improved the model fit and were thus added in the final model. The NWR performance did not improve (in effect, worsened) the model fit and therefore was not added. The results of the model with the three fixed factors are reported in the upper portion of Table 2. As expected, the models revealed main effects of the bilingualism factor and of performance in the FDS and BDS tasks, as well as of the NWR task, with each of the corresponding estimates signaling an increase in the respective probability of correct discrimination by the degree indicated in the third column of Table 2. There were, however, no significant two-way interactions between the predictors themselves (respective three-way models did not converge), with only the interaction Bilingualism × FDS improving the model fit somewhat. We also ran linear mixed effects models on participants' RTs using the lme4 package for R (only RTs on the correct responses were analyzed). RTs were similar across the conditions and no significant predictors emerged in this case (all ps > 0.10).
Discussion
We found that children's performance on the relevant working memory measures positively correlated with their overall performance on the discrimination task and also predicted individual discrimination decisions. These results supported the hypothesis that both the storage and processing components of the working memory are at play. However, we observed no interaction of the respective working memory measures, suggesting that a greater storage as
well as processing capacity do not necessarily lead to a better success in prosodic discrimination performance than each of these components on its own. Another important result of the present study is a replication of the bilingualism effect in prosodic discrimination [22]. The extent to which bilinguals enjoy cognitive advantages at different ages is currently under debate in the literature [43,63]. The present study supports the idea that in the domain of prosodic differentiation at the sentence level, bilingual elementary school-aged children enjoy a stable perceptual advantage, which is also consistent with studies reporting bilinguals' better success at foreign language learning (e.g. [13]). It is important to note, however, that this advantage cannot be directly associated with better working memory in bilinguals, in particular, the p-STM and/or processing component, as indicated by the absence of bilingualism effects in our administered WM measures and weak to no interactions between performance markers in the discrimination task and the WM tasks. This should not appear surprising given that the previously observed cognitive advantages related to bilingualism pertain mostly to executive control, to the extent that the latter is distinct from and/or covers different functions than p-STM (cf. Introduction).
A further question is the extent to which (specific components of the) working memory are deployed in the prosodic discrimination task. A pertinent line of inquiry explored in the literature concerns the chunking process. It has been suggested that listeners do not interpret speech on a sound-by-sound basis, but base their decisions over some perceptual unit that can span a number of elements [64]. Given the limited capacity of p-STM, it has been suggested that a chunking process applies in speech processing creating smaller chunks of input in the working memory [23,28,65]. These chunk-like units are detectable, for instance, by specific ERP components in speech processing [10,[66][67][68][69][70]. A better p-STM capacity would thus be associated with a greater number of stored prosodic units, or, given a limited number of those (as is the case in the present study), their stronger memory traces, as well as a better segmentation process.
Broadly speaking, comparing two sentence-level prosodic signatures in the present study is also similar to comparison of two short musical melodies. Brain imaging studies using different methodologies consistently indicate that, in melody discrimination tasks, brain structures associated with auditory working memory are activated to a greater extent in musically untrained children than in musically trained ones, likely due to less auditory experience with music sampling and sequencing [71][72][73]. Consequently, a greater working memory capacity is again beneficial in this regard.
Other cognitive skills that we have not controlled for in the present study could be additional predictors of the prosodic discrimination performance. One potential candidate is phonological and especially phonemic awareness, the ability to consciously recognize and manipulate specific sounds and sound combinations in an auditory input in predictable ways [74]. In addition, phonological awareness, on the one hand, and p-STM or phonological loop, on the other, may have a common cognitive underpinning [75]. Further research should explore the role of this and other related factors in better understanding the mechanisms behind prosodic discrimination.
The results of the present study are also consistent with the developmental models according to which the full working memory capacity responsible for adult-like performance on the working memory/p-STM tasks is by and large available at age 9 and later (see the references in Introduction). A further interesting question would be how the prosodic sensitivity at the sentence level is affected in various pathological circumstances (e.g. brain damage, hearing loss etc.) in the developmental context, alongside potential collateral effects on the working memory. This, together with other developmental factors that may affect performance in the sentence-level prosodic discrimination task, is among possible directions for further inquiry. | 2020-03-11T13:10:28.350Z | 2020-03-09T00:00:00.000 | {
"year": 2020,
"sha1": "fff0605a7448eed2cf3e16f9fe94aa1699bbe2bc",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0229857&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de508752bcfb652e531748e35a262b959a3a2d33",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
21104304 | pes2o/s2orc | v3-fos-license | Acidotolerant Bacteria and Fungi as a Sink of Methanol-Derived Carbon in a Deciduous Forest Soil
Methanol is an abundant atmospheric volatile organic compound that is released from both living and decaying plant material. In forest and other aerated soils, methanol can be consumed by methanol-utilizing microorganisms that constitute a known terrestrial sink. However, the environmental factors that drive the biodiversity of such methanol-utilizers have been hardly resolved. Soil-derived isolates of methanol-utilizers can also often assimilate multicarbon compounds as alternative substrates. Here, we conducted a comparative DNA stable isotope probing experiment under methylotrophic (only [13C1]-methanol was supplemented) and combined substrate conditions ([12C1]-methanol and alternative multi-carbon [13Cu]-substrates were simultaneously supplemented) to (i) identify methanol-utilizing microorganisms of a deciduous forest soil (European beech dominated temperate forest in Germany), (ii) assess their substrate range in the soil environment, and (iii) evaluate their trophic links to other soil microorganisms. The applied multi-carbon substrates represented typical intermediates of organic matter degradation, such as acetate, plant-derived sugars (xylose and glucose), and a lignin-derived aromatic compound (vanillic acid). An experimentally induced pH shift was associated with substantial changes of the diversity of active methanol-utilizers suggesting that soil pH was a niche-defining factor of these microorganisms. The main bacterial methanol-utilizers were members of the Beijerinckiaceae (Bacteria) that played a central role in a detected methanol-based food web. A clear preference for methanol or multi-carbon substrates as carbon source of different Beijerinckiaceae-affiliated phylotypes was observed suggesting a restricted substrate range of the methylotrophic representatives. Apart from Bacteria, we also identified the yeasts Cryptococcus and Trichosporon as methanol-derived carbon-utilizing fungi suggesting that further research is needed to exclude or prove methylotrophy of these fungi.
List of supplementary Figures
Figure S1. nMDS analyses of bacterial and fungal communities in the 'heavy' and 'middle' fractions of both SIP experiments.
Figure S2. Gene numbers of mmoX genes of treatments with different pH in the pH shift SIP experiment.
Figure S7. Diversity and richness estimators of mxaF sequences from pyrosequencing amplicon pools at a similarity level of 90%.
Figure S8. Diversity and richness estimators of ITS gene sequences from pyrosequencing amplicon pools at a similarity level of 97% (species level).
Figure S9. Composition of the various mxaF phylotypes after different substrate or pH treatments.
List of supplementary Tables
Table S1. Relative abundances of labeled bacterial taxa (OTU) based on 16S rRNA gene sequences in all fractions (H, heavy; M, middle; L, light) of the [12C]- and [13C1]-methanol treatments of the substrate SIP experiment. Table S2.
Taxonomic affiliation of bacterial phylotypes (OTUs with family-level cutoff 90.0% based on 16S rRNA gene sequences) in numerical order. Table S3.
Relative abundances of labeled fungal taxa (OTU) based on ITS gene sequences in all fractions (H, heavy; M, middle; L, light) of [ 12 C]-and [ 13 C 1 ]-methanol treatments of Substrate SIP experiment. Table S5.
Taxonomic affiliation of fungal phylotypes (ITS gene sequences clustered at species-level 97% similarity cut-off) in numerical order. Table S6.
Relative abundances of labeled bacterial taxa (OTU) based on 16S rRNA gene sequences in all fractions (H, heavy; M, middle; L, light) of [ 12 C]and [ 13 C 1 ]-methanol treatment at pH 4 of pH SIP experiment. Table S7.
Relative abundances of labeled bacterial taxa (OTU) based on 16S rRNA gene sequences in all fractions (H, heavy; M, middle; L, light) of [ 12 C]and [ 13 C 1 ]-methanol treatment at pH 7 of pH SIP experiment. Table S8.
Relative abundances of labeled taxa (OTU) based on mxaF gene sequences in all fractions (H, heavy; M, middle; L, light) of [ 12 C]-and [ 13 C 1 ]-methanol treatments at pH 4 of pH SIP experiment. Table S9.
Relative abundances of labeled taxa (OTU) based on mxaF gene sequences in all fractions (H, heavy; M, middle; L, light) of [ 12 C]-and [ 13 C 1 ]-methanol treatments at pH 7 of pH SIP experiment. Table S10. Relative Table S13. Similarity analyses of bacterial communities (family-level with 90.1% cutoff of 16S rRNA gene sequence) of both SIP experiments based on ANOSIM (Analysis of Similarity) and NPMANOVA (non-parametric multivariate analysis of variance). Table S14. Similarity analyses of fungal communities (family-level with 97.0% cut-off of ITS gene sequence) of both SIP experiments based on ANOSIM (Analysis of Similarity) and NPMANOVA (non-parametric multivariate analysis of variance). Table S15. Similarity analyses of mxaF-possessing methylotrophic communities (90% cut-off) of both SIP experiments based on ANOSIM (Analysis of Similarity) and NPMANOVA (non-parametric multivariate analysis of variance). 1% for Bacteria (16S rRNA gene sequences, family-level; reduced dataset, for detailed information see Supplemental Materials and Methods) and 97% for Fungi (ITS gene sequences species-level). Stress values are given in brackets. All analyses are based on Bray-Curtis similarity index. Symbols according to SIP experiment: , substrate SIP; , pH 4; , pH 7. ' 12 C' indicates [ 12 C]-substrates and ' 13 C' indicates [ 13 C u ]-substrates. Symbols according to supplemented [ 13 C u ]-substrate: , methanol; , acetate +; , glucose +; , xylose +; , vanillic acid +; , CO 2 +; , CO 2 (cross indicates additional supplementation of [ 12 C]-methanol in substrate SIP experiment). Table S2) and confirmed by positioning in phylogenetic tree (data not shown) b Sequence identity with BLASTn < 90 % as well as ambiguous position in phylogenetic tree (for further information see Table S2) Percentage of labeled taxa to total fraction [%] . 24 51 a Taxonomic affiliation was done with BLASTn (November 2015) and is based on the next cultivated hit for each OTU (for further information see Table S8) b Sequence identity of next cultured hit < 90 %, phylogenetic affiliation up to order level d Sequence identity of next cultured hit ≥ 95 %, phylogenetic affiliation up to genus level e Query of next cultured hit was only 72 % with BLASTn analysis Percentage of labeled taxa to total fraction [%] . 43 48 a Taxonomic affiliation was done with BLASTn (November 2015) and is based on the next cultivated hit for each OTU (for further information see Table S8) c Sequence identity of next cultured hit < 95 %, phylogenetic affiliation up to family level d Sequence identity of next cultured hit ≥ 95 %, phylogenetic affiliation up to genus level Table S2) and confirmed by positioning in phylogenetic tree (data not shown) b Sequence identity with BLASTn < 90 % as well as ambiguous position in phylogenetic tree (for further information see S2) Percentage of labeled taxa to total fraction [%] . 
75 50 a Taxonomic affiliation was done with BLASTn (December 2015; for further information see Table S2) and confirmed by positioning in phylogenetic tree (data not shown) b Sequence identity with BLASTn < 90 % as well as ambiguous position in phylogenetic tree (for further information see Table S2) 2015) and was done with a bayesian classifier implied with MOTUHR based on the best hit of consensus taxonomy after 100 bootstrapped assignments (for further reference sequences based on 'massBLASTer' of UNITE see Table S5) b Taxa in brackets dominated fungal order or family c Treatment with methanol (MeOH), acetate (Ace), glucose (Glu), xylose (Xyl), vanillic acid (Van) and carbon dioxide (CO 2 ); cross (+) indicates additional methanol supplementation d Treatment with methanol at different pH conditions (pH 4 and pH 7) a Treatment with methanol (MeOH), acetate (Ace), glucose (Glu), xylose (Xyl), vanillic acid (Van) and carbon dioxide (CO 2 ); cross (+) indicates additional methanol supplementation b Treatment with methanol at different pH conditions (pH 4 and pH 7) c Comparison between t 0 of Substrate SIP experiment and pH-SIP experiment (Sub vs pH) and between both t0 of pH-SIP (pH4 vs pH7)
Table S14. Similarity analyses of fungal communities (family-level with 97.0% cut-off of ITS gene sequence) of both SIP experiments based on ANOSIM (Analysis of Similarity) and NPMANOVA (non-parametric multivariate analysis of variance).
Values of total analyses in bold, pairwaise analyses in cursive. a Treatment with methanol (MeOH), acetate (Ace), glucose (Glu), xylose (Xyl), vanillic acid (Van) and carbon dioxide (CO 2 ); cross (+) indicates additional methanol supplementation b Treatment with methanol at different pH conditions (pH 4 and pH 7) c Comparison between t 0 of Substrate SIP experiment and pH-SIP experiment (Sub vs pH) and between both t0 of pH-SIP (pH4 vs pH7) a Treatment with methanol (MeOH), acetate (Ace), glucose (Glu), xylose (Xyl), vanillic acid (Van) and carbon dioxide (CO 2 ); cross (+) indicates additional methanol supplementation b Treatment with methanol at different pH conditions (pH 4 and pH 7) c Comparison between t 0 of Substrate SIP experiment and pH-SIP experiment (Sub vs pH) and between both t0 of pH-SIP (pH4 vs pH7) Table S2) and confirmed by positioning in phylogenetic tree (data not shown) b Sequence identity with BLASTn <90% as well as ambiguous position in phylogenetic tree (for further information see Table S2) c Treatment with methanol (MeOH), acetate (Ace), glucose (Glu), xylose (Xyl), vanillic acid (Van) and carbon dioxide (CO 2 ); cross (+) indicates additional methanol supplementation d Treatment with methanol at different pH conditions (pH 4 and pH 7) | 2017-08-15T05:50:38.436Z | 2017-07-24T00:00:00.000 | {
"year": 2017,
"sha1": "7fea56df65afe3a8a9102384964932c762f90abd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.01361/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fea56df65afe3a8a9102384964932c762f90abd",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3995771 | pes2o/s2orc | v3-fos-license | Speech Emotion Recognition Considering Local Dynamic Features
Recently, increasing attention has been directed to the study of the speech emotion recognition, in which global acoustic features of an utterance are mostly used to eliminate the content differences. However, the expression of speech emotion is a dynamic process, which is reflected through dynamic durations, energies, and some other prosodic information when one speaks. In this paper, a novel local dynamic pitch probability distribution feature, which is obtained by drawing the histogram, is proposed to improve the accuracy of speech emotion recognition. Compared with most of the previous works using global features, the proposed method takes advantage of the local dynamic information conveyed by the emotional speech. Several experiments on Berlin Database of Emotional Speech are conducted to verify the effectiveness of the proposed method. The experimental results demonstrate that the local dynamic information obtained with the proposed method is more effective for speech emotion recognition than the traditional global features.
Introduction
As is well known, speech conveys additional messages beyond the words, such as the emotion or identity of the speaker. With the rapid development of human-computer interaction in recent years, there is a growing interest in emotion recognition from speech. Recognizing emotion from speech helps machines to communicate with humans. However, this is a challenging problem because the expression of emotion varies from one person to another [1]. The traditional method of speech emotion recognition is as follows. In most existing approaches, low-level features of each frame in an utterance are extracted first. Then, statistical features such as the mean, maximum, and minimum values over these frames are calculated for the whole utterance. However, taking only the features of the whole emotional utterance into account is somewhat unreasonable, since humans' perception of emotional speech is diverse. Considering the computational capability of computers, some salient features are usually selected to represent the nature of the emotional speech. Therefore, feature selection, which explores the features that are most effective for the expression of emotion, is critical for improving recognition performance. Finally, these selected salient features are fed into a classifier to conduct the speech emotion classification.
In most of the previous works, global acoustic features of an utterance are usually adopted to eliminate the content differences and reduce the number of features [2]. However, emotional information of the speech is usually characterized by its dynamic changes [3]. In other words, the emotion-related components varies with time, rather than being constant in an utterance. Thus, the utilization of global statistical features alone, which takes statistics of the features in a whole utterance, may disregard some local dynamic information of emotion in speech.
To take such local information into consideration, segmentation is a simple way to avoid the shortcomings of global features. There have been some studies working on the segmentation of utterances for the classification of speech emotion. In the work of Björn Schuller et al. [4], several segmentation schemes are proposed. The experimental results show that the combination of global and relative time interval features makes a significant improvement. Je Hun Jeon et al. [5] compare different segment units (3-word segments, phrases, and time-based segments) and find that using time-based subsentence segment units outperforms the others. Hao Zhang et al. [6] use different segment selection approaches based on entropy, mutual information, and correlation coefficients, which yields better performance. Krothapalli Sreenivasa Rao et al. [7] report that the performance due to local prosodic features is above that of global ones. All this previous research reveals the effectiveness of segmental features for speech emotion recognition compared with global utterance features.
Besides, the prosodic features conveying significant emotional information have been utilized and analyzed in many previous studies [2], [8]. Pitch, as one of the prosodic features, has been found to be discriminative across different emotions, to some extent. For example, the average pitch of speech with an anger or happiness emotion is usually higher than that of speech with a sadness or fear emotion. In addition, the contour of pitch also differs among utterances with different emotions [9].
As for the classifiers utilized in previous research, some unsupervised learning methods are commonly used, such as the Gaussian Mixture Model (GMM) in [10]. In addition, the Support Vector Machine (SVM), which is a kind of supervised learning method, is employed more often because of its ability to model small-scale data with fewer parameters to be trained. Its target is to find a hyperplane that separates the data. Recently, with the development of deep learning methods, the Deep Neural Network (DNN), Deep Belief Network (DBN), and other deep learning methods, which are inspired by the perception mechanism of the human brain, have also been utilized in speech emotion recognition [11], [12]. However, large-scale datasets are necessary for training such deep learning methods.
In this paper, a time-based segmentation approach, which divides an utterance according to time without using lexical information, is utilized to capture the temporal information of the emotional speech. The use of time-based segmentation achieves higher real-time capability, which can improve audio stream processing performance to a certain degree. In addition, a novel pitch probability distribution feature, obtained by computing a histogram, is proposed as a local dynamic prosodic feature, since pitch plays an important role in the expression of emotion and a histogram can reflect the distribution of the values to a certain degree. Firstly, the pitch histogram and other acoustic features are extracted from each segment of the utterance. After that, an optional processing step of principal components analysis (PCA) is adopted for feature selection. Finally, these selected features are fed into an SVM classifier and the predicted class of emotion is obtained. The proposed framework for speech emotion recognition is illustrated in Fig. 1. Several comparative experiments are designed to validate the effectiveness of the proposed method. Based on the comparison of the experimental results, we can conclude that the combination of segmentation and the pitch probability distribution features, which considers the local dynamic information, achieves better results.
Fig. 1. Overview of the proposed method
The rest of this paper is organized as follows. In Section 2, the detailed method is provided, in which the time-based segmentation and the proposed novel pitch probability distribution features obtained by drawing the histogram are introduced. Experimental conditions and results are presented in Section 3. Discussion and conclusion are given in Section 4.
2 Time-based segmentation and local dynamic pitch probability distribution feature extraction
Time-based segmentation
The Relative Time Intervals (RTI) approach [4] is utilized for time-based speech segmentation. In addition, the traditional Global Time Intervals (GTI) approach is adopted for comparison; it simply means using the whole utterance without segmentation, as is usually done in traditional methods. Figure 2 illustrates the application of the GTI and RTI approaches to utterances of different durations. In the time-based segmentation approaches, dividing an utterance into equal parts is one of the simplest techniques, which also guarantees the same number of segments for every utterance. Therefore, in the RTI approach, as shown in Fig. 3, an utterance of length T is first divided into n segments with the same duration of T/n, and n is kept invariant in the whole process. Next, each segment is divided into frames of 25 ms length with 15 ms overlap.
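For illustration, the sketch below shows one way the relative-time-interval segmentation and framing described above could be implemented. The function names, the use of NumPy, and the synthetic 16 kHz signal are our own assumptions rather than part of the original work; the default of three segments simply mirrors the configuration that later gives the best results.

```python
import numpy as np

def rti_segments(signal, n_segments=3):
    """Split a 1-D utterance signal into n segments of equal duration (RTI).

    GTI (no segmentation) corresponds to n_segments=1.
    """
    # np.array_split keeps all samples even when the length is not divisible by n
    return np.array_split(np.asarray(signal), n_segments)

def frame_segment(segment, sample_rate, frame_ms=25, overlap_ms=15):
    """Cut a segment into frames of frame_ms length with overlap_ms overlap."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * (frame_ms - overlap_ms) / 1000)  # 10 ms hop
    frames = [segment[start:start + frame_len]
              for start in range(0, len(segment) - frame_len + 1, hop)]
    return np.stack(frames) if frames else np.empty((0, frame_len))

# usage sketch: a 2-second placeholder utterance sampled at 16 kHz
utterance = np.random.randn(32000)
segments = rti_segments(utterance, n_segments=3)
frames_per_segment = [frame_segment(seg, sample_rate=16000) for seg in segments]
```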
Local dynamic pitch probability distribution feature extraction
After the segmentation, the pitch value of each frame is calculated, and only the values within a certain range are taken into account for the pitch histogram computation. As shown in Fig. 3, the horizontal axis corresponds to several bins (or intervals) of the pitch range, while the vertical axis is the occurrence frequency of the pitch falling into each bin. The pitch histogram is normalized, with the sum of the heights equaling one. When the range of pitch is set to [a, b] and the bin width is h, there will be (b - a)/h bins for each segment in the histogram.
Finally, the values of the bins are concatenated and treated as the pitch probability distribution feature, which is then fed into the classifier for emotion recognition, together with some other features extracted from each segment, as described in the next section. Z-score normalization is used to eliminate the difference in the scales of different kinds of features. The calculation is as follows: z = (x - μ) / σ, where μ and σ are the mean value and standard deviation of the population, respectively.
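A minimal sketch of the per-segment pitch histogram and the z-score normalization described above is given below. The pitch range [a, b] = [50, 500] Hz is an illustrative assumption (only the 50 Hz bin width echoes the setting reported later), and the frame-level pitch values are taken as given, e.g. from an external pitch tracker.

```python
import numpy as np

def pitch_histogram(pitch_values, a=50.0, b=500.0, h=50.0):
    """Normalized histogram of frame-level pitch values within [a, b].

    Returns (b - a) / h bin heights whose sum is one (zeros if no voiced frames).
    """
    pitch = np.asarray(pitch_values, dtype=float)
    pitch = pitch[(pitch >= a) & (pitch <= b)]        # keep in-range values only
    n_bins = int(round((b - a) / h))
    counts, _ = np.histogram(pitch, bins=n_bins, range=(a, b))
    total = counts.sum()
    return counts / total if total > 0 else np.zeros(n_bins)

def zscore(features):
    """Column-wise z-score normalization: (x - mean) / std."""
    x = np.asarray(features, dtype=float)
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    return (x - mu) / np.where(sigma == 0, 1.0, sigma)

# usage: concatenate the histograms of all segments into one feature vector
segment_pitches = [np.array([120, 135, 0, 150]), np.array([180, 210, 0]), np.array([90, 95])]
feature_vector = np.concatenate([pitch_histogram(p) for p in segment_pitches])
```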
Experimental conditions
In this paper, our proposed approach is experimentally evaluated on the commonly used Berlin Database of Emotional Speech (Emo-DB), which contains 535 utterances in German covering seven emotions [13]. Ten sentences without emotional content are acted by five actresses and five actors, all of whom are professional. The baseline acoustic feature set follows [15] and is usually employed as a global feature set. The features are obtained by applying 12 functionals to several low-level descriptors (LLDs), including zero-crossing rate (ZCR), root mean square (RMS) energy, pitch, harmonics-to-noise ratio (HNR), and MFCC 1-12, together with their first-order delta regression coefficients.
The whole LLDs and functionals in the feature set are shown in Table 1. These features are extracted automatically with the open resource toolkit openSMILE [16].
HNR, as one of the LLDs, is computed from the Autocorrelation Coefficient Function (ACF) and can be regarded as a voicing probability. It is calculated as HNR(n) = [Σ_{m=1}^{N−T0} x_m x_{m+T0}] / [Σ_{m=1}^{N} x_m^2], in which T0, N, and x_m denote the fundamental period [17], the frame length, and the m-th sampling point in the n-th frame, respectively. For the classification model, we used an SVM with the WEKA 3 Data Mining Toolkit [18]. A linear kernel is applied to avoid overfitting. Leave-one-out cross-validation is performed for SVM training and testing to maximize the scale of the training data.
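The classification setup described above (a linear-kernel SVM evaluated with leave-one-out cross-validation) was run with WEKA in the original work; the scikit-learn sketch below is only an illustrative stand-in under that assumption, with placeholder feature and label arrays.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: one z-scored feature vector per utterance, y: emotion labels (placeholders)
rng = np.random.default_rng(0)
X = rng.standard_normal((70, 409))
y = np.tile(np.arange(7), 10)            # seven Emo-DB emotion classes, 10 samples each

clf = SVC(kernel="linear", C=1.0)        # linear kernel to limit overfitting
predictions = cross_val_predict(clf, X, y, cv=LeaveOneOut())
accuracy = float(np.mean(predictions == y))
print(accuracy)
```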
In order to evaluate the effectiveness of our proposed method and features, several comparative experiments are conducted from different aspects. Firstly, an experiment using global acoustic features of whole utterances without segmentation is regarded as the benchmark. Then, experiments with different segmentation methods are conducted to verify the effectiveness of our proposed local dynamic pitch probability distribution features. In addition, principal components analysis (PCA) is employed for dimensionality reduction of the features.
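Where PCA is mentioned above, a cumulative-contribution criterion can be applied as in the sketch below; treating 0.99 as the retained-variance threshold mirrors the 99.0% figure reported in the results, while the feature matrix here is a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
features = rng.standard_normal((535, 1200))   # placeholder utterance-level features

# n_components given as a float keeps the smallest number of components whose
# cumulative explained-variance ratio reaches that fraction
pca = PCA(n_components=0.99, svd_solver="full")
reduced = pca.fit_transform(features)
print(reduced.shape[1], pca.explained_variance_ratio_.sum())
```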
Experimental results
In this paper, the weighted average recall (WA, the number of correctly classified instances divided by the total number of instances) and the unweighted average recall (UA, the mean value of the recall for each class) are used to evaluate the classification performance; the weighted average recall reflects the overall accuracy for imbalanced classes. Table 2 presents the speech emotion recognition accuracies of the different comparative experiments. Comparing the results of Experiment 1 without segmentation with the others, we find that time-based segmentation contributes to the accuracy with significant improvements. In addition, the pitch probability distribution features are able to increase the accuracy as well. Furthermore, with segmentation and the pitch probability distribution features applied together, the performance is further improved in Experiment 4. When the dimensionality is reduced to 409 with the utilization of PCA (cumulative contribution rate: 99.0%), the best result is achieved in Experiment 5, whose relative error rate is 18.08% lower than the benchmark in terms of UA. The improvement is achieved by the local dynamic information extracted with the segmentation approach. The confusion matrices of the benchmark (Experiment 1) and the best result (Experiment 5) are given in Tables 3 and 4. From the confusion matrices, we can observe that the performance of our proposed method in Experiment 5 is much better than that in Experiment 1 for most of the emotions, which verifies the effectiveness of our method. Table 5 gives the performance of the proposed method in terms of UA for each emotion state. We observe that the segmentation and local dynamic pitch probability distribution features increase the recognition performance for the majority of emotion states, except for happiness and anger. This result is understandable because happiness and anger utterances have similar dynamic trends in the pitch features [9] and therefore tend to be confused with each other.

Besides, some further experiments are also conducted to explore how the number of segments and the bin width of the pitch histogram affect the recognition result. The experimental results show that the combination of pitch probability distribution features and commonly used features extracted from each segment, together with dimensionality reduction using PCA (i.e., the experimental program of Experiment 5), achieves the best result in each experimental condition. Table 6 shows the results under the experimental program of Experiment 5 in terms of UA (%). In order to examine the relationship between the number of segments and the recognition results, experiments with four and five segments for each utterance are conducted, but the results are not as good as those with three segments. In addition, the UA decreases with the increase of the number of segments. Moreover, the result with a bin width of 50 Hz is better than that with 25 Hz. A possible reason is that with smaller granularity of the segmentation and of the pitch probability distribution feature extraction, some of the emotional information is counteracted by the content differences in an utterance, and therefore it is adverse to the recognition of speech emotion.
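The two evaluation scores defined above can be computed as in the following sketch: WA is the overall accuracy and UA is the macro-averaged per-class recall. The label arrays are placeholders added only to make the example runnable.

```python
import numpy as np

def weighted_average_recall(y_true, y_pred):
    """WA: correctly classified instances divided by the total number of instances."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_average_recall(y_true, y_pred):
    """UA: mean of the per-class recalls, so every emotion counts equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

y_true = [0, 0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 0, 2, 2]
print(weighted_average_recall(y_true, y_pred), unweighted_average_recall(y_true, y_pred))
```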
Discussion and conclusion
In this paper, a novel local dynamic pitch probability distribution feature is proposed in time-based segments to improve the performance of speech emotion recognition.
The experimental results suggest that the local dynamic information obtained by time-based segmentation and pitch probability distribution features is more effective for speech emotion recognition than the traditional global features. Different segmentation-related parameters are also examined in the experiments; the results show that too large or too small a granularity for the segmentation is adverse to the recognition of speech emotion. There are several emotional speech corpora in various languages being used in such studies. The common problem, however, is that their scales are relatively small with respect to those for Automatic Speech Recognition (ASR), which usually makes it difficult to train the classifier well. Thus, how to achieve ideal performance with small-scale training data is also an issue to be addressed. In addition, pitch is selected in this paper as one of the prosodic features that convey important emotion-related information for the histogram calculation. Other features can also be analyzed in a similar way, and we expect a better performance in our future work.
In this study, we validate the dynamic nature of emotional speech in terms of features. Actually, the classification model also influences the recognition performance to a large extent. Therefore, in the future, dynamic classification methods such as the Recurrent Neural Network (RNN) will be considered, since these sequential models are suitable for dynamic information. Hybrid hierarchical models can also be attempted. Moreover, deep learning methods, which are inspired by the perception mechanism of the human brain, can be introduced for feature selection instead of the traditional PCA method. Also, these features and approaches need to be evaluated on large-scale datasets so that the models can be trained sufficiently.
"year": 2018,
"sha1": "ea5e2265c1de81d8638ef3184921e7f8797e0e9e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1803.07738",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ea5e2265c1de81d8638ef3184921e7f8797e0e9e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235414731 | pes2o/s2orc | v3-fos-license | Targeting S100A4 with niclosamide attenuates inflammatory and profibrotic pathways in models of amyotrophic lateral sclerosis
Background An increasing number of studies evidences that amyotrophic lateral sclerosis (ALS) is characterized by extensive alterations in different cell types and in different regions besides the CNS. We previously reported the upregulation in ALS models of a gene called fibroblast-specific protein-1 or S100A4, recognized as a pro-inflammatory and profibrotic factor. Since inflammation and fibrosis are often mutual-sustaining events that contribute to establish a hostile environment for organ functions, the comprehension of the elements responsible for these interconnected pathways is crucial to disclose novel aspects involved in ALS pathology. Methods Here, we employed fibroblasts derived from ALS patients harboring the C9orf72 hexanucleotide repeat expansion and ALS patients with no mutations in known ALS-associated genes and we downregulated S100A4 using siRNA or the S100A4 transcriptional inhibitor niclosamide. Mice overexpressing human FUS were adopted to assess the effects of niclosamide in vivo on ALS pathology. Results We demonstrated that S100A4 underlies impaired autophagy and a profibrotic phenotype, which characterize ALS fibroblasts. Indeed, its inhibition reduces inflammatory, autophagic, and profibrotic pathways in ALS fibroblasts, and interferes with different markers known as pathogenic in the disease, such as mTOR, SQSTM1/p62, STAT3, α-SMA, and NF-κB. Importantly, niclosamide in vivo treatment of ALS-FUS mice reduces the expression of S100A4, α-SMA, and PDGFRβ in the spinal cord, as well as gliosis in central and peripheral nervous tissues, together with axonal impairment and displays beneficial effects on muscle atrophy, by promoting muscle regeneration and reducing fibrosis. Conclusion Our findings show that S100A4 has a role in ALS-related mechanisms, and that drugs such as niclosamide which are able to target inflammatory and fibrotic pathways could represent promising pharmacological tools for ALS. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-021-02184-1.
Background
Amyotrophic lateral sclerosis (ALS) is a late-onset neurodegenerative disease characterized by progressive loss of motor neurons in the brain and spinal cord. It is the most common form of motor neuron disease [6], with an onset occurring at approximately 60 years old and patients surviving on average 3 years from diagnosis. Most cases of ALS are sporadic (sALS), while 60% of familial ALS can be attributed to pathogenic variants in four genes: SOD1, TARDBP, FUS, and C9orf72 [37].
An increasing number of studies supports the concept that ALS is not a disease restricted to motor neuron pathology, but a disorder characterized by extensive involvement of the CNS, with documented causal roles exerted also by non-neuronal cells [2]. Moreover, alterations in non-nervous tissues, including skeletal muscles, adipose tissue, and even dermis have been extensively documented [42,57,73]. Fibroblasts from patients with ALS show indeed numerous abnormalities concerning autophagy, stress response [23,40,48,52], and the stability of RNA transcripts related to oxidative phosphorylation, protein synthesis, and inflammation [65]. These peripheral cells therefore share common pathogenic pathways with different CNS resident cells and represent an accessible model for studying molecular, cellular, and genetic parameters of the pathology [51].
Literature data and our previous work reported an evident upregulation of a gene called fibroblast-specific protein-1 or S100A4, in different models of ALS disease. S100A4 mRNA was found strongly increased in the lumbar spinal cord from pre-symptomatic and end-stage SOD1-G93A rats [59], in astrocytes from presymptomatic SOD1-G37R mice [64] and is among the limited number of mRNAs displaying significant changes in their stability in both C9orf72 and sALS fibroblasts [65]. Accordingly, we found that S100A4 protein is overexpressed mainly by astrocytes and microglia from SOD1-G93A rats and by fibroblasts from ALS patients carrying SOD1 mutations [59]. The functions of S100A4 can be diverse and tissue-dependent but it is recognized as a pro-inflammatory and profibrotic gene, even though in the CNS its role seems more controversial, as in acute models of neurodegeneration it has been associated with trophic effects [12]. In contrast with this beneficial role, we previously demonstrated that in activated primary microglia cells, the decrease of S100A4 obtained using its transcriptional inhibitor niclosamide is associated with a strong reduction of pro-inflammatory pathways [59]. Under this aspect, S100A4 promotes the release of cytokines at inflammatory sites and the remodeling of extracellular matrix components (ECM), and is a recognized inhibitor of autophagy, sustaining by this way inflammation and concomitant fibrotic events. Due to its properties, the protein has been implicated in the fibrosis of many organs, such as kidney, liver, lung, and heart [26]. In neurodegenerative conditions, including ALS, an interplay between fibrosis and inflammation in different organs and tissues is an emerging concept that relies on data showing alterations of the ECM components and remodeling enzymes, increase in fibrotic markers as TGF-β, as well as in profibrotic genes [11,16,33]. Hence, the comprehension of the elements responsible for the inflammatory and fibrotic pathways appears to be crucial to dissect novel aspects contributing to the pathology of ALS.
Niclosamide is an FDA-approved anti-helminthic drug with a considerable safety record [18,61,68]. In recent years, niclosamide has been repurposed for different diseases, and preclinical validation has shown promising efficacy against solid cancers, rheumatoid arthritis, and fibrotic conditions, owing to its potent anti-inflammatory and anti-fibrotic properties [7,27,61]. Niclosamide's effects rely on its ability to target several signaling pathways, including S100A4, mammalian target of rapamycin (mTOR), signal transducer and activator of transcription 3 (STAT3), and nuclear factor-κB (NF-κB) [21,49,58,70], which, interestingly, have been found to be dysregulated in ALS [32,59,69], suggesting its potential use to interfere with these altered mechanisms in the pathology.
In this study, we have analyzed the role of S100A4 in cellular pathways linked to human ALS-fibroblasts activation, such as mTOR, sequestosome 1 (SQSTM1/ p62), NF-κB, α-smooth muscle actin (α-SMA), and Ncadherin. Moreover, we have tested niclosamide in vitro in ALS fibroblasts and in vivo in a transgenic mouse model of ALS overexpressing human FUS (hFUS), recapitulating pathological features of the disease, to understand its potential efficacy in ameliorating ALS pathology.
Patients
The study was approved by the ethics committee of the Università Cattolica del Sacro Cuore (Rome, Italy) on 30 July 2012, Prot nr. P740/CE/2012. A written informed consent was signed by all of the subjects. The diagnosis of ALS was made according to the revised El Escorial/Airlie House criteria. Family history was thoroughly investigated. Patients with one or more affected relatives were diagnosed as familial ALS (fALS), while patients with no family history were classified as sporadic (sALS). Genetic analysis was performed on patients using massive parallel sequencing of genes associated with ALS, as previously described [24], and Repeat-Primed PCR was used to screen all patients for the C9orf72 expansion [50]. Three patients harboring the C9orf72 hexanucleotide repeat expansion (2 fALS and 1 sALS), one patient harboring the p.R521C FUS pathogenic variant (fALS), and two patients carrying the p.Q303H and the p.A382T variants in TARDBP (both sALS) were included in the study, as well as three ALS patients with no pathogenic variants in known ALS-associated genes (sALS) and five healthy controls.
Fibroblast primary cultures
All experiments were carried out in accordance to the approved guidelines of the ethics committee of the Catholic University. A written informed consent was obtained from patients and from healthy donors. Skin biopsies were performed using a 4-mm punch on the distal leg of the patients at NEMO Clinical Centre (Rome, Italy). Primary human dermal fibroblasts were isolated, as previously described [59]. Skin samples were dissected, transferred to a cell culture flask, and cultured in BIO-AMF-2 complete medium (Biological Industries) in a 37°C incubator. After the fibroblasts reached confluence, they were expanded up to 4th passage. Fibroblasts were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 20% fetal bovine serum (FBS, Euroclone) and 1% penicillin/streptomycin (Sigma-Aldrich) at 37°C, 5% CO 2 .
FUS transgenic mice
Adult Tg (Prnp-FUS) WT3Cshw/J mice expressing hemagglutinin-tagged human wild-type FUS (hFUS) were obtained from Jackson Laboratories. Animals were housed in our indoor animal facility at constant temperature (22 ± 1°C) and relative humidity (50%) with 12-h light cycle (light 7 am-7 pm). Mice were maintained in hemizygosity on the same C57BL/6 genetic background. Hemizygous FUS mice were backcrossed to obtain homozygous mice, used as experimental subjects. Food and water were freely available. When animals showed symptoms of paralysis, wet food was given daily into the cages for easy access to nutrition and hydration. Mice were genotyped by PCR analysis of tissue extracts from tail tips. Hemizygous FUS mice were identified using PCR primers: Fwr5′-AGGGCT ATTCCCAGCAGAG-3′, Rev5′-TGCTGCTGTTGTAC TGGTTCT-3′. Homozygous FUS mice were genotyped by qPCR using the following primers: Fwr5′-GCCAGA ACACAGGCTATGGAA-3′ and Rev5′-GTAAGACGAT TGGGAGCTCTG-5′. All animal experiments complied with the ARRIVE guidelines and were carried out in accordance with the European Guidelines for the use of animals in research (2010/63/EU) and the requirements of Italian laws (D.L. 26/2014). The ethical procedure was approved by the Italian Ministry of Health. All efforts were made to minimize animal suffering and the number of animals necessary to produce reliable results.
S100A4 silencing
Primary fibroblasts deriving from ALS patients with no pathogenic variants in known ALS-associated genes (n = 3, uvALS) and C9orf72 patients (n = 3, C9orf72) were seeded in 12-well plate at a density of 50,000 cells per well approximately 24 h before transfection and at the confluence of about 50%, the cells were transfected with two types of siRNAs for S100A4 (50 nM) (Thermo Fischer). A scrambled siRNA (100 nM) (Thermo Fischer) was used as a negative control. Transfection was performed using Metafectene (Biontex, Germany) following the manufacturer's instructions. After transfection for 48 or 72 h, cells were harvested for further experiments. The experiments were repeated in triplicate.
Niclosamide in vitro and in vivo treatment
The inhibitor of S100A4 niclosamide (2′,5-dichloro-4′nitrosalicylanilide, Sigma-Aldrich) was solubilized in dimethyl sulfoxide (DMSO) for in vitro experiments. Control cells were treated with the equal amount of DMSO. Niclosamide was administered to fibroblasts deriving from ALS patients with no pathogenic variants in known ALS-associated genes (n = 3, uvALS) and C9orf72 patients (n = 3, C9orf72) at the dose of 1-5-10 μM for 72 h. At these concentrations, niclosamide did not interfere with fibroblasts cell viability. The experiments were performed in triplicate.
For the in vivo experiments, niclosamide (20 mg/kg/ day, dissolved in Cremophor ®, Sigma-Aldrich) was administered daily from post-natal day 25 via intraperitoneal (i.p.) injections to hFUS mice (n = 6), when hFUS mice showed first signs of destabilized gait [34]. Control vehicle mice (n = 6) were treated with the appropriate volume of solvent solution. Survival was determined by the loss of righting reflex within 20 s after laying the mouse on its side [1].
Western blot
Ctrl (n = 5), C9orf72 (n = 3), FUS (n = 1), TARDBP (n = 2), and uvALS (n = 3) fibroblasts were lysed on plates in 2xLaemmli buffer and the lysates were boiled at 100°C for 5 min. Spinal cords, sciatic nerves, and gastrocnemius muscles of n = 4 animals per group were dissected [1] and lysed in homogenization buffer (50 mM Tris HCl pH 7.4, 250 mM NaCl, 1 mM EDTA, 5 mM MgCl 2 , 1% Triton X-100, 0.25% Na-deoxycholate, 0.1% SDS, protease inhibitor cocktail from Sigma-Aldrich). After 2 × 10″ sonication cycles, samples were incubated on ice and then centrifuged at 15,000×g for 20′ at 4°C. Supernatants were then quantified with Bradford protein assay (Bio-Rad) and resuspended in Laemmli Buffer before SDS-PAGE (Sigma-Aldrich). Proteins were separated on 10% SDS-PAGE and transferred to nitrocellulose membranes, followed by incubation with 5% skimmed milk for 1 h and with primary antibodies at 4°C overnight. HRP-conjugated secondary antibodies (1:2,500, Jackson ImmunoResearch) were applied at RT for 1 h. ECL solution (Roche) was used for chemiluminescent detection. GAPDH was used as a control for equal loading. Following densitometry-based quantification and analysis using ImageJ software, the relative density of each identified protein was calculated.
Immunofluorescence and confocal analysis
FUS mice and age-matched controls were euthanized by CO2 and decapitated. Spinal cords were immediately dissected and post-fixed in 4% paraformaldehyde (PFA) for 12 h, incubated in 30% sucrose in PBS solution for 24 h at 4°C, and then cut into 30-μm-thick slices with a freezing cryostat. Lumbar spinal cord slices from n = 4 animals per group were blocked for 1 h in 10% normal donkey serum (NDS) in PBS, 0.3% Triton X-100, and then incubated 3 days at 4°C with primary antibodies diluted in 2% NDS in PBS, 0.3% Triton X-100, and then for 3 h at room temperature with appropriate secondary antibody, diluted in the same solution. After two rinses, 10 min each in PBS, nuclei were stained with 1 μg/ml DAPI (Sigma-Aldrich) for 10 min.
Whole mount sciatic nerves from n = 4 animals per group were post-fixed in 4% PFA for 24 h, incubated with PBS at 4°C for 48 h, and blocked with blocking buffer of 10% NDS in PBS, 0.3% Triton X-100 for 6 h at RT. Nerves were then incubated 3 days at 4°C with primary antibodies diluted in 2% NDS in PBS, 0.3% Triton X-100, and then for 3 h at RT with appropriate secondary antibody, diluted in the same solution. After two rinses, 10 min each in PBS, nuclei were stained with 1 μg/ml DAPI for 10 min. Images were visualized by Nikon Eclipse TE200 epifluorescence microscope (Nikon, Florence, Italy) connected to a CCD camera. Images were captured under constant exposure time, gain, and offset. After creating a region of interest, background was subtracted, and the average pixel intensity was determined. All image quantifications were done using ImageJ software (NIH, Bethesda, USA).
RT-PCR and qPCR
Total RNA from C9orf72 (n = 3) and uvALS (n = 3) fibroblasts and from gastrocnemius skeletal muscle tissue (n = 4 animals per group) was extracted with Trizol (Invitrogen) using a standard protocol, and treated with RNAse-free DNase (Promega), according to the manufacturer's instruction. RNA was retro-transcribed using random primers with Im-Prom II reverse transcription system (Promega), following the manufacturer's indication. qPCR was performed with iTaq Universal SYBR Green Supermix (Bio-Rad) using 20 ng cDNA and 350 nM of specific primers, following manufacturer's indications. qPCR reactions were performed using the CFX Connect Real-Time PCR Detection System (Bio-Rad), and Cq values were determined from the system software using 'single threshold' mode. Relative expression values were normalized to the housekeeping genes GAPDH or β-actin for human fibroblasts and murine muscle tissues, respectively. The primers used for human fibroblasts were GAPDH FOR: TCTTTTGCGTCGCCAG CCGAG, GAPDH REV: TGACCAGGCGCCCAATAC GAC; S100A4 FOR: GTACTCGGGCAAAGAGGGTG, S100A4 REV: GCTTCATCTGTCCTTTTCCCC; α-SMA
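The relative-expression normalization described above is commonly carried out with the 2^-ΔΔCq (Livak) method; the sketch below assumes that method, since the exact calculation is not spelled out in the text, and the Cq values used in the example are made up for illustration only.

```python
def relative_expression(cq_target, cq_housekeeping, cq_target_ctrl, cq_housekeeping_ctrl):
    """2^-ΔΔCq relative expression of a target gene vs. a control condition.

    ΔCq = Cq(target) - Cq(housekeeping); ΔΔCq = ΔCq(sample) - ΔCq(control).
    """
    delta_sample = cq_target - cq_housekeeping
    delta_control = cq_target_ctrl - cq_housekeeping_ctrl
    return 2.0 ** -(delta_sample - delta_control)

# hypothetical Cq values: S100A4 vs. GAPDH in treated and control fibroblasts
fold_change = relative_expression(24.1, 18.0, 22.5, 18.2)
print(fold_change)   # a value below 1 would indicate lower S100A4 expression after treatment
```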
Statistics
Data are reported as mean ± standard error of the mean (SEM). Statistical differences were verified by two-tailed student's t test if the normality test was passed, or by the Mann-Whitney rank sum test, if the normality test failed. One-way analysis of variance (ANOVA) followed by post hoc Tukey's was used for multiple comparisons. The software package GraphPad Prism 6.0 (GraphPad Software, San Diego, CA, USA) was used for all statistical analysis with differences considered significant for p < 0.05. Animals were randomly used for experiments. The sample sizes were chosen on the basis of similar experiments reported in our previous papers and papers published by other groups [1,28,48,55,59].
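The statistical workflow described above (normality check, two-tailed t test or Mann-Whitney fallback, and one-way ANOVA for multiple groups) was run in GraphPad Prism; the SciPy sketch below is only a rough equivalent under that assumption, with placeholder data and a 0.05 normality threshold chosen by us.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(1.0, 0.2, size=6)
group_b = rng.normal(1.4, 0.2, size=6)
group_c = rng.normal(1.8, 0.2, size=6)

# two-group comparison: t test if both samples pass a normality test, else Mann-Whitney
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)
if p_norm_a > 0.05 and p_norm_b > 0.05:
    _, pvalue = stats.ttest_ind(group_a, group_b)                 # two-tailed by default
else:
    _, pvalue = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(pvalue)

# multiple groups: one-way ANOVA (a post hoc test would follow a significant result)
_, anova_p = stats.f_oneway(group_a, group_b, group_c)
print(anova_p)
```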
Results
ALS fibroblasts show aberrant levels of S100A4, mTOR, SQSTM1/p62, and NF-κB
In a previous study, we demonstrated that S100A4 was increased in fibroblasts from patients with different SOD1 pathogenic variants [59]. To investigate whether an augmented expression of S100A4 is a common trait of fibroblasts derived from patients with ALS, we have analyzed the protein expression in primary fibroblasts from ALS patients without known variants in ALS-associated genes, and from patients carrying pathogenic C9orf72 expansions, the most common cause of familial and sporadic ALS found to date. As shown, primary fibroblasts derived from both groups of patients display a strong increase in S100A4 protein levels, compared with those obtained from healthy subjects (Fig. 1). Furthermore, S100A4 shows a trend to increase also in a fibroblast line derived from a patient carrying the FUS p.R521C pathogenic variant (Additional file 1: Figure S1a) and from patients with the TARDBP p.Q303H and p.A382T mutations (Additional file 1: Figure S1b), suggesting that S100A4 is upregulated in fibroblasts regardless of the ALS condition and gene mutation carried.
Since the overexpression of S100A4 is correlated with autophagy impairment and inflammation, we also analyzed key markers related to these pathways in ALS fibroblasts. Cells from ALS patients show increased mTOR expression and an accumulation of SQSTM1/ p62, compared to cells from healthy controls (Fig. 1). Moreover, although fibroblasts from ALS patients with no known ALS variants (uvALS) do not show significant differences in both total and p-NF-κB levels compared to controls, fibroblasts carrying the C9orf72 expansions display increased total and activated NF-κB (Fig. 1). These findings indicate that ALS-derived primary fibroblasts show features of autophagic and inflammatory pathway alterations, which may suggest an activated phenotype.
S100A4 silencing inhibits activation markers in ALS fibroblasts
In order to directly assess the contribution of S100A4 in supporting the autophagic and inflammatory dysregulated pathways shown by ALS fibroblasts, we silenced S100A4 expression in patient-derived cells. We found that a 60% downregulation of S100A4 is sufficient to strongly decrease the levels of mTOR and SQSTM1/p62 proteins in fibroblasts from ALS patients (Fig. 2a, b), as well as the expression of p-NF-κB in C9orf72 cells (Fig. 2b), with respect to scramble silenced cells.
The transformation of fibroblasts into activated cells as profibrotic myofibroblasts is characterized by the upregulation of several distinctive markers, including S100A4, α-SMA, N-cadherin, and by the activation of the STAT3 pathway. To explore whether the inhibition of S100A4 may affect the expression of these markers, we adopted the conditions of S100A4 silencing described before, and tested the levels of these proteins. As shown, S100A4 silencing leads to a decreased expression of STAT3, N-cadherin, and α-SMA, in ALS fibroblasts (Fig. 3a, b) compared to scramble silenced cells. These findings thus suggest that S100A4 is directly involved in aberrant pathways related to autophagy and inflammation and contributes to the phenotypic transition of ALS fibroblasts toward a profibrotic and activated state.
Niclosamide decreases S100A4, mTOR, and profibrotic markers in ALS fibroblasts
Previous studies reported that niclosamide, a pleiotropic drug recognized as a transcriptional inhibitor of S100A4, can induce canonical autophagy via feedback downregulation of mTOR [30] and can exert a potent inhibitory activity on STAT3 [9], in line with its well-recognized anti-fibrotic action [7]. Thus, we tested the effects of different doses of niclosamide on ALS fibroblasts to evaluate its ability to reverse the aberrant pathways observed. Niclosamide treatment decreases S100A4, α-SMA, N-cadherin, p-mTOR, and p-STAT3 levels in fibroblasts from ALS patients (Fig. 4a, b). Moreover, in both patient-derived cells, niclosamide decreases mRNA levels of S100A4 and α-SMA (Fig. 4c, d), without significantly affecting N-cadherin transcript (data not shown). Overall, these data show that niclosamide reverses several parameters linked to inflammation, impaired autophagy, fibrosis and activation of ALS fibroblasts.
Niclosamide reduces ALS pathology in transgenic mice carrying hFUS mutation
It is established that S100A4 is upregulated in mutant SOD1 transgenic rat and mouse models of ALS during the disease course [59,64]. To understand whether the increase of S100A4 is a common trait in rodent models originating from different ALS-associated genes, here we analyzed its protein expression in wild-type human FUS-overexpressing mice. The hFUS model recapitulates all key features of ALS such as motor neuron degeneration, muscle atrophy, physiological decline, cachexia, and neuroinflammation [28]. Notably, S100A4 is increased in the lumbar spinal cord (Additional file 1: Figure S2a and b) of end-stage hFUS mice. This result indicates that the protein expression is commonly deregulated in different in vivo models of ALS, and prompted us to test the effects of S100A4 inhibition on disease phenotypes. To this aim, we treated hFUS mice with niclosamide at the dose of 20 mg/kg [56,72], starting from the early symptom onset, and analyzed the efficacy of the compound in restoring several aberrant parameters occurring in these mice (Fig. 5a). At the employed dose, niclosamide slightly but significantly increases the disease duration, compared to vehicle-treated mice (Fig. 5b). Further, spinal cord pathology is improved, as indicated by the decrease in the levels of S100A4, as well as of GFAP and α-SMA, in niclosamide-treated hFUS mice compared to vehicle-treated mice (Fig. 5c). As shown in Fig. 5d, while spinal cord sections from non-Tg mice show PDGFRβ-positive cells (indicating cells of mesenchymal origin) in the meninges and around blood vessels, in hFUS mice PDGFRβ staining infiltrates the white matter parenchyma, suggesting the presence of fibrotic regions. Interestingly, PDGFRβ-positive infiltrates in the white matter are reduced after niclosamide treatment (Fig. 5d). Next, since peripheral nerves are strongly affected in the hFUS model [28], we investigated the effects of niclosamide on sciatic nerves. As observed, the sciatic nerve of hFUS mice shows an axonal impairment, as demonstrated by the decrease in β-III tubulin-positive fibers and the concomitant upregulation of GFAP, in accordance with a Wallerian degeneration, evidencing a disorganization of Schwann cells compared with sciatic nerves from control littermate mice (non-Tg) (Fig. 5e, f). Niclosamide treatment partially restores the levels of β-III tubulin and GFAP and, in the niclosamide group, both β-III tubulin and GFAP expression appear flatter and less frayed with respect to the vehicle group, suggesting that the treatment ameliorates axonal impairment in hFUS mice sciatic nerves (Fig. 5e, f).
Niclosamide ameliorates muscle atrophy and fibrosis in hFUS mice
Finally, we explored the effects of niclosamide treatment on hFUS skeletal muscle pathology. At first, we found that hFUS mice show a strong increase in S100A4 protein in the gastrocnemius muscle compared to healthy mice and that niclosamide strongly inhibits its level (Fig. 6a). We next assessed the expression of MyoG, a key myogenic transcription factor and marker of muscle differentiation [14], and we found that, compared to vehicle-treated mice, niclosamide administration increases MyoG expression (Fig. 6a), suggesting an improved myogenic differentiation. To support the observations that niclosamide ameliorates muscle atrophy, we performed a qRT-PCR gene expression analysis of the myogenic factors MyoD, Pax7, and myosin heavy chain 3 (MHC) in addition to MyoG [14,66], demonstrating a strong increase of all these markers in niclosamide-treated hFUS mice (Fig. 6b). Importantly, muscles from hFUS mice show increased levels of p-STAT3, p-mTOR, and SQSTM1/p62 (Fig. 6c), which suggests that pathways involved in skeletal muscle atrophy and fibrosis [10,32] are activated in this tissue; further, profibrotic markers, such as PDGFRβ and α-SMA, are also upregulated in hFUS compared to non-Tg mice (Fig. 6d). Remarkably, niclosamide decreases the expression of all the aforementioned molecules (Fig. 6c, d), indicating that, besides modulating S100A4, it targets multiple signaling pathways (Fig. 6e).
Discussion
In this work, we provide evidence for the contribution of S100A4 to ALS pathogenesis and for the potential repurposing of niclosamide for preclinical trials in the disease. Indeed, we have demonstrated here that S100A4 is upregulated in fibroblasts derived from different ALS patients as well as in the ALS model represented by hFUS mice. These data are consistent with our previous results, showing an increase of S100A4 in the microglia and astrocytes from a SOD1 rat model in vivo and in mutant SOD1 fibroblasts in vitro [59], and suggest that an increased level of S100A4 is a common pathological trait of ALS, shared by different experimental models and disease-associated gene variants. Remarkably, in a recent paper, S100A4 mRNA was identified together with 333 other transcripts, out of 22,977 annotated transcripts, among those whose stability is altered in C9orf72 ALS and sALS fibroblasts [65], sustaining our hypothesis that S100A4 dysregulation is a pathological hallmark of the disease shared by different cell types independently from their genetic variants and thus possibly reflecting a general reactive cellular state. Furthermore, recent studies show that S100A4 is one of the 88 upregulated genes of the pan-neurodegenerative signature obtained from the meta-analysis of human CNS transcriptomic datasets from Alzheimer's and Lewy body diseases and ALS-frontotemporal dementia patients, suggesting that S100A4 represents a common substrate driving neurodegeneration [38,39]. S100A4 belongs to the S100 superfamily, constituted by small proteins that are generally secreted by cells under stressful conditions, and that are undergoing extensive research as biomarkers in different fields, such as oncology, cardiology, fibrosis, and inflammation as well as brain injury pathologies [15,63]. Within the limitations of our analysis, which is mainly based on a small number of patients for each ALS subgroup, our results, showing S100A4 upregulation as a common hallmark in ALS fibroblasts, make S100A4 a potential candidate to be tested as a biomarker in the disease. Recently, primary skin fibroblasts derived from patients have been extensively used as a model to study ALS because they share pathological alterations with neural cells, concerning stress responses, autophagy, inflammation, and RNA processing [51]. Under this aspect, they are useful tools to explore new pathogenic mechanisms and perform preliminary assessments of novel potential treatments. Since they display overt limitations, in further experiments patient-derived models, including iPSC-derived neurons and glia, as well as transdifferentiated somatic cells [35,60], should be necessary to examine the specific role of S100A4 in the different ALS cell phenotypes.

Fig. 4c, d (legend fragment): Values are mean ± SEM, n = 3 individuals, experiments repeated in triplicate; two-tailed t test. t value: 9.961, degrees of freedom: 4 (S100A4); t value: 3.331, degrees of freedom: 4 (α-SMA). *p < 0.05 and **p < 0.001 vs. untreated (Ctrl) cells. d mRNA levels of S100A4 and α-SMA in C9orf72 fibroblasts treated with 10 μM niclosamide for 72 h. Data are normalized to GAPDH. Values are mean ± SEM, n = 3 individuals, experiments repeated in triplicate; two-tailed t test with Welch's correction. t value: 14.52, degrees of freedom: 2.094 (S100A4); two-tailed t test. t value: 14.16, degrees of freedom: 4 (α-SMA). *p < 0.01 and **p < 0.001 vs. untreated (Ctrl) cells.
Moreover, fibroblasts represent a cell type that can become resident in the nervous system during inflammation [45], as well as in skeletal muscle. Indeed, activated fibroblasts (deriving from endothelial cells, pericytes, immune cells) can be accounted as cellular players in the development of fibrosis and inflammation during several neurodegenerative conditions, including ALS [5,11,43,71]. Thus, the identification of the molecules and pathways involved in the transition of fibroblasts from a quiescent to an activated phenotype might unveil pathogenic mechanisms occurring in CNS and peripheral tissues. Fibroblast activation could represent a response to counteract and repair damage, that eventually evolves into a detrimental process as disease accelerates, leading to a non-permissive environment to cell regeneration. As recently reported [13], the robust fibrotic response to both injury and inflammation may be a common pathogenic mechanism across many different neurological disorders that should stimulate future research.
Extensive studies have shown that the transformation into activated fibroblasts is an extremely complex process involving numerous signaling pathways and that depends on the physiological or pathological status of the cells and on their specific cellular contexts [44]. Among these, recent studies indicate that mTOR and the substrate of autophagy SQSTM1/p62 contribute to mesenchymal transition and that autophagy enhancers can attenuate fibroblast activation [29,44,46]. Moreover, the NF-κB pathway also plays an important role in inducing a myofibroblast-like phenotype, especially under inflammatory conditions, elicited for instance by TNF-α or IL-6 [19]. We have demonstrated here that high levels of S100A4 in ALS-fibroblasts correlate with signs of impaired autophagy and inflammation, as suggested by high expression of mTOR, SQSTM1/p62, and NF-κB. It is well known that an increase in S100A4 characterizes profibrotic activated fibroblasts, as those induced by TGFβ [67]. Therefore, the dysregulation of these markers points to an activated pro-inflammatory and fibrotic phenotype of fibroblasts derived from patients with ALS compared to cells from healthy donors.

Fig. 5 Niclosamide ameliorates pathology in hFUS symptomatic mice. a Schematic illustration of niclosamide treatment in hFUS mice. Male mice were intraperitoneally injected daily with 20 mg/kg niclosamide from postnatal day (PND) 25 until death and spinal cord, sciatic nerves and skeletal muscles tissues were then analysed. b Niclosamide-treated hFUS mice (hFUS Nic) show a significant difference in the disease duration with respect to vehicle-treated hFUS mice (hFUS veh). Data are presented as means ± SEM. n = 6 mice/group. Two-tailed t test. t value: 4.719, degrees of freedom: 10. ***p < 0.001 vs. vehicle-treated hFUS mice. c Protein lysates from lumbar spinal cord of non-transgenic (Non-Tg) (~40 days), vehicle (hFUS veh) and niclosamide-treated hFUS mice (hFUS nic) at end stage of the disease were assayed by western blot with anti-GFAP, anti-S100A4, and anti-α-SMA. Data represent mean ± SEM of n = 4 mice/group. One-way ANOVA with Tukey correction between Non-Tg, hFUS veh and hFUS nic. F value (DFn, DFd): (2, 9) = 11.26 (GFAP), (2, 9) = 11.73 (S100A4), (2, 9) = 5.721 (α-SMA). *p < 0.05 and **p < 0.01 vs. Non-Tg mice or # p < 0.05 and ## p < 0.01 vs. hFUS veh mice. d Representative fluorescence images of PDGFRβ (green) in the lumbar spinal cord of Non-Tg, hFUS veh, and hFUS nic mice at end stage of the disease. Scale bars: 50 μm. Immunofluorescence intensities were calculated by densitometric analyses. Data represent mean ± SEM. n = 4 mice/group, four sections per animal. One-way ANOVA with Tukey correction between Non-Tg, hFUS veh, and hFUS nic. F value (DFn, DFd): (2, 9) = 15.75. ***p < 0.001 vs. Non-Tg mice or # p < 0.05 vs. hFUS veh mice. e Representative fluorescence images of β-III tubulin (blue) and GFAP (purple) in the sciatic nerves of Non-Tg, hFUS veh, and hFUS nic mice at end stage of the disease. Scale bars: 50 μm. Immunofluorescence intensities were calculated by densitometric analyses. Data represent mean ± SEM. n = 4 mice/group, four sections per animal. One-way ANOVA with Tukey correction between Non-Tg, hFUS veh, and hFUS nic. F value (DFn, DFd): (2, 9) = 18.48 (β-III tubulin), (2, 9) = 17.94 (GFAP). *p < 0.05; **p < 0.01; and ***p < 0.001 vs. Non-Tg mice or # p < 0.05 and ## p < 0.01 vs. hFUS veh mice. f Protein lysates from sciatic nerves of Non-Tg, hFUS veh, and hFUS Nic mice at end stage of the disease were assayed by western blot with anti-GFAP. GAPDH served as loading control. Relative densitometric values are reported on the right. Data represent mean ± SEM of n = 4 mice/group. One-way ANOVA with Tukey correction between Non-Tg, hFUS veh, and hFUS nic. F value (DFn, DFd): (2, 9) = 12.72. *p < 0.05 vs. Non-Tg mice or ## p < 0.01 vs. hFUS veh mice.
To evaluate the effects of S100A4 downregulation by a pharmacological approach, we employed niclosamide, a well-known S100A4 transcriptional inhibitor, which is also recognized as a multi-target drug that promotes autophagy and inhibits STAT3 and NF-κB and as a potent blocker of fibrotic signaling [4]. Our results demonstrate that the drug is able to reduce inflammatory/autophagic/ fibrotic pathways in ALS fibroblasts, thereby interfering with different mechanisms characterized as pathogenic in ALS. Most interestingly, our in vivo results demonstrate that niclosamide relieves ALS-related pathological features in spinal cord, sciatic nerve, and skeletal muscle of hFUS mice. Central and peripheral nerve pathology with inflammation and fibrosis is a major harmful mechanism contributing to degeneration [25,74]. In ALS, neuronal regeneration and axonal growth may be limited by a hostile environment characterized by extensive gliosis and aberrant remodeling of ECM components [11]. Accordingly, gene ontology analysis of differently expressed genes in the spinal cord of hFUS mice show ECM matrix disorganization and increased expression of proteoglycans [47,53]. Treatment with niclosamide in vivo clearly reduces the levels of S100A4, α-SMA, and PDGFRβ in the spinal cord, as well as inflammation in central and peripheral nervous tissues, together with axonal impairment. These data are consistent with the in vitro results, demonstrating the anti-inflammatory and anti-fibrotic properties of niclosamide toward activated CNS glial cells, such as microglia and astrocytes [36,59], and toward ALS-activated fibroblasts. Overall, these results show that niclosamide can control the excessive gliogenic/fibrotic environment and enhance neural repair in vivo in the hFUS model of ALS. Interestingly, skeletal muscles of hFUS mice display a strong increase in S100A4 expression, accompanied by augmented levels of α-SMA, PDGFRβ, and STAT3, all proteins that have been widely demonstrated to be involved in muscle fibrosis and atrophy in both mutant SOD1 mouse models and in ALS patients [17,32]. We have shown here that niclosamide displays positive effects also on muscle atrophy by promoting muscle regeneration and inhibiting muscle fibrosis, indicating that the targeting of multiple pathways in addition to S100A4 such as mTOR, STAT3, and NF-kB, can affect disease also in muscle tissue.
Our findings deserve further research to validate this new mechanism of action of niclosamide in preclinical experiments, performing dose-response treatments and testing the drug in additional ALS models, besides FUS mice, recapitulating key pathologies and biological processes seen in sporadic ALS [3], as, for instance, the typical hallmark of TDP-43 mislocalization [62].
In conclusion, our findings show that S100A4 plays an important role in ALS-related mechanisms, and suggest that the use of a pleiotropic compound such as niclosamide, capable of affecting inflammatory, autophagic, and profibrotic mechanisms in several tissues of an ALS model, can meet the requirements of a possible treatment for ALS, that necessarily must be multifunctional and multitarget. | 2021-06-13T13:26:54.458Z | 2021-06-12T00:00:00.000 | {
"year": 2021,
"sha1": "4297f39dc213efb7833dcfe02b50219f3f59a35b",
"oa_license": "CCBY",
"oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/s12974-021-02184-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "900fd239ad152021f130dc689ed049de389e8930",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55667135 | pes2o/s2orc | v3-fos-license | The Effects of Calcium Chloride and Ascorbic Acid Treatment on Ready-to-use Carrot Shreds
This study aimed to evaluate the effect of calcium and ascorbic acid treatments on the quality of carrot shreds during storage. Towards this aim, carrot shreds were dipped into a 5 L solution of 2 g/L ascorbic acid containing 1%, 3%, or 5% CaCl2 (Ca + AA) for 3 min at room temperature (~20 °C). In case of the control group (control, C), samples were dipped into distilled water for the same time interval. Subsequent to treatment, carrot shreds were stored in a cold room at 5 ± 1 °C, 85-90% RH for a period of 11 days. Color values (L*, a* b*), whiteness index, saturation index, hue angle values, visual quality, firmness scores, bitterness scores, total soluble solids (TSS) and electrolyte leakage measurements were conducted at various sampling dates. The results from this study demonstrated that brightness of carrot shreds was augmented by calcium and ascorbic acid treatments irrespective of the dosage used. Whiteness index values for the 5% Ca + AA treated samples were observed to be low whereas saturation indices of 5% Ca + AA and 3% Ca + AA treated carrot shreds were higher as compared to other treatments. This study concludes that treatment with calcium at high doses improves the color quality of carrot shreds under storage conditions. Visual quality and firmness of carrot shreds was maintained till day 4 of storage, thereafter it declined as compared to the control group. Bitterness of carrot shreds was also observed to increase upon treatment with calcium and ascorbic acid. However, calcium treatment of the test carrot shreds was seen to decrease weight loss and cause an increase in the TSS under storage conditions.
Introduction
Fresh-cut vegetables are vegetables that are available in a ready-to-use format. They are minimally-processed plant products that are peeled, trimmed and/or cut prior to being packaged in a way that retains freshness whilst being convenient to the end user. Lettuce and pre-prepared salads are the most common types of fresh-cut vegetables available commercially, although fresh-cut carrots, tomatoes, broccoli, cauliflower, and cabbage can also be found [1].
In recent years, Turkey has witnessed an increase in the demand and availability of fresh-cut vegetables as well as fruits; the examples include pre-washed and trimmed spinach, sliced carrots, leeks, apples, etc.
The basic premise for obtaining high quality fresh-cut vegetables is minimal processing such that the produce retains fresh-like texture, color, flavor, and safe-to-use quality. However, injuries that occur during processes such as peeling, slicing, cutting, shredding, etc. result in stress at the tissue, cellular, subcellular and biochemical levels, leading to several undesirable changes in the vegetables during the course of storage and transportation [2].
In the case of fresh-cut carrots, the most significant problem faced is surface whitening. It is a phenomenon that arises as a result of dehydration and lignin synthesis. Several treatments, such as application of edible coatings [3], treatment with citric acid [4,5] or ascorbic acid [6], are available to prevent the whitening.
Results from previous studies have indicated that treatment of carrot shreds with ascorbic acid is successful in preventing the appearance of surface whitening. However, as this treatment results in softening of the shreds, the application of a firming agent has been suggested for maintaining the crispness [6]. Calcium treatments that use either calcium chloride (CaCl2) or calcium lactate have been shown to be effective in maintaining the firmness of several fresh-cut fruits and vegetables during storage [7]. It is also known that treatment with Ca2+ has the potential to maintain the textural qualities of carrot for as long as up to 10 days of storage [2]. As softening and other undesirable textural changes in fresh-cut products are related to their tissue calcium levels, application of calcium salts (calcium-chloride, -carbonate, -lactate, -propionate, -pectate, etc.) to fruits and vegetables, such as pears, strawberries, kiwis, shredded carrot, honeydew melon discs, nectarines, peaches and melons, helps in retaining tissue firmness [8]. Calcium, in a 1% CaCl2 formulation, and ascorbic acid dips have been employed as firming agents that aid in extending the postharvest shelf life of sliced pears and strawberries that have been stored in a controlled atmosphere [9].
The objective of this study was to determine the effect of calcium chloride and ascorbic acid treatments on the quality parameters of shredded carrots.
Plant Material and Sample Preparation
Carrots were obtained from the Kocaeli Wholesale Distribution Center. They were transported immediately to the laboratory, thoroughly washed, peeled, and trimmed of tap root and stem plate prior to preparation. A grater was used to prepare carrot shreds (about 5 mm wide, 40 mm long, and 2 mm thick).
Processed carrots (100 g for each replicate) were dipped into the following calcium and ascorbic acid solutions:
(1) 1% Ca + AA: 5 L solution of 1% CaCl2 containing 2 g/L ascorbic acid for 3 min.
(2) 3% Ca + AA: 5 L solution of 3% CaCl2 containing 2 g/L ascorbic acid for 3 min.
(3) 5% Ca + AA: 5 L solution of 5% CaCl2 containing 2 g/L ascorbic acid for 3 min.
(4) C: The control group samples were dipped in distilled water for 3 min.
All treatments were carried out at room temperature (~20 °C). Treated carrot shreds were dried by first using a salad spinner (2 min, room temperature) so as to remove excessive surface solution and then at room temperature (15 min).
Packaging and Storage Condition
The samples of shredded carrots (100 g) were placed in covered plastic boxes 110 × 110 × 50 mm in size. Triplicates of each treatment were stored for 11 days at 5 ± 1 °C with relative humidity of 85-90%.
Color Measurements
Color measurements (L*, a* and b* values) were performed using a Chroma Meter CR-400 (Konica Minolta Inc., Osaka, Japan) with illuminant D65 and an 8 mm aperture. The instrument was calibrated with a white reference tile (L* = 97.52, a* = -5.06, b* = 3.57) prior to measurements. The L* (0 = black, 100 = white), a* (+red, -green) and b* (+yellow, -blue) color coordinates were determined as per the CIELAB color space system.
Whiteness index [WI, Eq. (1)], saturation index [SI, Eq. (2)] and hue angle [H, Eq. (3)] were calculated from the L*, a* and b* values as described below; these values were used to compare the color changes of the test samples with that of the control (fresh-cut carrot shreds) [10].
WI = 100 - [(100 - L*)^2 + (a*)^2 + (b*)^2]^(1/2) (1)
SI = [(a*)^2 + (b*)^2]^(1/2) (2)
H = tan^-1(b*/a*) (3)
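A small sketch of Eqs. (1)-(3) as reconstructed above is given below; the numerical L*, a*, b* reading in the example is illustrative only, and atan2 is used simply as a quadrant-safe form of tan^-1(b*/a*).

```python
import math

def color_indices(L, a, b):
    """Whiteness index, saturation index (chroma) and hue angle from CIELAB values."""
    wi = 100.0 - math.sqrt((100.0 - L) ** 2 + a ** 2 + b ** 2)   # Eq. (1)
    si = math.sqrt(a ** 2 + b ** 2)                              # Eq. (2)
    hue = math.degrees(math.atan2(b, a))                         # Eq. (3), in degrees
    return wi, si, hue

# illustrative reading for a carrot shred sample
print(color_indices(L=57.0, a=32.0, b=45.0))
```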
Visual Quality, Firmness and Bitterness Assessments
Visual quality was evaluated by grading the freshness, appearance, color, uniformity and brightness of the test samples on a five-point Likert scale: 5, excellent; 4, good quality; 3, fair quality; 2, poor quality; 1, extremely poor quality.
Firmness of the carrot shreds was scored as a subjective variable: the perceived hardness or softness when shreds were pressed between two fingers was graded on a five-point Likert scale: 5, very firm; 4, firm; 3, partially firm/soft; 2, soft; 1, very soft (not usable).
The judging panel for sensory evaluation was composed of nine food-science students enrolled at the university. All the students had prior classroom training and experience in the sensory evaluation of food items.
Electrolyte Leakage Measurement
Electrolyte leakage (EL) was measured in the carrot shreds. Shreds were washed with distilled water and then immersed in it, and the conductivity of the immersion solution was measured after 2 h. The total electrolyte conductivity of the carrot shreds was measured after they had been frozen and thawed. EL was expressed as the conductivity after 2 h of immersion as a percentage of the total conductivity [11].
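Expressed as a formula (with C_2h and C_total introduced here as symbols for the conductivity after 2 h of immersion and the total conductivity after freezing and thawing, respectively):

```latex
EL\,(\%) = \frac{C_{2\mathrm{h}}}{C_{\mathrm{total}}} \times 100
```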
Loss of Weight
The weight of the triplicate samples was recorded on the day of harvest and on the designated sampling dates. The loss in weight was calculated using the following formula: weight loss (%) = ((Wi - Ws)/Wi) × 100, where Wi = initial weight and Ws = weight at the sampling date.
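As a minimal illustration, the calculation can be coded as follows; the function name and the example figures are ours and are not taken from the study's data.

```python
def weight_loss_percent(initial_weight_g: float, sampled_weight_g: float) -> float:
    """Percent weight loss relative to the initial (day-of-harvest) weight."""
    return (initial_weight_g - sampled_weight_g) / initial_weight_g * 100.0

# Example: a 100 g sample weighing 99.82 g at a sampling date has lost 0.18%.
print(round(weight_loss_percent(100.0, 99.82), 2))
```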
Total Soluble Solids (TSS)
For each of the test replicates, TSS was determined in two parallel measurements using an Atago DR-A1 digital refractometer (Atago Co. Ltd., Japan). Measurements were made at 20 °C and the results were expressed as percent values.
Statistical Analysis
Experiments were conducted in a completely randomized design with a minimum of three replications per treatment per sampling date. The resulting data were analyzed by ANOVA, and differences between mean values were determined using Duncan's multiple range test. Differences were regarded as significant at P < 0.05 or P < 0.001.
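For readers reproducing the analysis, a minimal Python sketch of the one-way ANOVA step is shown below, using placeholder scores rather than the study's data; SciPy does not provide Duncan's multiple range test, so the post-hoc comparison is only indicated.

```python
from scipy import stats

# Placeholder firmness scores (three replicates per treatment, one sampling date).
control = [4.7, 4.5, 4.6]
ca1_aa  = [4.3, 4.4, 4.2]   # 1% Ca + AA
ca3_aa  = [3.8, 3.9, 3.7]   # 3% Ca + AA
ca5_aa  = [3.5, 3.6, 3.4]   # 5% Ca + AA

f_stat, p_value = stats.f_oneway(control, ca1_aa, ca3_aa, ca5_aa)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    # Duncan's multiple range test (used in the paper) would follow here to
    # separate the treatment means; it is not shipped with SciPy.
    print("Treatment means differ significantly (P < 0.05)")
```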
L* values and Whiteness Index
L* values of treated carrot shreds increased irrespective of the type of treatment applied. The highest value was observed on day 4 for shreds treated with 5% Ca + AA (59.757), followed by those treated with 3% Ca + AA (57.790), 1% Ca + AA (57.003) and C (control, 56.287). The difference between the treatments was statistically significant (P < 0.05). After day 4, L* values continued to change during storage: an initial decrease was observed (day 8), after which L* values started increasing again (Fig. 1).
The whiteness index of carrot shreds stored after treatment with 3% Ca + AA and 5% Ca + AA was significantly lower than that of shreds treated with 1% Ca + AA as well as the control samples (P < 0.05). Previous studies have determined that treatment with ascorbic acid has a positive effect on the brightness of carrot shreds [6] as well as carrot cubes [10]. Congruent with these findings, the present study observed that during the first four days of storage, treatments combining ascorbic acid with CaCl2 (at all doses) increased the brightness of carrot shreds compared to the control group. Although L* values decreased by day 8, the brightness of carrot shreds was maintained when treated with 3% Ca + AA as well as 5% Ca + AA. Therefore, it can be concluded that treating carrot shreds with a combination of calcium chloride and ascorbic acid was effective in maintaining their brightness.
Rico et al. (2007) have previously reported that when a colorimeter is used to analyze color, increases in luminosity can be correlated with the development of whiteness in the test samples. In this study, however, the whiteness index (WI, Fig. 1) values of samples treated with high calcium doses (3% and 5% Ca + AA) were lower than those of the control and 1% Ca + AA treatments. Therefore, on account of the decrease in WI values of the samples, it can be concluded that treatment with a high dose of calcium prevented the whitening of samples. Similar results were reported for ascorbic acid treatments of carrots [6,10]. Interestingly, in the case of carrot shreds, calcium treatments alone did not affect white tissue formation and WI values were observed to increase [2]. However, ascorbic acid alone was effective in inhibiting white color formation on the surface of carrots [6]. Therefore, as a result of this study, it can be concluded that the combined use of calcium and ascorbic acid enhances the color quality and also prevents whitening of carrot shreds. Similar results were found for nectarine halves [12].
Hue Angle and Saturation Index Values
Fig. 2 shows the hue angle (h*) values measured on day 1 and day 11 of storage. During storage, treatment with a combination of ascorbic acid and calcium, irrespective of dose, resulted in a reduction in h* values; significantly higher values were found in control samples compared to those treated with Ca + AA (P < 0.05). Saturation index (SI) values of samples subjected to 3% Ca + AA and 5% Ca + AA were higher than those treated with 1% Ca + AA and the control (Fig. 2; P < 0.05). On day 4, the highest SI value (53.756) was found in samples treated with 3% Ca + AA. By contrast, the lowest value was observed in carrot shreds treated with 1% Ca + AA (44.303). However, from the 4th day of storage until the conclusion of the study, carrot shreds in the control group had the lowest SI values.
The main physiological consequence of fresh-cut processing in the case of carrots is surface whitening, which results from a combination of dehydration and lignin formation and leads to significant loss of quality. In this study, the orange color of carrot shreds was maintained by treatment with calcium and ascorbic acid, especially when samples of the control group were compared with those that received the high-dose treatments. In the case of minimally processed cabbage, results clearly demonstrated that while treatment with ascorbic acid did not lead to significant differences between test and control samples with respect to color or general appearance, treatment with 2% CaCl2 at 20 °C resulted in consistent maintenance of high quality with less intense browning and the best general appearance [13]. These results were also confirmed in the present study.
Visual Quality, Firmness and Bitterness Scores of Carrot Shreds
Visual quality scores of test samples, regardless of the type of treatment, decreased by day 4. Subsequently, scores increased until the 8th day of storage, after which they continued decreasing until the end of storage (Fig. 3). However, the appearance of samples treated with calcium and ascorbic acid was superior to the control during the first eight days of storage, with the difference between treatments reaching statistical significance at day 4 (P < 0.05). Thus, compared with the control samples, the visual quality scores of the treated carrot shreds were highest during the first 4 days of storage; after that, the effectiveness of the treatments declined, so that by the end of the storage period the visual quality scores of the Ca + AA treated samples were well below those of the control group.
According to the firmness scores, the texture of the shredded carrots was retained best in the control group, followed by samples treated with 1% Ca + AA, 3% Ca + AA and 5% Ca + AA (Fig. 3). Statistically significant differences among the treatments were observed on days 4 and 11 of storage (P < 0.05 and P < 0.001, respectively). Fresh-cut vegetables that maintain a firm, crunchy texture are highly desirable because consumers associate such textures with freshness and wholesomeness of produce [14]. The development of such undesirable textural changes in minimally processed products can be reduced by the application of calcium salts (calcium-chloride, -carbonate, -lactate, -propionate, -pectate, etc.) because the rate of softening is directly related to the reduction of calcium levels in fruit tissues [11]. Studies have shown that application of Ca salts to pears, strawberries, kiwifruits, shredded carrots, honeydew discs, nectarines, peaches and melons helps in retaining tissue firmness [15]. Firmness scores of treated carrot shreds across all Ca + AA doses were lower than those obtained for control samples during storage, and differences among the treatments were statistically significant at day 4 (P < 0.05) and day 11 (P < 0.001). Results also clearly showed that the firmness scores of samples treated with 1% CaCl2 were higher than those of the other calcium treatments, leading to the conclusion that the CaCl2 treatments were not effective in improving the texture of carrot shreds during storage. However, as per the weight loss results obtained in this study (Fig. 4), the recorded weight losses of the 3% Ca + AA and 5% Ca + AA treatments were lower than those obtained for 1% Ca + AA and the control. Hence, the high firmness scores of the control and 1% Ca + AA samples can potentially be explained as a byproduct of water loss.
Bitterness of carrot shreds increased with increasing CaCl2 dose (Fig. 3), with the highest (least bitter) scores obtained by the 1% Ca + AA treatment (4.33), followed by 3% Ca + AA (2.0) and 5% Ca + AA, as recorded on day 4; this pattern continued during storage. Differences among the treatments were statistically significant at the level of P < 0.001 (days 4 and 8 of storage) and P < 0.05 (day 11 of storage). Studies [16] determined that exogenous administration of CaCl2 in the form of a solution can reduce browning as well as flesh softening in the case of zucchini squash slices. However, CaCl2, when used in high concentrations (> 0.5%), has been known to cause a detectable off-flavor. The results of the present study corroborate the above-mentioned results.
Weight Loss
Weight loss of all the treated samples increased during storage (Fig. 4). The highest weight loss was observed in the control group (0.18 and 0.29), followed by the 1% Ca + AA (0.10 and 0.26), 5% Ca + AA (0.09 and 0.18) and 3% Ca + AA (0.008 and 0.14) groups, as noted on days 4 and 8, respectively. Statistically significant differences were also observed among the treatment groups during storage. Therefore, it can be concluded that CaCl2-ascorbic acid treatments have a significant effect on weight loss, especially at higher doses.
Peel or skin is a very important barrier against desiccation and loss of turgor. Several fruits and vegetables have a protective waxy coating that makes them highly resistant to water loss. Mechanical injury to the skin brought about by peeling, cutting, slicing, shredding, etc., makes fresh-cut products highly susceptible to weight loss because the protective peel is no longer intact [11,15]. In the present study, water loss from carrot shreds was reduced by treatment with a combination of calcium and ascorbic acid. Izumi and Watada [2] previously reported that Ca has no observable effect on weight loss for carrot slices and sticks but is effective in preventing it for carrot shreds. Their results also showed that carrot shreds have almost two and three times more Ca content than sticks and slices, respectively. Additionally, Ca has widely been reported to play an important role in preserving the structural integrity and mechanical strength of cell walls [9]. The reduced weight loss observed for carrot shreds treated with Ca + AA in the present study can therefore be attributed to the Ca absorbed by the samples.
Total Soluble Solids
Fig. 5 shows total soluble solid (TSS) values for carrot shreds subjected to different treatments. TSS of carrot shreds decreased on day 4, but this decrease was greater in the control group than in the calcium and ascorbic acid treated samples. In quantitative terms, TSS of the control group was 1.2%, whereas for the Ca-treated carrot shreds it ranged from 3% to 7%. After day 4, TSS of the control group steadily decreased until the end of the storage period, whereas that of the Ca-treated carrots increased. TSS of samples treated with a combination of Ca and AA was higher than that of the control, and the higher values were observed in correlation with high doses of CaCl2 treatment during storage. Differences in TSS values among the various treatments were statistically significant (P < 0.001).
The edible portion of carrot contains about 10% carbohydrate, with the soluble carbohydrate composition ranging from 6.6 to 7.7 g per 100 g [14]. In the present study, the initial TSS content of carrot shreds was 8%. This value decreased across treatments under storage conditions. Interestingly, the maximum decrease in TSS values was observed in the control group, where values fell from about 1% on day 4 to below 1% on days 8 and 11. In contrast, TSS of calcium-treated carrot shreds remained consistently high, especially for shreds treated with 5% calcium, where values ranged from 7.1% to 7.7%. TSS of samples in the 1% Ca + AA and 3% Ca + AA treatment groups was also high compared to the control. Therefore, it can be concluded that calcium and ascorbic acid treatments prevent the loss of TSS in carrot shreds, especially when used at high doses.
Electrolyte Leakage
On day 4, electrolyte leakage (EL) from carrot shreds in the control group and the 1% Ca + AA treated group decreased, whereas in the 3% Ca + AA and 5% Ca + AA treated samples it increased (Fig. 6). The EL values of the control samples continued to decline, but an increase was noted in the 1% Ca + AA and 3% Ca + AA treated groups. Moreover, the 3% Ca + AA treatment fluctuated between decreases and increases during storage. Differences in EL values among the treatments were statistically significant (P < 0.001). Leakage of electrolytes or cellular content is commonly used as an index for evaluating changes in membrane integrity arising from ripening, stress damage or mechanical injury [7]. Electrolyte leakage is considered an indirect measure of plant cell membrane damage [17]. In the present study, EL values for the calcium and ascorbic acid treated samples were higher than those of the control group. Therefore, it can be concluded that the CaCl2 treatments did not confer a membrane-stabilizing effect through exogenous calcium ions.
Conclusions
This study aimed to determine the impact of calcium and ascorbic acid on the quality of carrot shreds during storage. For this purpose, carrots were grated and treated with solutions containing varying doses of calcium along with 2 g/L ascorbic acid. The carrots were then stored for 11 days in a cold room at 5 ± 1 °C and 85-90% RH. According to the results obtained from this study, calcium was found to improve color quality and brightness while decreasing the development of whiteness on carrot shreds. As a cautionary note, however, calcium, especially at higher doses, could cause bitterness of carrot shreds. Weight losses of carrot shreds treated with calcium and ascorbic acid were lower than those of the control group, whereas the firmness scores of these treated samples were low compared with the control. While calcium treatment improved the visual quality of the produce during the first eight days of storage, it was found to lose its efficacy after that. In addition, the calcium treatments showed no membrane-stabilizing effect.
The Effects of Calcium Chloride and Ascorbic Acid Treatment on Ready-to-use Carrot Shreds | 2018-12-11T07:33:17.673Z | 2016-01-28T00:00:00.000 | {
"year": 2016,
"sha1": "5d3067d4e0ff8e7d0eccdb30aa88ae3a378b3c20",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.com/Public/uploads/Contribute/5715cafe9580f.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5d3067d4e0ff8e7d0eccdb30aa88ae3a378b3c20",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
269643719 | pes2o/s2orc | v3-fos-license | Plasmapheresis combined with rituximab treatment of a case of thrombotic thrombocytopenic purpura with Sjögren syndrome and renal impairment: A case report
Rationale: Thrombotic thrombocytopenic purpura (TTP) is a rare thrombotic microangiopathy caused by reduced activity of the von Willebrand factor-cleaving protease (ADAMTS13), which can be life-threatening. The patient reported in this case study also had concurrent Sjögren syndrome and renal impairment, presenting multiple symptoms and posing a great challenge in treatment. Patient concerns: A 25-year-old woman in the postpartum period visited the hospital due to indifference in consciousness for more than 1 day following cesarean section 8 days prior. Diagnosis: Notable decreases were observed in platelets, hemoglobin, creatinine, and ADAMTS13 levels. After a consultative examination by an ophthalmologist, she was diagnosed with retinal hemorrhage in the right eye and dry eye syndrome in both eyes. Interventions: Having been diagnosed with TTP with Sjögren syndrome and renal impairment, she received repeated treatments with plasmapheresis combined with rituximab. Outcomes: Following treatment and during the follow-up period, the patient’s platelet counts and bleeding symptoms significantly improved. Lessons: TTP has a high mortality rate, and when combined with Sjögren syndrome and renal impairment, it poses an even greater challenge in treatment. However, after administering standard plasmapheresis combined with rituximab treatment, the treatment outcome is favorable.
Introduction
Thrombotic thrombocytopenic purpura (TTP) is a rare thrombotic microangiopathy caused by a deficiency in the activity of ADAMTS13, a von Willebrand factor-cleaving protease. This leads to widespread microvascular thrombosis, causing microangiopathic hemolytic anemia, consumptive thrombocytopenia, and organ dysfunction in areas such as the heart, brain, and kidneys. The annual incidence of TTP is 2 to 6 per million people, with a female-to-male ratio of about 2:1, and the disease typically peaks between the ages of 30 and 50. The majority of TTP patients have a rapid onset of symptoms and a critical condition, [1,2] posing a serious threat to their lives. This report presents a case of TTP combined with Sjögren syndrome and renal involvement and discusses, with reference to the relevant literature, the efficacy of plasmapheresis combined with rituximab for treating TTP with Sjögren syndrome and renal involvement.
Case information
The patient was a young woman, 25 years old, who was admitted to the emergency department of Guizhou Provincial People's Hospital on February 8, 2022, due to "indifference in consciousness for more than 1 day following cesarean section 8 days prior." Present illness history: 8 days ago, the patient was full-term pregnant and had a platelet count of 21 × 10⁹/L at a local hospital. She was subsequently advised to transfer to our hospital's obstetrics department for an "intraperitoneal cesarean section with transverse uterine incision + pelvic adhesion separation surgery." The surgery went smoothly, with an estimated blood loss of about 400 mL. Postoperatively, her platelet count was 27.0 × 10⁹/L. However, she and her family refused further examination and treatment and insisted on signing out against medical advice. Three days prior, the patient experienced fatigue without an obvious cause, and a bruise appeared on the back of her left hand, approximately 6 × 3 cm² in size. Two days prior, she experienced a slight nosebleed, with no other symptoms such as gum bleeding, headache, dizziness, or blurred vision. She did not seek medical attention. One day prior, the patient developed indifference in consciousness without a clear trigger, accompanied by vomiting of gastric contents once and palpitations. There was no hematemesis, hematochezia, syncope, chest pain, limb spasms, or trismus. She sought emergency treatment at our hospital with "1. Suspected thrombotic thrombocytopenic purpura? 2. Severe hemolytic anemia? 3. Post cesarean section" and was admitted to the obstetrics department. Past medical history: she denied any history of infectious diseases such as hepatitis or tuberculosis, any history of cardiovascular diseases such as hypertension or coronary heart disease, and any history of diabetes, cerebrovascular diseases, or mental illness; her vaccination history was unknown. She also denied any history of trauma or transfusion and any history of food or drug allergies. Upon admission, the physical examination revealed a temperature of 38.6°C, pulse of 127 beats/minute, respiration rate of 21 breaths/minute, blood pressure of 102/70 mm Hg, a 6 × 3-cm² bruise on the back of the left hand, and a 2 × 3-cm² bruise on the left buttock, with scattered petechiae on the skin. The abdominal wound was healing well; the abdomen was soft without tenderness, rebound tenderness, or muscle tension; the uterine fundus was 2 transverse fingers above the pubic symphysis, with scanty lochia without odor. Blood tests performed immediately after admission to obstetrics showed: white blood cells 16.69 × 10⁹/L, hemoglobin 54.0 g/L, platelets 4.0 × 10⁹/L, schistocytes 3%. Coagulation tests showed: fibrin degradation products 16.1 μg/mL, D-dimer 5.31 μg/mL. The antinuclear antibody spectrum showed: antinuclear antibody (Hep-2 cells + liver cells) positive, nuclear particle pattern; antinuclear antibody titer 1:3200; anti-SS-A antibody positive (+++); Ro-52 positive (+++); anti-proliferating cell nuclear antigen antibody weakly positive. Abdominal gynecological ultrasound showed an enlarged uterus with no abnormalities of the appendages on either side. Routine electrocardiogram showed sinus tachycardia with a heart rate of 124 beats/minute and low-voltage QRS waves in the limb leads. Cranial and pulmonary CT scan showed pericardial effusion and lower density of the heart chambers and large blood vessels, suggesting anemia; there were no obvious abnormalities on the cranial CT scan, with MRI to be considered if necessary. As the patient's platelet
count was extremely low, which could lead to critical organ bleeding or spontaneous intracranial hemorrhage at any time and endanger her life, and TTP could not be ruled out, the prognosis was extremely poor, the condition was critical, the mortality was high, and treatment was difficult, so she was transferred to the ICU for continued treatment.
On February 8, 2022, at 21:11, the patient was admitted to the ICU. On examination, the patient had a temperature of 36.6°C, heart rate of 128 beats/minute, blood pressure of 111/67 mm Hg, respiratory rate of 25 breaths/minute, and an oxygen saturation of 98% on nasal cannula oxygen. The patient appeared drowsy but was able to open her eyes when called loudly. The patient was breathing spontaneously without obvious distress. There was a 6 × 3 cm² bruise on the back of the left hand and a 2 × 3 cm² bruise on the left buttock, with scattered petechiae on the skin. The pupils were round and equal in size, with a brisk reaction to light. The breath sounds were clear in both lungs without significant dry or wet rales. The heart rhythm was regular. The abdomen was soft and non-tender. No enlargement of the liver or spleen was palpable under the ribs, bowel sounds were weak, and there was no edema in either lower extremity. Repeat blood tests showed a platelet count of 4.0 × 10⁹/L and a hemoglobin concentration of 49 g/L. High-sensitivity cardiac troponin I was 1.5420 μg/L, lactate dehydrogenase was 2091 U/L, and renal function showed a creatinine of 190 μmol/L with an estimated glomerular filtration rate of 31 mL/min/1.73 m². Liver function tests showed a total bilirubin of 36.2 μmol/L, direct bilirubin of 12.9 μmol/L, indirect bilirubin of 23.3 μmol/L, and albumin of 31.0 g/L. The erythrocyte sedimentation rate was 87 mm/h. After consultations with the departments of hematology, nephrology, rheumatology, and immunology, a strong possibility of TTP with renal failure was considered, and connective tissue diseases such as SLE or Sjögren syndrome were also suspected. Therefore, on February 9, 2022, intensive treatment was initiated: daily plasma exchange (2000 mL of fresh frozen plasma) to remove pathogenic antibodies, pulse therapy with methylprednisolone (1000 mg intravenous drip for 3 days, then gradually reduced to 40 mg daily as maintenance), intravenous immunoglobulin (pH 4) at 20 g daily for 5 days to block pathogenic antibodies, rituximab at 375 mg/m² intravenous drip weekly to induce apoptosis of B cells, and blood transfusions (2 units of washed red blood cells intermittently) to improve anemia. During the treatment, ADAMTS13 activity was tested, revealing inhibitory antibodies (+) and ADAMTS13 activity of 2.15%, which confirmed the diagnosis of TTP. Additional tests, including the antineutrophil cytoplasmic antibody spectrum, autoimmune hepatitis antibody spectrum, antistreptolysin O, anti-cyclic citrullinated peptide antibody, CD55, CD59, direct antiglobulin test, acid hemolysis test, infectious disease screening, and tuberculosis T-SPOT, did not show any remarkable abnormalities. Following the intensive treatment, on February 16, 2022, a repeat complete blood count showed hemoglobin of 82.0 g/L and platelets of 264.0 × 10⁹/L. The anemia and thrombocytopenia showed significant improvement, and the patient's mental state became clearer. To further clarify the causes of the connective tissue disease and renal failure, the patient was transferred to the Department of Nephrology, Rheumatology and Immunology for further treatment on February 16, 2022. Up to this point, the patient had received 2 days of methylprednisolone 1000 mg daily, 3 days of 80 mg daily, and 2 days of 40 mg daily, had undergone 4 plasma exchange sessions, and had received 2 doses of rituximab at 600 mg intravenous drip weekly according to the treatment plan.
After being transferred to the Department of Nephrology, Rheumatology and Immunology, further tests were conducted. Complete blood count: hemoglobin 89.0 g/L; platelets 274.0 × 10⁹/L; erythrocyte sedimentation rate 21 mm/h; immunoglobulin G 20.2 g/L. Other tests, including routine urinalysis, complements C3, C4, and C1q, C-reactive protein, tumor markers, schistocytes (fragmented red blood cells), procalcitonin, coagulation profile, serum protein electrophoresis, and lupus anticoagulant, were all within normal ranges. Ophthalmology consultation: right eye, dot-and-blot hemorrhages noted adjacent to the optic disc. Tear break-up time: 5 seconds for the right eye, 8 seconds for the left eye. Schirmer test (tear secretion test): 6 mm for the right eye, 4 mm for the left eye. Diagnosis: hemorrhage in the retina of the right eye and dry eye syndrome in both eyes. The diagnoses considered were TTP; connective tissue disease (Sjögren syndrome, systemic lupus erythematosus [SLE]); and renal insufficiency, the underlying cause of which remained to be investigated. On February 20, 2022, a repeat CBC revealed a sharp decline in platelets to 11.0 × 10⁹/L, which was thought to be associated with uncontrolled TTP. The patient was then given a 3-day course of pulse therapy with methylprednisolone (200, 300, and 300 mg), IV immunoglobulin at 20 g daily for 3 days, and plasma exchange (2000 mL of fresh frozen plasma) every other day. Hydroxychloroquine sulfate was also administered orally at a dose of 0.2 g twice a day to modulate the immune system. On February 24, a follow-up revealed platelets of 34.0 × 10⁹/L, 1% schistocytes, ADAMTS13 activity of 1.63%, and negative ADAMTS13 inhibitory antibodies, suggesting that the TTP was not completely controlled. An immune-related reduction in platelet count also could not be excluded; thus, cyclosporine capsules at 50 mg orally twice a day were added for immunosuppression. Because of the fluctuating platelet counts over the preceding days, plasma exchanges were conducted continuously from February 26 to 28, using 2000 mL of fresh frozen plasma each day. During this period, hemoglobin was measured at 75.0 g/L and platelets at 90.0 × 10⁹/L. To consolidate the treatment effect, plasma exchanges were continued on March 1 and 2, after which hemoglobin was 79.0 g/L and platelets were 190.0 × 10⁹/L, with schistocytes < 1%. On March 3 and 4, the therapy continued with plasma exchanges, and on March 4, tests revealed ADAMTS13 activity of 63.22%, negative ADAMTS13 inhibitory antibodies, hemoglobin of 81.0 g/L, and platelets of 273.0 × 10⁹/L, with schistocytes < 1%. In accordance with the therapeutic regimen, a final dose of rituximab injection at 600 mg was administered for immunosuppression. On March 8, the patient's hemoglobin level had improved to 93.0 g/L and platelets to 497.0 × 10⁹/L. The patient's condition became relatively stable, and she was discharged with advice to maintain a low-salt, low-fat diet, rest sufficiently, avoid cold exposure, enhance nutrition, and not stop or reduce medication on her own. Follow-up visits were scheduled to monitor the CBC, liver and kidney function, the antinuclear antibody spectrum, and other relevant indicators. After discharge, the patient revisited the hospital for follow-up on March 17, April 28, and June 7. The patient exhibited significant improvements in platelets, hemoglobin, and kidney function (Fig. 1A and B).
Outcomes
From February 8, 2022, to February 16, 2022, the patient received methylprednisolone 1000 mg QD for 2 days, 80 mg QD for 3 days, and 40 mg QD for 2 days, underwent plasma exchange 4 times, and received rituximab 600 mg intravenously QW for 2 doses according to the treatment course. Subsequently, from February 16, 2022, to March 4, 2022, the patient received multiple plasma exchanges combined with rituximab treatment in the Department of Nephrology, Rheumatology and Immunology. After treatment, the platelet count and hemoglobin increased significantly (Fig. 1A), D-dimer and fibrin degradation products decreased significantly (Fig. 1B), renal function improved significantly (Fig. 1C), the reticulocyte percentage improved (Fig. 1D), the schistocyte count decreased (Fig. 1E), and ADAMTS13 activity increased significantly (Fig. 1F).
Discussion
TTP is a rare and severe microvascular thrombotic disease, [3] primarily caused by a severe deficiency of ADAMTS13. The deficiency or decreased activity of ADAMTS13 leads to the formation of excessive ultra-large von Willebrand factor multimers in the blood, resulting in endothelial damage. This can trigger the aggregation of platelets and further lead to the formation of microthrombi, causing TTP. Based on the mechanism of ADAMTS13 deficiency, TTP is classified into hereditary TTP (congenital TTP, also known as Upshaw-Schulman syndrome) and immune-mediated TTP. Although the incidence of TTP is relatively low, most patients have a rapid onset and critical illness. Without prompt recognition and treatment, the mortality rate can be as high as 90%. [4] Some patients may ultimately progress to end-stage renal disease. [5] The main clinical manifestations of TTP include the classic "pentad" of fever, microangiopathic hemolytic anemia (MAHA), thrombocytopenia, renal dysfunction, and neurological symptoms. However, this "pentad" is not often seen clinically; a "triad" of MAHA, thrombocytopenia, and neuropsychiatric symptoms is more common. The "2022 Chinese Guidelines on the Diagnosis and Treatment of Thrombotic Thrombocytopenic Purpura" state that the laboratory diagnosis includes varying degrees of anemia and fragmented red blood cells (>1%) on the peripheral blood smear, with most cases showing an increased proportion of reticulocytes. Platelet counts are significantly reduced (often lower than 20 × 10⁹/L), with a marked dynamic decrease, and plasma ADAMTS13 activity is significantly reduced (<10%). In this case, the patient's initial hemoglobin was 54.0 g/L, platelets were 4.0 × 10⁹/L, fragmented red blood cells were 3%, the reticulocyte percentage was 17.51%, and the patient had positive inhibitory antibodies to ADAMTS13 with an ADAMTS13 activity level of 2.15%, thus confirming the diagnosis of TTP. Additionally, the guidelines recommend plasma exchange as the first-line treatment to remove ADAMTS13 inhibitors, IgG antibodies, and other pathogenic factors, optionally combined with corticosteroids. Rituximab, a humanized anti-CD20 monoclonal antibody, may reduce the production of autoantibodies by decreasing B lymphocyte numbers and is increasingly used in early treatment, improving disease remission and disease-free intervals. When plasma exchange and corticosteroids alone are not effective, combining treatment with rituximab is a safe and effective method to prevent acute relapse.
[6,7] After treatment with intravenous immunoglobulin, steroid pulse therapy, and red blood cell transfusion, although the patient's hemoglobin level increased, it did not return to normal, and the platelet count showed no significant change, remaining low and increasing the risk of visceral and intracranial bleeding. The presence of a positive antinuclear antibody spectrum suggested immune-related thrombocytopenia, possibly related to connective tissue disease. Following 4 consecutive days of plasma exchange, the platelet count rose to 154.0 × 10⁹/L, indicating that the combination of plasma exchange, immunoglobulin, and steroid pulse therapy can be effective for acute-phase patients. However, 3 days later, a retest showed that the platelet count had dropped to 11.0 × 10⁹/L, suggesting that the volume of plasma exchange was insufficient and had not completely removed the autoantibodies. Therefore, after an additional ten plasma exchanges, the platelets increased to 497.0 × 10⁹/L. At follow-up on June 7, 2022, the patient's platelet count was stable at 497.0 × 10⁹/L. This case suggests that for TTP combined with autoimmune connective tissue disease, adopting plasma exchange in conjunction with biologics and adequate doses and courses of steroid pulse therapy provides a new approach. The definitive efficacy of this treatment strategy requires validation through further clinical trials.
Limitations
During the hospital stay, the patient was positive for antinuclear antibodies and lupus anticoagulant tests: lupus anticoagulant screening at 27.7 seconds and lupus anticoagulant confirmation at 28.1 seconds. Because of the patient's poor platelet counts and coagulation function, the risk associated with biopsy was increased; therefore, bone marrow aspiration, renal biopsy, and lip gland biopsy were not performed. This study is only a case report; more clinical evidence is required, which calls for long-term observation and evaluation in large-sample case studies. | 2024-05-11T05:09:38.285Z | 2024-05-10T00:00:00.000 | {
"year": 2024,
"sha1": "fef29d3ad6ae69f6f936662eb360bc1dc369aa4b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fef29d3ad6ae69f6f936662eb360bc1dc369aa4b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258841262 | pes2o/s2orc | v3-fos-license | Darwin: A DRAM-based Multi-level Processing-in-Memory Architecture for Data Analytics
Processing-in-memory (PIM) architecture is an inherent match for data analytics applications, but we observe major challenges to address when accelerating them using PIM. In this paper, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics, which fully exploits the internal bandwidth of DRAM using the bank-, bank group-, chip-, and rank-level parallelisms. Considering the properties of data analytics operators and DRAM's area constraints, Darwin maximizes the internal data bandwidth by placing the PIM processing units, buffers, and control circuits across the hierarchy of DRAM. More specifically, it introduces the bank processing unit for each bank, in which a single instruction, multiple data (SIMD) unit handles regular data analytics operators, and the bank group processing unit for each bank group to handle workload imbalance in the condition-oriented data analytics operators. Furthermore, Darwin supports a novel PIM instruction architecture that concatenates instructions for multiple thread executions on bank group processing entities, addressing the command bottleneck by enabling separate control of up to 512 different in-memory processing units simultaneously. We build a cycle-accurate simulation framework to evaluate Darwin with various DRAM configurations, optimization schemes and workloads. Darwin achieves up to 14.7x speedup over the non-optimized version. Finally, the proposed Darwin architecture achieves 4.0x-43.9x higher throughput and reduces energy consumption by 85.7% compared to the baseline CPU system (Intel Xeon Gold 6226 + 4 channels of DDR4-2933). Compared to the state-of-the-art PIM, Darwin achieves up to 7.5x and 7.1x speedup on the basic query operators and TPC-H queries, respectively. Darwin is based on the latest GDDR6 and requires only 5.6% area overhead, suggesting a promising PIM solution for the future main memory system.
I. INTRODUCTION
In the era of big data, data-intensive applications such as artificial intelligence [27] and data analytics [8] proliferate by utilizing extremely large datasets. As these applications mainly consist of operations with a low compute-to-memory ratio, memory operations dominate over compute operations, causing the "memory wall" [6]. Furthermore, the bottleneck on the memory side is exacerbated as the improvement of memory technology in speed falls behind logic technology. Continual efforts to increase the off-chip bandwidth of recent DRAM technologies [17], [28] result in higher IO speed and more pins, but they come at the cost of expense and power consumption.
To minimize the data movement overhead in data analytics, the database backend is relocated from storage to main memory [26], [35], [44], avoiding expensive disk IO accesses. Additionally, some analytical query operators (e.g., select, aggregate, sort, project, and join) are converted into vector operations, increasing the throughput of query processing [5]. Since the query operators iteratively compute on sequences of streamed data, vector-type processing can easily accelerate them by computing on many data elements at a time. However, even with a high-performance CPU, the vectorized query operations with a low compute-to-memory ratio cannot be sped up due to the ever-growing data size and the limitation of the off-chip bandwidth.
Previous research proposes accelerating data analytics on different hardware platforms, such as the field-programmable gate array (FPGA) [36], [46], [50] and the graphics processing unit (GPU) [33], [42]. However, these approaches focus on improving the computation capability while leaving unresolved the essential memory bottleneck that occurs in computing data-intensive applications. Therefore, the paradigm shift from computation-centric to memory-centric architecture is unavoidable in such data-intensive applications.
Fig. 1. Column-oriented Database Management System with vectorized operations
As a result, the inevitable memory bottleneck problem drives both industry and academia to reassess DRAM-based near-memory-processing (NMP) [7], [12], [19], [25], [50] and processing-in-memory (PIM) [13], [14], [16], [21], [22], [30], [31], [32], [34], [38], [48], [49] architectures that increase the internal bandwidth by integrating computational logic closely with DRAM devices/cells. NMP architectures integrate a homogeneous processing unit (PU) per vault in the base logic die of the hybrid memory cube (HMC), supporting flexible dataflow for query operations. However, these approaches, integrating PUs only external to the memory, lose the opportunity to fully benefit from the wide internal bandwidth of DRAM. On the other hand, PIM architectures can fully exploit the abundant internal bandwidth of DRAM. However, they cannot efficiently compute complex query operations with complicated internal data movement, since they are only capable of bulk data processing, such as vector-vector and matrix-vector multiplications, with a fixed data path. Furthermore, PIM architectures that integrate compute logic closer than the bank level (e.g., in-cell or near-subarray) are impractical because they reduce the density of cells significantly.
To address the limitations of the previous approaches, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics. First, Darwin reuses the conventional DRAM hierarchical architecture, saving the additional interconnect resources otherwise required for the internal data movement incurred by PIM computation. It exploits the bank, bank group, and rank levels for multi-level parallelism with reduced internal data movement. It utilizes a single instruction, multiple data (SIMD) fixed-point unit to exploit wide bank-level parallelism while setting the control granularity at the bank group level for flexible execution.
Second, Darwin provides an in-memory control unit to support seamless workload balancing after condition-oriented query operators, whose output data size is not predictable at static compile time. Third, Darwin modifies the command interface to avoid the command bottleneck of individually controlling multiple PUs simultaneously. We introduce a new PIM instruction architecture based on concatenated multi-bank group commands that enables independent but concurrent operations in multiple bank groups.
In this paper, we make the following contributions:
• We propose a multi-level PIM architecture for data analytics which fully exploits the internal bandwidth from the bank, bank group, and rank levels within a commodity DRAM architecture, reducing any additional overhead for practicality.
• We propose bank group-level processing units to support irregular data analytics operations, enabling dynamic runtime execution on the condition-oriented operations and low-overhead workload balancing.
• We propose a new command interface to support parallel but individual executions of in-memory PUs while avoiding the command bottleneck.
• We evaluate TPC-H and basic query operators on Darwin over the baseline CPU and state-of-the-art PIM.
A. Data Analytics
Most current database systems used in finance and business are relational database management systems (RDBMSs) [10], where data are stored in the form of relations, which comprise lists of tuples and attributes. A tuple represents a row, and an attribute represents a column in a relation. RDBMSs can be divided into two types depending on how data are stored: row-oriented [2] and column-oriented [18], [45]. The row-oriented database organizes data by record, sequentially storing the attributes of each record. It is optimized for reading and writing rows, as in online transaction processing (OLTP). On the other hand, a column-oriented database such as MonetDB [18] stores data as an array per attribute, which benefits reading and computing on columns, as in online analytical processing (OLAP), as shown in Figure 1. Furthermore, with column-oriented storage, the basic operators can be turned into vectorized query executions, where the execution is iteratively performed on batches of input data. As a result, sequential memory accesses are prevalent in column-oriented databases, for which PIM is suitable, as it promotes high data parallelism and wide utilization of the memory bandwidth.
In column-oriented storage, project is the dominant operator. Kepe et al. [20] analyze the latency breakdown of MonetDB on the TPC-H benchmark. The result shows that project takes up 58% of the overall latency, while each of the other operators (e.g., select, aggregate, sort, and join) takes only up to 11%. Project, which materializes intermediate tables, occupies the majority because it occurs after every query operator in a query plan. This is because column-oriented storage generates sets of object-IDs (OIDs), each representing the address of a tuple, which is unique within a relation, as the output of query operators. Using the OID result, project connects the previous and following operators by generating the intermediate tables, which are used as inputs by the following operator.
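As a rough software illustration of this OID-based chaining (not MonetDB's actual implementation; the column names and values are invented), a select producing OIDs followed by a project that materializes the intermediate column looks like this:

```python
# Columns of one relation, stored as per-attribute arrays.
quantity = [7, 42, 3, 18, 42]           # 'quantity' attribute
price    = [10, 25, 5, 12, 30]          # 'price' attribute

# select: emit the OIDs (tuple positions) satisfying the predicate.
oids = [oid for oid, q in enumerate(quantity) if q > 10]        # -> [1, 3, 4]

# project: materialize the intermediate table consumed by the next operator.
projected_price = [price[oid] for oid in oids]                  # -> [25, 12, 30]
```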
B. Architecture of Main Memory System
In order to properly exploit the PIM architecture, we need to understand the control granularity and the internal bandwidth of the main memory system. The internal bandwidth can be understood through the logical structure of the main memory. It adopts a multi-drop tree topology, where the highest level starts from the memory channel controlled by a memory controller on the host. A channel comprises multiple dual inline memory modules (DIMMs). Within a DIMM, several DDR chip packages are placed, forming a rank. The number of chip packages in a rank is determined by the number of DQ pins per package, where DQ pins are used for data input and output. The total number of DQ pins per DIMM is 64 bits to match the JEDEC specification. Multiple bank groups then make up a rank, where four banks form a bank group. Due to the multi-drop tree topology, only a single bank of the DDR packages within a rank can be accessed at a time across a channel. Receiving the same command and address from the memory controller, all DDR packages in a rank operate simultaneously, each contributing a part of the combined DQ pins. Even though DRAM can only access one bank at a time, this is efficient for DRAM since it can exploit bank interleaving to hide internal delays, such as activation and precharge delays, increasing bandwidth utilization. However, the multi-drop topology unavoidably limits the internal bandwidth, preventing multiple banks from being read or written simultaneously.
III. CHALLENGE OF DATA ANALYTICS
A. Internal Data Movement Overhead in Single-Level PIM
Conventional single-level PIM (SLPIM) architectures, such as those proposed in [7], [12], [16], [30], incur significant internal data movement when accelerating condition-oriented operators (e.g., join and project) and the merge phase of sort. SLPIM refers to an architecture that places PUs at only a single level (e.g., subarray, bank, or rank). We executed the basic data analytics operators to demonstrate the overhead of internal data movement in SLPIM. Figure 2 (a) shows the latency breakdown of the basic operators executed in SLPIM. The result shows two distinct trends, since the characteristics of the operators differ. First, the execution of sort, project, and join is dominated by internal data movement. As project and join are condition-oriented operators, they generate workload imbalance across the PUs and induce additional internal data movement. When executing these operators, the dataflow and input/output sizes are decided at runtime. In other words, the input can be evenly mapped onto different memory nodes for a balanced workload, but the intermediate data are distributed unevenly as different outputs are generated among the PUs. This leads to underutilization and performance degradation, especially in the parallel computing of the in-memory PUs. Thus, the workload must be balanced among the PUs to maximize hardware utilization and performance, which additionally generates data movement. Furthermore, the merge phases of sort and join cause significant internal data movement because data in different nodes are accessed frequently to merge separate partitions into one. Second, the data movement is negligible in select and aggregate, as they are neither condition-oriented nor have heavy merge phases. Instead, the main overhead is computation, as they require high computation throughput for vectorized query execution. As shown in Figure 2 (b), SLPIM's data access mechanism is inherently inefficient when a PU accesses a neighboring memory node. SLPIM's data access is degraded since every PU shares a global buffer for inter-node data movement, which makes the global buffer a bottleneck. Since data analytics incurs significant overhead in internal data movement, a PIM architecture should be capable of moving data inside DRAM efficiently. However, the conventional DRAM structure does not have a specific interconnect for internal data movement. While Rowclone [40] proposes bulk copying of a row of data across different banks, the significant data movement induced by data analytics is too flexible for Rowclone to be effectively utilized. TransPIM [51] and GearBox [31] propose application-specific networks-on-chip (NoCs) for efficient internal data movement within DRAM. However, their NoCs consume a large area overhead considering the DRAM area constraint. In particular, utilizing a customized NoC within a commodity DRAM is unscalable and impractical for the flexible data movement required by data analytics.
Depending on the target operations, SLPIM can place PUs at different levels (e.g., subarray, bank, or rank). Having a PU at a high level, such as the rank [7], [12], enables the direct use of data in multiple memory nodes. However, the advantage of PIM is decreased because high data parallelism cannot be achieved by placing the PU far from the memory nodes. Conversely, placing a PU at a lower level [16], [30] (e.g., bank) can maximize the advantages of PIM, but the performance is degraded by the frequent internal and off-chip data movement caused by the data analytics operators. When accelerating data analytics, SLPIM is not the most practical architecture, since flexibility in data movement and high computation parallelism cannot be achieved at the same time.
B. Command Bottleneck
The conventional DRAM command protocol causes a bottleneck in PIM since it is dedicated solely to utilizing off-chip bandwidth efficiently. It exploits bank interleaving by alternately accessing data from different banks, since it can only send a command (e.g., activation, read, write, precharge, or refresh) to a single bank at a time. It is not suitable for executing PIM operations involving more than one PU across multiple banks. Regardless of how fast DRAM receives the commands, the shortest latency saturates at tCCDS, the minimum interval between column read commands.
To address the command bottleneck, previous research [16], [30] proposed an all-bank mode. Instead of controlling a single bank, it sends one command that controls every bank identically. As a result, it can efficiently address the command bottleneck for matrix-vector multiplications, which only require a homogeneous dataflow. However, data analytics incurs irregularity across the PUs, requiring different dataflows due to the condition-oriented workload. Thus, the all-bank mode is not suitable for irregular query operators due to its low control granularity and poor flexibility, since each PU needs to compute a different workload. For example, when processing join and project in a PIM with multiple PUs, each PU computes a different workload depending on the partitioned input data. Because the required dataflow varies depending on the input data, the all-bank mode, which is only efficient for applications with a homogeneous dataflow, is not suitable for a PIM architecture that targets data analytics.
IV. DARWIN ARCHITECTURE
We propose Darwin, a practical multi-level PIM (MLPIM) architecture with concatenated instructions, multiple threads (CIMT) execution to address the challenges of in-memory data analytics processing. Darwin is capable of handling the complex dataflow and imbalanced workload problems while keeping the conventional DRAM hierarchical structure for practicality. Regarding the PIM-host interface, CIMT addresses the command bottleneck for the irregular operators, maximizing the command density and the hardware utilization.
A. Multi-Level PIM Architecture
As explained in Section III-A, SLPIM is ineffective in accelerating data analytics. To this end, Darwin integrates heterogeneous processing units at different levels of the DRAM hierarchy to achieve flexible internal data movement. Although this approach seems to increase hardware and software complexity, MLPIM is a practical solution that reduces the hardware complexity for data analytics, which requires complicated dataflow. First, by integrating hardware units at both high and low levels of DRAM, Darwin reduces hardware overhead and eliminates additional NoC costs. This is achieved by reusing the conventional DRAM network and leveraging efficient memory access across multiple memory nodes. Furthermore, to keep PIM feasible, Darwin integrates optimized PUs located no closer to the cells than the bank level. Second, Darwin reduces software complexity by integrating a hardware controller inside DRAM to manage the parallel processing of irregular operators.
As shown in Figure 2 (c), the major difference between SLPIM and MLPIM is that the level to which an operator is offloaded determines its memory access pattern. Therefore, PUs must be placed at levels matching the data access patterns of the operators. Regular operators (e.g., aggregate, select, and sort) are handled most effectively when processed at L1, where the highest bandwidth gain is guaranteed. Since their output data size is determined at static time, they do not incur workload imbalance. In addition, each PU requires few memory accesses to its neighboring memory nodes. On the other hand, irregular operators (e.g., join and project) are handled most effectively when processed at a higher level, where efficient irregular memory access can be provided. These operators cause frequent irregular memory accesses by a PU to several nodes; if the PU is placed at a lower level, these memory accesses slow down. Furthermore, the workload imbalance caused by these operators and by merge operators generates additional data movement across memory nodes for workload balancing.
The MLPIM architecture of Darwin, as depicted in Figure 3, incorporates computing units and control across different levels, such as the rank, chip, bank group, and bank. The bandwidth gain and corresponding operations at each level are summarized in the table. The major PUs are placed at the bank and bank group levels: the bank processing unit (BPU) supports regular operators to maximize data parallelism, while the bank group processing unit (BGPU) handles irregular operators that require data from a broad set of memory nodes by addressing the internal data movement across the banks within a bank group. To efficiently use all compute units, Darwin has a PIM command scheduler at the chip level that supports bank group-level threading. Each bank group acts as an independent processing entity that executes a thread, and multiple processing entities can execute multiple threads simultaneously. As depicted in the figure, Darwin supports multi-level data movement within a rank at each level, including inter-bank, inter-bank group, and inter-chip communication.
Fig. 4. Concatenated Instructions, Multiple Threads Architecture
B. CIMT: Concatenated Instructions, Multiple Threads
It is challenging to offload data analytics workloads, which comprise irregular and regular operators, to PIM, as stated in Section III-B. The conventional DRAM command protocol, which sends one command at a time over the narrow command and address (C/A) pins, cannot provide enough bandwidth to control Darwin's in-memory PUs separately when computing irregular operators. The all-bank mode can provide wide bandwidth to execute in-memory PUs simultaneously, but its control granularity is so coarse that it cannot process complicated data analytics operations efficiently. To this end, Darwin supports CIMT, which handles multiple in-memory PUs with fine control granularity and without the command bottleneck. CIMT is optimized for the physical layout of the main memory system, which has multiple DRAM chips in a rank. Unlike the conventional command protocol, in which different DRAM chips have to receive the same command, each bank group in the different DRAM chips receives different instructions. Darwin utilizes the 64-bit DQ pins when sending CIMT, using the write command for wider bandwidth. Thus, the timing constraints are the same as those of a write command.
Each of the four DRAM chips in Figure 4 has 16-bit DQ pins, forming a rank with 64-bit DQ pins for off-chip data transmission. With a burst length of 8, a total of 64 bytes is transferred per write command. To match this data size, a CIMT instruction comprises 8 different 64-bit PIM instructions concatenated together. Each PIM instruction is divided into four 16-bit slices, and the 16-bit slices of the different PIM instructions are placed in an interleaved manner, forming eight 64-bit interleaved words. The 64-bit interleaved words are streamed into Darwin through the 64-bit DQ pins, formatted so that the corresponding instruction is delivered to each chip. As a result, each chip receives a complete 64-bit instruction. It takes eight cycles to send all 8 PIM instructions with a burst length of 8.
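To make the interleaving concrete, the sketch below packs eight 64-bit PIM instructions into eight 64-bit write bursts for a rank of four x16 chips, so that each chip reassembles complete instructions from its own 16-bit lane. The slice-to-beat mapping is one plausible choice chosen for illustration, not the actual hardware encoding.

```python
def interleave_cimt(instructions):
    """Pack eight 64-bit PIM instructions into eight 64-bit bursts (burst length 8).
    Chip c (driving DQ bits [16c, 16c+16)) reassembles instructions c and c+4."""
    assert len(instructions) == 8
    # slices[i][s] holds bits [16s, 16s+16) of instruction i.
    slices = [[(ins >> (16 * s)) & 0xFFFF for s in range(4)] for ins in instructions]
    bursts = []
    for beat in range(8):
        group, s = divmod(beat, 4)        # instructions 0-3 in beats 0-3, 4-7 after
        word = 0
        for chip in range(4):
            word |= slices[group * 4 + chip][s] << (16 * chip)
        bursts.append(word)
    return bursts

def reassemble_chip(bursts, chip):
    """Collect the 16-bit lane of one chip into its two complete instructions."""
    lane = [(w >> (16 * chip)) & 0xFFFF for w in bursts]
    first = sum(lane[s] << (16 * s) for s in range(4))
    second = sum(lane[4 + s] << (16 * s) for s in range(4))
    return first, second

# Round-trip check with dummy instruction words.
instrs = [(0x0123456789ABCDE0 + i) & 0xFFFFFFFFFFFFFFFF for i in range(8)]
bursts = interleave_cimt(instrs)
assert reassemble_chip(bursts, 0) == (instrs[0], instrs[4])
```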
Each PIM instruction is decoded at the bank group level to generate up to 64 sequential PIM commands, relieving the burden of sending commands through the off-chip bandwidth. A PIM command, generated from a PIM instruction, is a DRAM-readable command (e.g., activate, precharge, read, and write) paired with control signals for the in-memory PUs. To increase the throughput of the BPUs, all banks within the same bank group receive the same PIM command simultaneously, enabling concurrent execution of the BPUs. Thus, the CIMT instruction architecture enables separate control of up to 512 different BGPUs simultaneously. The CIMT architecture is also applicable to other DRAM configurations.

C. Darwin Operation

Figure 5 shows Darwin's overall operation flow, which is mainly divided into the data preparation, computation, and output stages. The regular operators have a fixed data flow in computing vectorized operations: when each thread has the same input amount, the computation flow is identical in all threads. An example operation flow of select is shown in Figure 5 (a). In the data preparation stage, the PIM instructions are sent separately to each bank group, and the required input data are transferred to the BPU in each bank. To reduce latency, the BPU can directly use the data for one input operand from its own memory. The scalar data of a SIMD operand is sent only once in the preparation stage, since the attribute data of the other operand can be transferred directly from memory during the computation stage. In the computation stage, the BPUs execute SIMD operations in parallel. Only one PIM instruction is required to compute the select operator on a row of data, since 64 sequential PIM commands can be generated from it. The computation stage continues until the register is filled with the generated output. Having a 512-bit bitmask register, the BPU can compute select on one row, which generates a 512-bit output bitmask, and then move on to the output stage. In the output stage, the generated output data are stored back in memory. Due to the limited size of the registers in the BPU, the output data cannot be held in the register for the entire operation. For the select operator, four write commands are required per row to store the 512-bit bitmask data to memory.
The irregular operators have a much more complicated data flow and an imbalanced workload among the threads, even with the same input amount. An example operation flow of the project operator is shown in Figure 5 (b). In the preparation stage, the tuple number and the initial OID are set. Then, the 512-bit bitmask data generated by the previous select operator are transferred to the BGPU. The BGPU then receives the PIM instruction and generates the corresponding read commands for the input attributes based on the bitmask data. In the computation stage, each BGPU receives a different amount of workload due to the different bitmask that each one holds. By generating commands internally with the CIMT architecture, each BGPU executes its individual commands without a command bottleneck. Furthermore, the computation of the BGPU is rate-matched to the peak bandwidth of the bank group for a streaming execution flow. In the output stage, the selected data are stored in the output register of the BGPU. Once the register is ready, write commands are generated to store back the output attribute. To balance the workload seamlessly, the BGPU generates the write commands in a bank-interleaved manner so that data are written evenly to each bank's memory. This guarantees the shortest latency between write commands for maximum bandwidth utilization while exploiting bank group-level parallelism. This process is repeated until the end.
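The following short Python sketch (a behavioral illustration under our own simplifying assumptions, not Darwin's actual logic) captures the essence of this flow: the bitmask from a previous select drives which rows are read, and the projected values are written back in a bank-interleaved, round-robin order so that the banks stay balanced for the next operator.

```python
def bgpu_project(bitmask, attribute, num_banks=4, tuples_per_row=8):
    reads, writes, out = [], [], []
    # Computation: walk the bitmask row by row, gathering selected tuples.
    for base in range(0, len(attribute), tuples_per_row):
        row_mask = bitmask[base:base + tuples_per_row]
        if not any(row_mask):
            continue                       # no read command generated for this row
        reads.append(("RD", base))         # internally generated read command
        out.extend(attribute[base + i] for i, sel in enumerate(row_mask) if sel)
    # Output: spread the projected column over the banks in round-robin order.
    for i, value in enumerate(out):
        writes.append(("WR", "bank%d" % (i % num_banks), value))
    return reads, writes

mask = [1, 0, 1, 0, 0, 0, 1, 1] * 4
attr = list(range(32))
rd, wr = bgpu_project(mask, attr)
```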
V. IN-MEMORY LOGIC DESIGN
A. Bank-Level Processing Units

Figure 6 shows the BPU's microarchitecture specialized for processing regular operators. Different from previous PIM architectures designed for matrix-vector multiplication [16], [30], which have a straightforward data read and accumulation path, Darwin supports a long sequence of data processing for data analytics, including data read, sort, select, and aggregate. To support this as streaming processing without re-writing data, the BPU is composed of row registers, a SIMD unit, two permute units, and an OID processing engine (OPE).
Select and Aggregate The BPU receives 32B of attribute data from the bank's I/O sense amplifiers (IOSAs) and saves it in row register A or B. The SIMD unit comprises eight sets of 4-byte fixed-point adders and multipliers, supporting addition, multiplication, min, and max on eight 4-byte data, which matches the bandwidth of a bank. The SIMD unit outputs a bitmask, max, min, and result, which are multiplexed into the permute unit based on the operator's opcode. For the aggregate operator, the result is accumulated in row register A. For the select operator, the bitmask is used, in which each bit indicates whether the input tuple is selected or not. Compared to using the 32-bit OID as an output, the bitmask reduces the memory footprint for the output data by 32x. The output bitmask is saved in the bitmask register, where it can later be used for the project operator.
Sort We utilize the Bitonic merge-sort algorithm [4], which is known to work well with SIMD hardware, to accelerate the rather compute-intensive sort operator with the help of the BPU. The Bitonic sort network comprises ten stages, each requiring a total of 4 instructions: input permutation, min, max, and output permutation instructions [9]. In addition, the addresses of the data (i.e., OIDs) must be sorted as well, which doubles the number of instructions. To minimize the computation latency and the number of instructions, we have incorporated two permute units before and after the SIMD unit. The permute unit shuffles sixteen 4B data as input with pre-defined patterns, reducing the area overhead by optimizing the permute unit circuitry for seven permutation patterns, as shown in Figure 6. The output generated by the permute unit is sent to the SIMD unit for comparison operations. The SIMD unit generates both min and max data simultaneously, saving separate instructions for the min and max operations. The 16 output data are then sent to the permute unit for output permutation. The BPU also includes the OPE, which permutes the addresses tagged along with the data result. This eliminates the need to shuffle the OIDs separately, as the OIDs are shuffled simultaneously with the data.
Fig. 6. BPU Microarchitecture
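As a simplified illustration of one compare-exchange step (assumed semantics; the real BPU uses seven fixed permutation patterns and the full ten-stage network with alternating directions), the sketch below shows how min/max results and their OIDs move together, so no separate OID shuffle pass is needed.

```python
def bitonic_step(data, oids, stride):
    """One compare-exchange step: min goes to the lower index, max to the
    higher one, and each OID follows its data element."""
    out_d, out_o = data[:], oids[:]
    for i in range(len(data)):
        j = i ^ stride                     # partner index for this step
        if j > i:
            lo, hi = (i, j) if data[i] <= data[j] else (j, i)
            out_d[i], out_d[j] = data[lo], data[hi]
            out_o[i], out_o[j] = oids[lo], oids[hi]
    return out_d, out_o

vals = [7, 3, 9, 1, 5, 8, 2, 6, 4, 0, 15, 11, 13, 10, 12, 14]
oids = list(range(16))
vals, oids = bitonic_step(vals, oids, stride=1)
```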
B. Bank Group-Level Processing Unit
As described in Section III, computing condition-oriented operators in memory incurs challenges. To this end, we propose the BGPU, shown in Figure 7. The BGPU maximizes the efficiency of the project and join units by jointly optimizing the execution flow and the workload-balancing overhead. It comprises the bank group controller, the data analytical engine (DA engine), and the PIM command generator.
Data Analytical Engine The DA engine is composed of two vector registers, the project and join units, and the output FIFO. For the project operator, the OIDs and bitmask are stored in vector register A, and the attribute is stored in vector register B. The project unit can decode either OIDs or a bitmask, which indicate the selected tuples in the projected attribute. All operators except select generate a set of OIDs as a result, while the select operator generates a simple bitmask, as described for the BPU microarchitecture. Based on the preconfigured addresses and the initial OID values, the project unit first sends the bitmask or OIDs to the PIM command generator so that it can generate the memory read commands for the input attribute and the memory write commands for the output attribute. The index selector of the project unit selects the projected tuples among eight 4B tuple data every t_CCDL period to rate-match the peak bandwidth at the bank group level, assuming the DRAM configures 16-bit DQ pins with a burst length of 16. Then, the selected output data are stored in the output register. Depending on the selectivity, the index selector may select fewer than eight data. Once a complete set of eight 4B data is prepared in the output register, it is dispatched to the output FIFO, which eventually goes to the banks. To reduce the read and write turnaround latency, the output FIFO holds up to 256B of data and sequentially writes them back to the bank.
Fig. 7. BGPU Microarchitecture
For the merge phase of join, the two sorted attributes are fetched into vector registers A and B, while the two OID sets are stored in the OID registers of the join unit. The join unit merges the two input attributes by comparing the tuples sequentially, under the control of the join controller. The join controller sends the addresses of the required attribute data to the PIM command generator, which generates the memory read and write commands for the next input. In order to rate-match the peak bandwidth of the bank group, the join unit includes two comparators for processing two sets of tuples at a time, i.e., a total of four sets of 4B data and 4B OIDs. The OIDs of the output data that satisfy the join-merge condition are selected by the output OID selector, and the OIDs of the two matching tuples are sent to the output FIFO. As with the project operator, the output FIFO holds a set of output data and sequentially writes them back to the bank.
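A compact way to see what the join unit computes is the following Python sketch (our illustration, not the unit's microcode); it assumes, as in the join dataset used later, that the S side is a foreign key into R, so the R cursor can stay on a key while all matching S tuples are emitted.

```python
def merge_join(keys_r, oids_r, keys_s, oids_s):
    """Merge phase: scan two sorted key columns once and push the OID pairs
    of matching tuples to the output FIFO."""
    out_fifo = []
    i = j = 0
    while i < len(keys_r) and j < len(keys_s):
        if keys_r[i] < keys_s[j]:
            i += 1
        elif keys_r[i] > keys_s[j]:
            j += 1
        else:                              # join condition satisfied
            out_fifo.append((oids_r[i], oids_s[j]))
            j += 1                         # S is a foreign key into R: advance the S side
    return out_fifo

pairs = merge_join([1, 3, 5, 7], [0, 1, 2, 3], [1, 1, 3, 7, 8], [10, 11, 12, 13, 14])
# -> [(0, 10), (0, 11), (1, 12), (3, 13)]
```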
Bank Group Controller Instead of sending the intermediate results of the condition-oriented query operators to the host CPU to check the condition, Darwin provides the bank group controller, which takes over the CPU's role and removes the off-chip data movement of host-PIM communication. The bank group controller is responsible for managing the PIM commands within its bank group. The PIM commands come either from a PIM instruction or from the PIM command generator. A PIM instruction is decoded into PIM commands in the instruction decoder; these commands are generated sequentially and stored in the command queue. The PIM command generator in the bank group, on the other hand, generates PIM commands based on the initial configurations received from the BGPU instructions when executing the project and join operators. When PIM commands are stored in the queue, the controller receives an issuable signal from the command scheduler and sends out the PIM command.
PIM Command Generator The PIM command generator conditionally generates PIM commands determined by the condition-oriented operations. Since DRAM is a timing-deterministic device whose control signals are managed by strict timing rules, the host memory controller cannot decide when to properly send the next command if the execution flow is decided non-deterministically inside the memory by the bank group controller. This issue can easily be addressed with a simple handshaking protocol between the CPU and the PIM device: the host CPU holds the next PIM instruction if the ready signal is not asserted from the device side.
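A minimal sketch of this handshake (hypothetical interface names; the real signaling happens at the DRAM protocol level) is:

```python
class PimDevice:
    """Toy model: each submitted instruction expands into internally
    generated commands, and the device is ready once they are drained."""
    def __init__(self):
        self.pending = 0
    def ready(self):
        return self.pending == 0
    def tick(self):
        self.pending = max(0, self.pending - 1)
    def submit(self, instr):
        self.pending = instr["n_internal_cmds"]

def host_issue(instructions, device):
    for instr in instructions:
        while not device.ready():      # host holds the next PIM instruction
            device.tick()
        device.submit(instr)

host_issue([{"n_internal_cmds": 3}, {"n_internal_cmds": 1}], PimDevice())
```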
C. Chip-level Command Scheduler
Darwin includes a PIM command scheduler at the chip level to oversee all the bank group command queues while considering inter-bank timing constraints such as the row-to-row activation delay (t_RRD) and the four-bank activation window (t_FAW). Along with the inter-bank timing constraints, the scheduler manages each bank's state and counters, which indicate the remaining latency before each command can be issued. It pops the available PIM commands from each bank group command queue. The overhead of the command scheduler is extremely low since the PIM commands generated for data analytics operators have sequential memory access patterns.
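The sketch below (behavioral only, with illustrative timing values rather than real GDDR6 parameters) shows the kind of check the scheduler performs before popping an activate-generating PIM command from a bank group queue.

```python
T_RRD, T_FAW = 4, 24          # DRAM clock cycles; illustrative values only

class ChipScheduler:
    def __init__(self):
        self.act_history = []   # cycles of recently issued ACT commands

    def can_issue_act(self, now):
        recent = [t for t in self.act_history if now - t < T_FAW]
        if recent and now - recent[-1] < T_RRD:
            return False                    # violates row-to-row activation delay
        return len(recent) < 4              # at most four ACTs per tFAW window

    def issue_act(self, now):
        if self.can_issue_act(now):
            self.act_history.append(now)
            return True
        return False

sched = ChipScheduler()
issued_cycles = [cycle for cycle in range(40) if sched.issue_act(cycle)]
```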
D. Rank-Level Buffering
Data movement across the banks is unavoidable, e.g., for workload balancing after the irregular operators, and for aggregating, projecting, and merging two attributes among the banks. To this end, Darwin supports inter-bank, inter-bank group, and inter-chip communication to move data across the banks efficiently. It supports inter-chip communication by integrating a simple circuit in the buffer chip of the LRDIMM; Figure 3 shows that only the instruction decoder, the permute unit, and the DQ aligner are added. When moving data from one chip to another, the rank buffer first receives an instruction indicating a data read from a bank, and waits t_CAS to retrieve the data from the chip. The PIM instruction also indicates, via nCMD, the number of data to read, which generates a sequence of read commands; the rank buffer receives these data in series and stores them in the buffer. Then, the rank buffer receives an instruction indicating a data write to a bank in another chip. The instruction includes the permute index, which re-orders the data in the rank buffer before writing them back to the destination bank. The same process is supported at the chip level and the bank group level, which enables inter-bank group and inter-bank communication, respectively. The DQ aligner converts the transaction data format of the various DRAM types that Darwin supports to be compliant with JEDEC's 64-bit DQ pins for the main memory system. It can convert any form of 64-byte data (e.g., 32-bit with a burst length of 16) into 64-bit with a burst length of 8, which matches the width of the conventional main memory system.
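For instance, the DQ aligner's role can be pictured with the following tiny sketch (our simplification; the exact beat ordering is an assumption): 16 beats of 32-bit data are repacked into 8 beats of 64-bit data for the conventional x64 bus.

```python
def dq_align(beats_32bit):
    """Repack a 64-byte transaction from 16 x 32-bit beats into 8 x 64-bit beats."""
    assert len(beats_32bit) == 16
    return [(beats_32bit[2 * i + 1] << 32) | beats_32bit[2 * i] for i in range(8)]

aligned = dq_align(list(range(16)))   # 8 x 64-bit beats carrying the same 64 bytes
```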
A. Execution Flow of Darwin
The software stack for Darwin supports execution with multiple threads, as shown in Figure 8. The query operation and the corresponding tuples are evenly partitioned into several threads. The Darwin runtime library receives the query operations from the query plan that are offloaded to Darwin. After receiving the query operations, the Darwin instruction generator first maps the operand data to the memory space in a way that exploits the internal bandwidth the most. Darwin operates within a separate memory region that is distinct from the memory region utilized by the host, allowing it to bypass coherence issues. This designated area is made uncacheable and enables contiguous mapping of virtual memory addresses to contiguous physical memory addresses. By providing the driver with the starting address of each bank and bank group, the host can map data to a contiguous memory space. Darwin does not engage in virtualization at a level lower than the DIMM, such as the rank or bank group level, as it would be both impractical and unnecessary. Since it is typical to use multiple DIMMs to serve a database in data analytics, Darwin supports virtualization at the DIMM level with more flexible control and a large memory capacity. The instruction generator then converts the query operations into the Darwin instruction format shown in Figure 9. The Darwin memory manager manages the memory allocated by the Darwin device driver so that proper memory addresses are used. When enough instructions are generated to form a CIMT, the CIMT encoder generates a CIMT, which is then sent to the hardware. In the Darwin hardware, a CIMT instruction is divided among the bank groups where the threads are offloaded. Each BGPU, which includes a PIM instruction decoder and a command queue, receives its own commands simultaneously, and each thread is computed separately on each PU, exploiting intra-thread and inter-thread data parallelism. Figure 9 illustrates the 64-bit PIM instruction format. It contains an opcode to determine the type and an ID option to indicate the thread on which the instruction is executed. Depending on the opcode, the instruction format is divided into three categories: BPU, BGPU, and data movement.
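The following Python sketch (our reconstruction of the described flow, with hypothetical field names) illustrates the runtime side: tuples are evenly partitioned over the threads, one PIM instruction is generated per thread, and the instructions are grouped into CIMT bursts for the hardware.

```python
def offload_operator(opcode, num_tuples, num_threads=8, instrs_per_cimt=8):
    per_thread = (num_tuples + num_threads - 1) // num_threads
    pim_instrs = [{"opcode": opcode, "thread_id": t,
                   "first_tuple": t * per_thread,
                   "n_tuples": max(0, min(per_thread, num_tuples - t * per_thread))}
                  for t in range(num_threads)]
    # CIMT encoder: group instructions so each burst carries one per bank group.
    cimts = [pim_instrs[i:i + instrs_per_cimt]
             for i in range(0, len(pim_instrs), instrs_per_cimt)]
    return cimts

cimt_bursts = offload_operator("select", num_tuples=6_001_215)
```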
The BPU instruction format and its configurations are shown in Figure 9 (a) and (b), respectively. It supports three different input sources (e.g., memory, row register A, and OID register A) for one input operand of the SIMD unit, while the other operand is fixed to row register B. The permute case is used to control the BPU's permute network. The metadata is used to generate sequential PIM commands through the nCMD, step1, and step2 fields. The nCMD field determines the number of sequential PIM commands, up to 64, generated by the PIM instruction, while step1 and step2 determine the column-address offsets for the first and second input sources, respectively. For example, if the column address, nCMD, step1, and step2 are configured as 0, 4, 1, and 2, respectively, four sequential PIM commands are generated whose column addresses for the two input operands are (0,0), (1,2), (2,4), and (3,6), while the bank and row addresses are fixed.
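This expansion is easy to state as code; the sketch below reproduces exactly the example above (it is only an illustration of the addressing arithmetic, not the hardware decoder).

```python
def expand_pim_instruction(col, n_cmd, step1, step2):
    """Unroll one PIM instruction into n_cmd sequential commands whose two
    column addresses advance by step1 and step2, respectively."""
    return [(col + i * step1, col + i * step2) for i in range(n_cmd)]

assert expand_pim_instruction(col=0, n_cmd=4, step1=1, step2=2) == \
       [(0, 0), (1, 2), (2, 4), (3, 6)]
```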
The BGPU instruction format and its options are also shown in Figure 9 (a) and (b). These instructions are mainly for initializing the BGPU: they provide the BGPU with the initial configurations (i.e., input and output OIDs, tuple number, and memory address) needed to generate PIM commands for computing the project and join operators, as well as to allocate addresses for intermediate data. After the initial configurations are set, the start instruction initiates the project and join operators.
The data movement instructions enable data transfer between the memory and the other levels. However, sending PIM instructions for data movement occupies the DQ pins and leaves less room for the data transfer itself. The switching overhead of writing PIM instructions and reading data through the DQ pins becomes even worse as more data transfer occurs. To this end, the nRD and step1 options are enabled for data movement, generating sequential PIM commands in the BGPU and relieving the pressure that the PIM instructions put on the DQ pins. In addition, the permute index determines the data shuffle order for the permute unit in the rank buffer for inter-chip data movement.
Fig. 11. Darwin's Simulation Framework
C. Data Mapping
Darwin adopts a column-oriented DBMS to maximally exploit data parallelism and DRAM's bandwidth. In a column-oriented DBMS, the attributes are stored separately as array structures to accelerate the analytical query operators, which perform element-wise vector operations on the attributes. In addition, this column-oriented mapping leads to sequential memory accesses, for which Darwin can exploit the minimum memory access latency. Furthermore, Darwin adopts relation partitioning to process an analytical query in parallel. Figure 10 shows the relation mapping layout of Darwin, assuming a DIMM with 4 chips, 1 bank group per chip, and 2 banks per bank group. Since one bank group corresponds to one thread, 4 different threads can be generated. In order to balance the workload across the threads for maximum utilization, the columns of attributes and OIDs are evenly partitioned into 4 parts, the total number of threads, and each partition is mapped to the corresponding thread.
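A minimal sketch of this partitioning (our illustration, with a hypothetical column name) under the toy configuration above:

```python
def partition_relation(columns, oids, num_threads=4):
    """Split each attribute column and its OIDs evenly over the threads
    (one thread per bank group in the toy 4-chip configuration)."""
    n = len(oids)
    chunk = (n + num_threads - 1) // num_threads
    threads = []
    for t in range(num_threads):
        lo, hi = t * chunk, min((t + 1) * chunk, n)
        threads.append({
            "oids": oids[lo:hi],
            "columns": {name: col[lo:hi] for name, col in columns.items()},
        })
    return threads

parts = partition_relation({"l_quantity": list(range(100))}, oids=list(range(100)))
```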
A. Benchmarks
We use the TPC-H benchmark [1] with scaling factor 1 to generate a database whose relations include up to 6,001,215 tuples. For the overall performance of query processing, we evaluate with TPC-H Query 1, which mainly performs project and aggregate operators on low-cardinality data, and 6, which mainly performs select on high-cardinality data. We further evaluate Darwin to see the individual performance of each query operator, such as select, project, aggregate, sort, and join. For the dataset, we extract 8,388,608 tuples from lineitem relation of TPC-H at scaling factor 10. In addition, we use the datasets from Balkensen et al. [3] for join, where two relations, R and S, have 8,388,608 tuples. Note that, for the dataset for join, S is a foreign key to R, which means that every tuple in S has exactly one match to a tuple in R. The dataset assumes a column-oriented model, and both data and OID pairs are 4B integers.
B. Darwin Simulation Framework
Performance and Functional Simulation We modified the DRAMSim2 [39] simulator to support the computation of the proposed multi-level functions and the CIMT instructions for Darwin, as shown in Figure 11. MonetDB first converts SQL into an optimized query plan. Receiving a query plan and relations as input, the CIMT instruction generator produces a trace file of CIMT instructions for the operation of Darwin. It also receives information on the memory system for the Darwin configuration and the DRAM device parameters to properly generate the addresses and data order for the various DRAM devices. The instruction trace file is sent to Darwin's performance and functional simulator. As a result, we obtain Darwin's performance results in executing the query operators, such as bandwidth utilization and latency, as well as the overall power and energy consumption using the measurement results from the circuit simulator.
Area, Power, and Energy Measurement The area and power are measured using the Synopsys Design Compiler with a 28 nm CMOS technology at a 500 MHz operating frequency. The power is scaled considering the V_DD difference. We scale up the area by 80% [24] to account for the difference between the logic and DRAM process technologies, and then scale to match the process node. The rank buffer, PIM command scheduler, BGPU, and BPU are synthesized separately, and each logic block is scaled with the number of PUs used in the target DRAM device. The energy consumption of the in-memory PUs and internal data movement is measured by an event-driven method that accumulates the energy consumption per command to obtain the overall result. The average energy consumption per PIM command of the BPU and BGPU is measured using the PrimePower tool. The power for internal data movement is scaled and modeled from fine-grained DRAM [37]. The energy parameters are integrated into the performance simulator to measure the total energy consumption.
C. System Configuration
Hardware Baseline The TPC-H queries and basic operators are evaluated on four different state-of-the-art architectures: the baseline CPU, Mondrian [12], Newton [16], and Darwin. For the evaluation of the PIM architectures, we use the latest GDDR6 configuration for the maximum speedup, as previous research [28], [29] has shown the feasibility of GDDR6-based Darwin in the main memory system for AI applications. A DDR4 configuration is also used for the performance comparison against GDDR6-based Darwin.
CPU The baseline CPU is an Intel(R) Xeon(R) Gold 6226R CPU with 512 GB of DDR4-2933 over four channels, with a peak bandwidth of 93.84 GB/s. We measure the runtime of the TPC-H queries using MonetDB with 64 threads and exclude the query optimization step. For the comparison on the basic operators, we implement each operator in C/C++ using the Pthread library [11] to maximize the computation throughput with multiple threads. The sort function of the C++ standard library and a hash-join algorithm are evaluated. Previous PIM architectures We evaluated the performance of Newton and Mondrian, which are representative bank- and rank-level SLPIM architectures, respectively. Newton places PUs at the bank level, while Mondrian integrates PUs in the logic die of an HMC. In order to compare Darwin fairly with Newton and Mondrian, we implemented Newton and Mondrian using the same simulation framework as Darwin and matched the configurations of their hardware components, such as memory type, frequency, and PU configuration, to show the benefit of the multi-level architecture and CIMT. Newton's PU microarchitecture is not dedicated to data analytics, so we replace Newton's PU with the BPU and BGPU at the bank level for fairness. Mondrian is a SIMD-based architecture, so we matched its SIMD width to Darwin's.
Darwin Table I summarizes the DRAM parameters used for Darwin. We follow typical GDDR6 and DDR4 settings. GDDR6 has two pseudo-channels (PC) in a package (i.e., chip), where each PC has 16 DQ pins with a burst length of 16. Thus, a total of 64 bytes of data can be transferred per read or write command, and only one package is needed to constitute a rank. To fairly compare the performance of database operators on real CPU hardware and on the trace-based simulator, we only compare runtime. The execution time on the baseline CPU is evaluated without query pre-processing steps, such as query plan generation and optimization. For the trace-based simulation, the execution time is evaluated using a trace file with pre-generated CIMT instructions. In addition, as the simulator always runs optimally, we ensure that the baseline CPU also runs optimally by choosing the thread count with the best performance for each operator.
VIII. EXPERIMENTAL RESULTS
In this section, we evaluate the benefits of Darwin for various analytics operators and queries. We first compare the performance of Darwin against the state-of-the-art PIM architectures and the baseline CPU, and then give a detailed analysis of its performance gain from each optimization, the bank group-level unit placement, the performance comparison over DDR4, and the area, power, and energy overheads.
A. Darwin Performance
Comparison to CPU Figure 12 shows the speedup of Darwin over the baseline CPU for the basic query operators. Compared to the baseline CPU, Darwin is 9.3x, 9.0x, 17.8x, 43.9x, and 4.0x faster for select, aggregate, sort, project, and join, respectively. The speedup comes from the MLPIM architecture of Darwin, which exploits internal parallelism and optimized data movement. The select, aggregate, and sort operators are executed by the BPUs, utilizing bank-level parallelism, while project and join are executed by the BGPUs, utilizing bank group-level parallelism. We further evaluate the TPC-H queries for end-to-end query processing. Darwin is 5.4x and 13.5x faster than the baseline CPU on Query 1 and Query 6, respectively.
Comparison to previous PIM architectures Mondrian and Newton are evaluated as shown in Figure 12. For both the basic query operators and the TPC-H queries, the two SLPIM architectures show significantly less speedup than Darwin. On average, Newton and Mondrian achieve 9.2x and 4.6x higher throughput than the baseline CPU, respectively, while Darwin achieves a much higher throughput of 15.3x. Mondrian shows the least speedup due to the limited bandwidth gain caused by integrating its logic much farther from the cells. On the other hand, Newton shows no degradation on the select and aggregate operators compared to Darwin, since these operators can be accelerated easily with Newton's all-bank mode, where all PUs execute identical operations. However, sort, project, and join are not sped up simply by having in-memory PUs; they require the further dataflow optimizations provided by CIMT and multi-level data movement. The performance of Newton and Mondrian degrades further for the end-to-end TPC-H queries due to the large number of project operators on intermediate data, limiting their performance gain to only 1.6x and 1.7x, respectively, while Darwin achieves 9.5x higher throughput than the baseline CPU.
B. Effect of Optimizations
To show the effect of each optimization scheme, we evaluate the speedup when each scheme is applied to the non-optimized Darwin (No-opt), as shown in Figure 13. More specifically, No-opt has BPUs that are not optimized with the bitmask register, the sort logic, or the CIMT instructions, and it has the BGPU placed at the rank level. Based on this, we gradually add each optimization. Each of the schemes significantly improves the performance of Darwin. The CIMT instruction, which applies to all operators, shows the largest speedup among the optimizations by significantly reducing the command bandwidth requirement. The optimizations accumulate, and Darwin with all optimizations achieves 9.2x, 5.2x, 14.7x, 3.3x, and 5.8x speedup for each operator compared to the No-opt version, which shows 11.2x, 10.9x, 18.1x, 5.6x, and 2.0x higher performance than the baseline CPU.
Optimization on BPU The difference between the all-bank mode and the PIM instruction is negligible for the BPU, because the BPU receives the same instruction for each of the regular operators to leverage the data parallelism. The BPU is instead optimized to reduce computation for the speedup. First, a bitmask is used instead of OIDs for the select operator, which reduces the memory footprint of the output by 32x. As a result, the throughput of select with the bitmask (Opt-BM) is 11.2x higher than No-opt due to the reduced number of output writes. Second, the compute circuit for sort is optimized to reduce the computation overhead. The optimizations on the sort logic (i.e., the SIMD and permute units, Opt-sort) and on the OPE (Opt-OID) reduce the number of compute commands for the bitonic sort by 4x and 2x, respectively. Opt-sort and Opt-OID achieve 15.5x and 18.1x higher throughput than No-opt, respectively.
Benefit of PIM Instruction In data analytics, the command bandwidth is critical, as the performance can be bounded by the command bottleneck. For the select, aggregate, and sort operators, where the BPU can exploit massive data parallelism, the conventional DRAM command scheme cannot provide any speedup to Darwin since each bank receives its command separately. Both the all-bank mode and the PIM instruction show benefit since all BPUs can receive the same command due to
C. Internal Data Movement of MLPIM
To see the benefit of placing the BGPU at the bank group level rather than the bank or rank level, we further evaluate the performance of the project units depending on their location. Figure 14 shows the evaluation results of the BGPU at the different levels performing the project operator and workload balancing on eight different data distributions (i.e., normal, gamma, chi-squared, and uniform distributions). The project operator is performed at each memory level, and additional data movement is applied to evenly distribute the output data, which will later be fed as input to the following operator. The execution latency decreases as the placement of the BGPU goes lower toward the bank level, but the data movement latency increases since the project operation is executed only within that level. For the uniform distribution, there is no additional data movement overhead since the workload is perfectly balanced; there is only the latency of executing the project operator, so the bank level achieves the highest speedup. However, the uniform distribution is an ideal case that is unlikely to occur. With the normal, gamma, and chi-squared distributions, the BGPU at the bank group level shows the highest speedup, on average 4.0x over the rank level.

The DRAM configurations of Darwin-GDDR6 and Darwin-DDR4 follow Table I. A single chip of GDDR6 is compared with four chips of DDR4, forming a rank, to match the number of banks, BPUs, and BGPUs between them. The capacity of Darwin-DDR4 is 4 times larger than that of Darwin-GDDR6. However, Darwin-GDDR6 achieves up to 3.0x higher throughput than Darwin-DDR4 since GDDR6 provides about two times faster column access and a two times wider I/O bit width than DDR4. Figure 15 (b) shows the speedup of Darwin for the basic query operators as we vary its memory configuration among 1 PC with 4 banks, 1 PC with 16 banks, and 2 PCs with 16 banks. The internal bandwidth and the number of PUs increase linearly with the number of banks. However, the speedup dampens as the number of banks increases due to the increased amount of internal data movement. The average speedup of Darwin when the bank number increases from 4 to 16 with 1 PC is 3.2x. On the other hand, the average speedup when increasing the PCs from 1 to 2 with 16 banks is 1.6x.
F. Area, Frequency, Power, and Energy
The areas of the BPU, BGPU, and rank buffer are measured to be 0.104 mm², 0.043 mm², and 0.078 mm², respectively. Once scaled and summed over all the units, the total area overhead is 3.752 mm², which is only 5.6% of the GDDR6 die area [23].
The energy consumption is evaluated as shown in Figure 16. For the ideal CPU, the energy consumption is measured only for the data movement between the CPU and the memory. Since we assume the CPU is ideal, with unlimited computation capacity and speed, it incurs no delay or energy for operator execution; this guarantees that the peak bandwidth is utilized and that there is no redundant read or write to the same data address. The background energy increases in Darwin due to the longer execution time than the ideal case; however, because of the reduced off-chip movement, the overall energy drops. As a result, the reduced I/O movement lets all operators achieve significant energy savings. Darwin reduces the energy consumption by 45.4%, 10.2%, 67.4%, 47.1%, and 84.7% for join, sort, project, aggregate, and select, respectively.
IX. RELATED WORK

Accelerating Data Analytics The on-chip accelerator Q100 [47] exploits pipelining in query processing with heterogeneous processing cores to minimize memory accesses. Mondrian and Polynesia [7], [12] integrate circuits in the logic die of 3D-stacked memory, which is much farther from the DRAM cells and loses the internal bandwidth. To the best of our knowledge, Darwin is the first proposal that accelerates data analytics operators while reducing the overhead by reusing the hierarchical structure of the main memory system.
X. DISCUSSION
Applying Darwin to other workloads Although Darwin is designed to target data analytics, it can still accelerate other similar workloads that have a large memory footprint, are memory-bound, and especially have wide data dependencies. Deep neural network (DNN) workloads, such as transformers and LSTMs, are applicable to Darwin. Their memory-bound algebra operations, such as matrix-vector and matrix-matrix multiplication, can easily be accelerated with the BPU's SIMD. Furthermore, the multi-level characteristic of Darwin can accelerate the frequent output and partial-sum data movement of these workloads, which is incurred by partitioning data among the memory nodes due to the large weight size. In particular, Darwin can accelerate sparse matrix multiplication, using its efficient in-memory network to gather the non-zero elements and performing the multiplication in the BPU. In addition, Darwin can accelerate recommendation systems, which include gather-and-reduce and fully-connected workloads: Darwin can gather data from a wide range of the memory space in parallel using the BGPU, while the fully-connected layers are accelerated using the BPU's SIMD.
XI. CONCLUSION
We propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics. We addressed the issues in adopting PIM for data analytics through three contributions. First, Darwin reduces the overhead of integrating additional logic in DRAM by reusing the conventional DRAM architecture, fully exploiting the multiple levels of DRAM while maximizing the internal bandwidth. Second, Darwin places the BGPU to mitigate the data movement and load-balancing overhead across the banks while performing the condition-oriented, memory-bound data analytics operators. Third, the CIMT instruction is adopted to address the command bottleneck and enable separate control of multiple PUs simultaneously. The simulation results on the five major data analytics operators, namely select, aggregate, sort, project, and join, show that the GDDR6-based Darwin achieves up to 43.9x speedup over the baseline CPU. Darwin is 85.7% more energy-efficient than the ideal CPU system, while the additional area overhead is only 5.6%.
"year": 2023,
"sha1": "3eae4656851d4d37f71b1134877b6c51f81a6e49",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3eae4656851d4d37f71b1134877b6c51f81a6e49",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Spontaneous Symmetry Breaking and Its Pattern of Scales
Abstract: Spontaneous Symmetry Breaking (SSB) in λΦ^4 theories is usually described as a 2nd-order phase transition. However, most recent lattice calculations indicate instead a weakly 1st-order phase transition, as in the one-loop and Gaussian approximations to the effective potential. This modest change has non-trivial implications. In fact, in these schemes, the effective potential at the minima has two distinct mass scales: (i) a first mass m_h associated with its quadratic curvature and (ii) a second mass M_h associated with the zero-point energy which determines its depth. The two masses describe different momentum regions in the scalar propagator and turn out to be related by M_h^2 ∼ m_h^2 ln(Λ_s/M_h), where Λ_s is the ultraviolet cutoff of the scalar sector. Our lattice simulations of the propagator are consistent with this two-mass picture and, in the Standard Model, point to a value M_h ∼ 700 GeV. However, despite its rather large mass, this heavier excitation would interact with longitudinal W's and Z's with the same typical coupling of the lower-mass state and would therefore represent a rather narrow resonance. Two main novel implications are emphasized in this paper: (1) since vacuum stability depends on the larger M_h and not on m_h, SSB could originate within the pure scalar sector regardless of the other parameters of the theory (e.g., the vector-boson and top-quark mass); (2) if the smaller mass were fixed at the value m_h = 125 GeV measured at LHC, the hypothetical heavier state M_h would then naturally fit with the peak in the 4-lepton final state observed by the ATLAS Collaboration at 700 GeV.
Introduction
Spontaneous Symmetry Breaking (SSB) through a non-zero vacuum expectation value ⟨Φ⟩ ≠ 0 of a fundamental scalar field, the BEH field [1,2], is an essential element of the Standard Model. This original idea has recently been confirmed by the discovery at LHC [3,4] of a narrow scalar resonance with mass m_h ∼ 125 GeV whose characteristics fit well with the theoretical expectations. This has produced the widespread belief that any change of this general picture could only originate from new physics.
However, this conclusion might not be entirely true. In fact, at present, only the gauge and Yukawa interactions of the 125 GeV resonance have been tested. Instead, the possible effects of a genuine scalar self-coupling λ = 3m 2 h / Φ 2 are still below the precision of the observations. This suggests that some uncertainty on the origin of SSB may still persist.
Originally, the underlying mechanism was identified in a classical double-well, scalar potential. However, later, after Coleman and Weinberg [5], the classical potential was replaced by the quantum effective potential V eff (ϕ) which includes the zero-point energy of all fields in the theory.
Yet, SSB could still originate within the pure λΦ 4 sector if the other fields give a negligible contribution to the vacuum energy. To fully appreciate this point, we must start from scratch and consider one aspect which has still to be clarified: the nature of the phase transition in a pure λΦ 4 scalar theory in 4D. More precisely, is it a 2nd-order phase transition or a (weakly) 1st-order transition? Surprising as it may be, this apparently minor change can have substantial phenomenological implications.
To this end, in Sections 2-4 we will give a general overview of the problem and argue that SSB in pure λΦ 4 theory is a weak 1st-order phase transition. Then, in this picture, besides the known resonance with mass m h ∼ 125 GeV, we expect a new excitation of the BEH field with a much larger mass M h ∼ 700 GeV. Since vacuum stability depends on this larger M h , and not on m h , SSB could well originate within the pure scalar sector regardless of the remaining parameters of the theory (as the vector boson or top-quark mass).
However, despite such large mass, this heavier state would interact with longitudinal W's and Z's with the same typical strength of the lower-mass state. As such, it would represent a rather narrow resonance. On this basis, in Sections 5 and 6, we will consider these more phenomenological aspects and their implications for the present LHC experiments.
SSB: 2nd-or (Weak) 1st-Order Phase Transition?
To introduce the problem, let us start with the classical potential (λ > 0) Here, there is no ambiguity. As one varies the m 2 parameter, one finds a 2nd-order phase transition occurring for m 2 = 0. However, in the full quantum theory is this conclusion still so obvious? To this end, one should look at the effective potential and study vacuum stability depending on the physical mass, say m 2 Φ , in the symmetric vacuum at ϕ = 0 Clearly, this is locally stable if m 2 Φ > 0. However, for m 2 Φ > 0, is this symmetric vacuum also globally stable? Or, instead, could the SSB transition be 1st-order and occur for some very small but still positive m 2 Φ = m 2 c > 0? Then, if this were true, the lowest-energy state for the classically scale-invariant case m 2 Φ = 0 would correspond to the broken-symmetry phase with an expectation Φ = 0. This dilemma, on the nature of the phase transition, goes back to the pioneering work of Coleman and Weinberg [5]. After subtracting a ϕ− independent constant and quadratic divergences, in this massless limit of λΦ 4 , their original 1-loop result was where Λ s is a large ultraviolet cutoff. As it is well known, this 1-loop form could equivalently be expressed as the sum of classical background + zero-point energy of a field with a ϕ−dependent mass M(ϕ) given by namely By using this notation, there are non-trivial minima for those values, say ϕ = ±v, where Therefore, since the massless theory exhibits SSB, the 1-loop potential indicates a 1st-order phase transition. Actually, it is a weak 1st-order transition because, in units of the M 2 h in Equation (6), the mass m Φ in the symmetric phase is bounded to be smaller than a critical mass [6] With such extremely small critical mass, SSB emerges as an infinitesimally weak 1st-order transition which could hardly be distinguished from a 2nd-order transition unless one looks on an extremely fine scale.
As is well known [5], though, the standard Renormalization-Group (RG) improvement of the 1-loop potential contradicts this scenario. Indeed, leading-logarithmic terms entering the effective potential are re-absorbed into an effective coupling λ(ϕ) giving a re-summed expression Thus, by restricting to λ(ϕ) > 0, the 1-loop minimum disappears and we would again predict a 2nd-order transition at m 2 Φ = 0. The standard view is that it is this latter point of view to be reliable. To see why things are not so simple, let us consider another approximation scheme. Specifically, the Gaussian effective potential [7,8]. Diagrammatically, this corresponds to the infinite re-summation of all one-loop bubbles with mass M(ϕ) and has a variational nature by exploring the Hamiltonian operator within the Gaussian functional states. For this reason, it is a very natural alternative because a Gaussian set of Green's functions would fit with the "triviality" of λΦ 4 theory in 4D. An early calculation [9] of the Gaussian effective potential for the one-component λΦ 4 theory confirmed the 1st-order scenario in agreement with the 1-loop potential. This is because the existing corrections beyond 1-loop reproduce the some functional form and thus support the same 1st-order picture.
Further calculations, by Bryhaye and one of us [10,11], confirmed that by imposing V Gauss (ϕ = 0) = 0, the Gaussian effective potential for the O(2) and O(N)-symmetric scalar theories exhibits SSB thus again supporting the weak 1st-order picture. In particular, it was noted the non-uniformity of the two limits N → ∞ and ultraviolet cutoff Λ s → ∞.
To fully appreciate the substantial equivalence with the one-loop potential, we observe that the infinite additional terms in the Gaussian effective potential can be expressed in a form analogous to Equation (5) with a simple redefinition of the classical background and of the ϕ−dependent mass in the zero-point energy, i.e., and This shows that the 1-loop potential also admits a non-perturbative interpretation. In fact, by displaying the same basic structure of classical background + zero-point energy, it represents the prototype of all gaussian and post-gaussian calculations [12,13]. At the same time, it also explains why 1-loop and Gaussian approximations, although differing in terms of the bare parameters, can become identical in a suitable renormalization scheme [14,15].
This concordance among various approximations may cast some doubts on the re-summation in Equation (8) and its 2nd-order scenario. Nevertheless, at the time of those works, the precise motivation for the discrepancy was not understood. Thus, the whole problem of SSB in pure λΦ 4 theories did not attract much attention, also due to the lack of definite phenomenological implications.
However, two subsequent theoretical developments, producing new evidence in favor of the 1st-order scenario, have refreshed anew the interest into the whole problem: (i) the first development was concerning the physical mechanisms [6] underlying SSB as a 1st-order transition. In fact, once SSB really coexists with a physical mass 0 < m 2 Φ ≤ m 2 c for the elementary quanta of the symmetric phase, these quanta, the "phions" [6], should be considered to be real particles although, being "frozen" in the broken-symmetry vacuum, they would not be directly observable (like quarks). Now, the conventional picture of λΦ 4 corresponds to a repulsive interaction. Only its strength decreases at large distance. However, then, this is somewhat mysterious. In fact, if the interaction remains always repulsive, how could this broken-symmetry vacuum with Φ = 0, a Bose condensate of phions, have a lower energy than the Φ = 0 empty state with no phions? Here, a crucial observation [6] was that phions, moreover the +λδ 3 (r) contact repulsion, also feel a −λ 2 e −2m Φ r r 3 attraction arising at 1-loop and which becomes more and more important when m Φ → 0 (From the scattering amplitude M, computed from Feynman graphs, one can define an interparticle potential which is nothing but the 3D Fourier transform of M, see Feinberg et al. [16,17].). By including both effects, one can now understand [6] why, for small enough m Φ , the attraction can dominate and the lowest-energy state becomes a state with a non-zero density of phions Bose-condensed in the zero-momentum state.
However, then, if SSB is produced by these two competing effects (short-range repulsion and long-range attraction) we now understand the failure of the standard RG-analysis. In fact, the attractive term originates from the ultraviolet-finite part of the 1-loop graphs. Therefore, to correctly include higher-order effects, one should renormalize both the tree-level contact repulsion and the 1-loop, long-range attraction, as if there were two different coupling constants in the theory. This different procedure has been adopted by Stevenson [18], see Figure 1. By avoiding double counting, he has shown that the simple 1-loop result and its RG-improvement, in this new scheme, now agree very well so that the weak 1st-order scenario is confirmed. Figure 1. The re-arrangement of perturbation theory introduced by Stevenson [18] in his alternative analysis of V eff (ϕ). The quanta of the symmetric phase with mass m Φ , besides the contact +λδ 3 (r) repulsion, also feel a −λ 2 e −2m Φ r r 3 attraction from the Fourier transform of the ultraviolet-finite part of the 1-loop term [6]. Its range diverges in the m Φ → 0 limit and, for m Φ below a critical mass m c , the attraction will dominate and induce SSB. Since higher-order contributions simply renormalize these two basic effects, the resulting RG-improvement, in this new scheme, now confirms the 1st-order phase transition scenario as at 1-loop.
(ii) recent lattice simulations of pure λΦ 4 in 4D [19][20][21], obtained with different algorithms in the Ising limit of the theory (and on the present largest available lattices), indicate that the SSB phase transition is weakly 1st-order.
Since the above arguments (i) and (ii) confirm the 1st-order picture of SSB, and the general validity of the 1-loop and Gaussian approximations to the effective potential, we will now consider in Section 3 some important physical implications of this scenario.
Two-Mass Scales in the Broken Phase
To explore the physical implications of a 1st-order scenario of SSB, we will restrict to the one-loop approximation Equation (5) of V eff (ϕ) which is equivalent to the Gaussian approximation result Equation (9). Equation (5) is just a different way of re-writing Equation (3) but intuitively supports the traditional view where the broken-symmetry phase is a simple massive theory with mass M h as in Equation (6). Thus, one expects that up to small perturbative corrections, this is the mass parameter entering the scalar propagator.
To see why, again, things are not so simple, let us compute the quadratic shape of the effective potential, i.e., its second derivative at the minimum. This other quantity, say m 2 h , has the value . Now, the derivatives of the effective potential are just (minus) the n-point functions for zero external momentum. In particular, one finds Therefore, by expressing the inverse propagator as This means that apparently, it is this smaller mass m 2 h , and not M 2 h , which enters the (low-momentum) propagator. However, now, in the λ → 0 limit, m 2 h and M 2 h are vastly different scales (i.e., do not differ by small perturbative corrections). Thus one may ask: which is the right mass?
To better understand this point, let us sharpen the meaning of M h by using the general relation which expresses the zero-point-energy ("zpe") in terms of the trace of the logarithm of G −1 (p), i.e., Thus, after subtracting a constant and quadratic divergences, to match the 1-loop Equation (5), we can impose appropriate limits in the logarithmic divergent part (i.e., This relation indicates that M 4 h reflects the typical, average Π 2 (p) at non-zero p 2 . Therefore, if we trust in the 1-loop relation , we should observe large deviations in the propagator if we try to extrapolate to higher-p 2 with the 1-particle form In other words, in a 1st-order picture of SSB, the idea of a simple massive propagator seems to be wrong.
To show that these are not just speculations, let us compare with lattice calculations of the scalar propagator in the broken-symmetry phase. The simulation was performed [22] in the 4D Ising limit which has always been considered a convenient laboratory to exploit the non-perturbative aspects of the theory. It is the λΦ 4 in the limit of an infinite bare coupling λ 0 = +∞, as sitting exactly at the Landau pole. As such, for a finite cutoff Λ s , it represents the best possible definition of the local limit for a non-zero, low-energy coupling λ ∼ 1/L (where L = ln(Λ s /M h )). For the convenience of the reader, we will report here the main results of [22].
In the Ising limit, the broken-symmetry phase corresponds to values of the basic hopping parameter κ > κ c , with the critical κ c = 0.0748474(3) [19,20]. We computed the field vacuum expectation value (16) and the connected propagator where with ... we are indicating the average over lattice configurations.
In terms of the Fourier transform of the propagator, the extraction of m h is straightforward, i.e., Instead M h had to be extracted from the data for the Fourier transformed propagator at higher momentum. To this end, we first fitted the data to the 2-parameter form (19) in terms of the lattice squared momentump 2 withp µ = 2 sin p µ /2. The quality of this fit was then studied to understand how reliable the determination M h ≡ m latt is from the higher-momentum region. Finally, the propagator data were re-scaled by the factor (p 2 + m 2 latt ). In this way, deviations from a straight line will show up clearly if a fitted mass M h ≡ m latt fails to describe the lattice data when p → 0.
The results in the symmetric phase, see Figure 2, show that there, with just a single lattice mass, one can describe all data down to p = 0. Figure 2: re-scaled lattice propagator [22] in the symmetric phase at κ = 0.074, as a function of the squared lattice momentum p̂² with p̂_µ = 2 sin(p_µ/2); the mass fitted from the higher-p̂² region, M_h ≡ m_latt = 0.2141(28), describes the data well down to p = 0, and the dashed line is the fitted Z_prop = 0.9682(23).
In the broken phase, for κ = 0.0749, the results for the largest lattice 76 4 are reported in Figures 3 and 4. The larger mass obtained from the higher-momentum fitp 2 > 0.1 was M h ≡ m latt = 0.0933 (28). As one can see from Figure 3, this fitted mass describes the data for not too small momentum. But for p → 0 the deviations from a straight line become highly significant statistically. In this low-p 2 limit, in fact, the data would require the other mass m h = |Π(p = 0)| 1/2 = 0.0769, see Figure 4. The difference between M h = 0.0933 (28) and m h = 0.0769 has the high statistical significance of 6 sigma. More importantly, once m 2 h is directly computed from the zero-momentum limit of G(p) and M h is extracted from its behavior at higher p 2 , the extrapolation of the results toward the critical point [22] is well consistent with the expected increasing logarithmic trend M 2 h ∼ Lm 2 h .
The Relative Magnitude of m h , M h and Φ
As summarized in Section 3, our lattice simulations supports the idea of a scalar propagator which, in the broken phase, interpolates between two different mass scales m h and M h (Two-mass scales also require some interpolating form for the scalar propagator in loop corrections. Since some precise measurements, e.g., A FB of the b-quark or sin 2 θ w from NC experiments [23], still favor a rather large BEH particle mass, this could help to improve the present rather low quality of the overall Standard Model fit). The lattice data are also consistent with the trend M 2 h ∼ m 2 h ln(Λ s /M h ) predicted by the one-loop and Gaussian approximations to the effective potential. Since the two masses do not scale uniformly in the Λ s → ∞ limit (This non-uniform scaling is crucial not to run in contradiction with the "triviality" of λΦ 4 in 4D [22]. In fact, this implies a continuum limit with a Gaussian set of Green's functions and therefore with a massive free-field propagator. Thus, in an ideal continuum theory, there can only be one mass depending on the unit of mass (m h or M h ) adopted for measuring momenta), the question naturally arises about the extension to the Standard Model and their relationship with the fundamental weak scale Φ ∼ (G Fermi √ 2) −1/2 ∼ 246.2 GeV. In fact, it seems that we should now introduce two different coupling constants, say m 2 h / Φ 2 and M 2 h / Φ 2 . However, then, since M 2 h ∼ Lm 2 h m 2 h , are we faced with a weak-or a strong-coupling theory? To approach the problem in a systematic way, let us first return to the one-loop relations Equations (5) and (6) in Section 2 and observe that the vacuum energy depends on M h , not on m h , namely This means that the critical temperature to restore the symmetry, k B T c ∼ M h , and the whole stability of the broken-symmetry phase will depend on M h , not on m h . This remark will be crucial to understand the cutoff dependence of the various scales and to formulate a description of SSB which in principle can be extended to the Λ s → ∞ limit. In fact, since for any non-zero low-energy coupling λ there is a Landau pole Λ s , we will consider the entire set of pairs (Λ s ,λ), (Λ s ,λ ), (Λ s ,λ )...with larger and larger cutoffs, smaller and smaller couplings but all with the same vacuum energy as in Equation (20). This amounts to impose a condition which can be derived from the more general requirement of RG-invariance for the effective potential in the (ϕ, λ, Λ s ) 3-space In fact, for ϕ = ±v, where (∂V eff /∂ϕ) = 0, Equation (21) follows directly from (22). It is important that in this RG-analysis, besides a first invariant mass scale I 1 = M h , if we introduce an anomalous dimension for the vacuum field there will be a second invariant [22] associated with the RG-evolution in the (ϕ, λ, Λ s ) 3-space, namely This invariant fixes a particular normalization (The anomalous dimension of ϕ reflects the fact that from Equation (6), the cutoff-independent combination is λv 2 ∼ M 2 h = I 2 1 and not v 2 itself implying γ = β/(2λ) [22]. This somewhat resembles the definition of the physical gluon condensate in QCD which is g 2 F a µν F aµν and not just F a µν F aµν .) of ϕ and is then the natural candidate to represent the weak scale I 2 (v) = Φ ∼ 246.2 GeV. The minimization of the effective potential is then translated into a proportionality of the two invariants through some constant K, say Such guiding principle indicates that M h and Φ scale uniformly while at the same time, M 2 h ∼ Lm 2 h and Φ 2 ∼ Lm 2 h . 
Therefore, by assuming the theoretical predictions for the ratio m h / Φ , and computing the M h /m h ratio from our lattice data for the propagator, we have extracted the constant K. As shown in [22] such procedure, where the cutoff-dependent L drops out, leads to a final estimate K = 2.92 ± 0.12 or M h ∼ 720 ± 30 GeV (26) which includes various statistical and theoretical uncertainties and updates the previous work of refs. [24,25]. We emphasize that the relation M h = K Φ does not introduce a new large coupling 3K 2 = O(10) which modifies the phenomenology of the broken phase. This 3K 2 is clearly quite distinct from the other coupling λ = 3m 2 h / Φ 2 ∼ 1/L but should not be viewed as a coupling producing observable interactions. Since M 4 h reflects the magnitude of the vacuum energy density, it would be natural to consider K 2 ∼ λL as a collective self-interaction of the vacuum condensate which persists when Λ s → ∞. This original view [14,15] can intuitively be formulated in terms of a scalar condensate whose increasing density ∼ L [6] compensates for the decreasing strength λ ∼ 1/L of the two-body coupling (This view of SSB has some analogy with the occurring of superconductivity in solid-state physics. There, the superconductive phase occurs even for an arbitrary small two-body attraction between the two electrons in a Cooper pair. However, the energy density and the collective quantities of the superconductive phase (as energy gap, critical temperature, etc.) depend on a much larger coupling N obtained by re-scaling with the large density of states at the Fermi surface. This means that the same macroscopic description could be obtained with smaller and smaller and Fermi systems with suitably larger and larger N. In this analogy λ is the counterpart of and K 2 of N).
Instead, λ ∼ 1/L is the right coupling for the individual interactions of the vacuum excitations, i.e., the BEH field and the Goldstone bosons. Consistently with the "triviality" of λΦ⁴ theory, these interactions will become weaker and weaker when Λ_s → ∞.
With this description of the scalar sector, and by using the Equivalence Theorem [26,27], the same conclusion applies to the high-energy interactions of the BEH field with the longitudinal vector bosons in the full g_gauge ≠ 0 theory. In fact, the limit of zero gauge coupling is smooth [28]. Therefore, up to corrections proportional to g_gauge, a heavy BEH resonance will interact with exactly the same strength as in the g_gauge = 0 theory [29]. For the convenience of the reader, this point will be summarized in Section 5. In Section 6, we will instead consider some phenomenological implications for the present LHC experiments.
Observable Interactions for a Large M_h
As anticipated, the quantity 3K² should be understood as a collective self-coupling of the scalar condensate whose effects are re-absorbed into the vacuum structure. As such, it is basically different from the coupling λ defined through the β-function of the theory. For β(x) = 3x²/(16π²) + O(x³), whatever the bare contact coupling λ₀ at the asymptotically large Λ_s, at finite scales µ ∼ M_h this gives λ ∼ 16π²/(3L) with L = ln(Λ_s/M_h). It is this latter coupling which governs the residual interactions among the fluctuations, with very small deviations from a purely quadratic potential for Λ_s → ∞.
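As a rough numerical illustration of the last relation, the sketch below evaluates λ ∼ 16π²/(3L) for a few cutoff values; the choices of Λ_s are arbitrary and are not taken from the paper.

```python
import math

def lambda_at_Mh(Lambda_s_GeV, M_h_GeV=700.0):
    """One-loop estimate lambda ~ 16*pi^2 / (3*L), with L = ln(Lambda_s / M_h)."""
    L = math.log(Lambda_s_GeV / M_h_GeV)
    return 16.0 * math.pi ** 2 / (3.0 * L), L

for Lambda_s in (1e10, 1e16, 1e19):   # illustrative cutoff choices (assumption)
    lam, L = lambda_at_Mh(Lambda_s)
    print(f"Lambda_s = {Lambda_s:.0e} GeV: L = {L:5.1f},  lambda ~ {lam:4.2f}")
```

Even for cutoffs as large as the Planck scale, the coupling stays O(1), i.e., much smaller than 3K² = O(10).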
By introducing the W mass M_w = g_gauge⟨Φ⟩/2 and with the notations of [30], a convenient way [29] to express these residual interactions in the scalar potential is through the rescaled form given there, with r = M_h²/4M_w² = K²/g²_gauge. The two rescaling parameters, which are usually set to unity, take into account the basic difference λ ≠ 3K². Then, one can consider that corner of the parameter space [29], namely large K² but M_h ≪ Λ_s, that does not exist in the conventional view where one assumes λ = 3K².
A possible objection to this scenario might concern its validity in the full gauge theory. In fact, the original calculation [31] in the unitary gauge could give the impression of the opposite view, namely that, with a heavy Higgs resonance of mass M_h, longitudinal W_L W_L scattering is indeed governed by the large parameter K² = M_h²/⟨Φ⟩². Since this is an important point, we will repeat here the main argument of [29].
In the unitary-gauge calculation of W_L W_L → W_L W_L high-energy scattering, the lowest-order amplitude A₀ is formally O(g²_gauge), but one ends up with A₀ ∼ g²_gauge · (1/M_w²) · M_h² ∼ M_h²/⟨Φ⟩². In this chain, g²_gauge comes from the vertices. The 1/M_w² originates from the external longitudinal polarizations ε^(L)_µ ∼ (k_µ/M_w), and the factor M_h² emerges after expanding the Higgs field propagator, 1/(s − M_h²) ≃ 1/s + M_h²/s² + ... Then the leading 1/s contribution cancels against a similar term from the other diagrams (which otherwise would give an amplitude growing with s), and the M_h² from the expansion of the propagator is effectively "promoted" to the role of coupling constant. In this way, one gets exactly the same result as in a pure λΦ⁴ theory with a contact coupling λ₀ = 3K².
However, this is only the tree approximation. To obtain the full result, let us observe that the Equivalence Theorem is a non-perturbative statement which holds to all orders in the pure scalar self-interactions [28]. Therefore, we do not have to worry about re-summing the infinite series of higher-order vector-boson graphs: from the χχ → χχ amplitude at a scale µ for g_gauge = 0 we can deduce the result for the longitudinal vector bosons in the g_gauge ≠ 0 theory. Then, in the present perspective of a large but finite Λ_s, where m_h and M_h now coexist and could be experimentally determined, at µ ∼ M_h the putative strong interactions proportional to λ₀ = 3K² should actually be viewed as weak interactions controlled by the much smaller coupling λ = 3m_h²/⟨Φ⟩² ∼ 1/L. Analogously, the conventional very large width into longitudinal vector bosons computed with the coupling λ₀ = 3K², say Γ_conv(M_h → W_L W_L) ∼ M_h³/⟨Φ⟩², should instead be re-scaled by the ratio λ/(3K²) = m_h²/M_h². In this way, through the decays of the heavier state, the scalar coupling λ = 3m_h²/⟨Φ⟩² ∼ 1/L could finally become visible.
Some Predictions for the LHC Experiments
Let us take seriously the idea of a BEH field with two vastly different mass scales, namely m h ∼ 125 GeV and M h ∼ 700 GeV. Is there any experimental signal from the LHC experiments?
If so, what kind of phenomenology should we expect?
To address these questions, we will use a small but definite piece of experimental evidence: the peak in the 4-lepton final state which is presently observed by the ATLAS Collaboration [32] for an invariant mass µ_4l = 700 GeV. We emphasize that this should be taken seriously. In fact, an independent analysis of these data and their combination [33] with the corresponding ones of the CMS Collaboration indicates an evident excess, over the background, at the level of about 5 sigma.
Of course, the 4-lepton channel is only one decay channel of a hypothetical heavier BEH resonance and, for a more complete analysis, we should also consider the other final states, for instance the decay into two photons, a channel that in the past showed other intriguing evidence at the nearby energy µ_γγ ∼ 750 GeV. However, the 4-lepton channel has the advantage of being experimentally very clean and, precisely for this reason, is called the "golden" channel to detect a possible heavy BEH resonance. At the same time, as in ref. [34], the main effect can be analyzed at a very simple level. For this reason, one can meaningfully start from here.
Let us consider the peak in the number of events observed by ATLAS in the 4-lepton channel for an invariant mass µ_4l = 700 GeV (l = e, µ). From Figure 4a of [32], this corresponds to 3 ≲ n_peak[4l] ≲ 9 (ATLAS, 700 GeV; Equation (36)) above the very small background n_bkg ∼ 1 event. By subtracting this background, we get n[4l] ∼ 5 ± 3 events (Equation (37)). Since the ATLAS efficiency for reconstructed 4-lepton events at large transverse momentum is about 100%, for the given luminosity of 36.1 fb⁻¹ we obtain a peak cross-section σ_peak(pp → 4l) ∼ (0.14 ± 0.08) fb (Equation (38)). For our estimates, we will assume the invariant mass µ_4l = 700 GeV to be the same pole mass M_h = 700 GeV of our heavier excitation of the BEH field. Moreover, if we consider this as a relatively narrow resonance, the corrections due to its virtual propagation should be small [35], and one could approximate the result in terms of on-shell branching ratios as σ_peak(pp → 4l) ≃ σ(pp → M_h) · B(M_h → ZZ) · 4B²(Z → l⁺l⁻). In this relation, the Z-boson branching fraction into charged leptons is known precisely, and one finds 4B²(Z → l⁺l⁻) ∼ 0.0045.
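The arithmetic behind these two numbers is easy to reproduce; the sketch below is our own illustration, assuming a reconstruction efficiency of 1 and the PDG value B(Z → l⁺l⁻) ≈ 3.37% per lepton flavour.

```python
# Peak cross-section from the observed 4-lepton counts (efficiency ~ 1 assumed)
n_lo, n_hi = 3, 9           # observed event range at the 700 GeV peak
n_bkg      = 1              # estimated background events
lumi_fb    = 36.1           # integrated luminosity in fb^-1

n_sig    = 0.5 * (n_lo + n_hi) - n_bkg      # ~ 5 events
dn_sig   = 0.5 * (n_hi - n_lo)              # ~ +/- 3 events
sigma_fb = n_sig / lumi_fb                  # ~ 0.14 fb
dsigma   = dn_sig / lumi_fb                 # ~ 0.08 fb

B_Zll = 0.0337                              # PDG Z -> l+l- per flavour (assumption)
print(f"sigma_peak ~ {sigma_fb:.2f} +/- {dsigma:.2f} fb")
print(f"4*B^2(Z->ll) ~ {4 * B_Zll**2:.4f}")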
Concerning the other branching ratio, B(M_h → ZZ) for M_h = 700 GeV, the only unconventional aspect of our picture concerns the coupling of the heavy BEH resonance to longitudinal vector bosons, which is proportional to λ = 3m_h²/⟨Φ⟩² ∼ 1/L and not to 3M_h²/⟨Φ⟩². Therefore, given a decay width Γ(M_h → ZZ), we could use the conventional estimate for M_h = 700 GeV [36,37], Γ_conv(M_h → ZZ) ∼ 56.7 GeV, and, by replacing the conventional coupling with λ, either obtain m_h from a measured width or, equivalently, compute the width for a given value of m_h. Here, we will follow this latter strategy and assume m_h = 125 GeV, which fixes Γ(M_h → ZZ) accordingly. Thus, to obtain B(M_h → ZZ), we only need to estimate the total decay width. Here, we will retain exactly the other contributions reported in the literature [36,37] for M_h = 700 GeV and the same dimensionless ratios. These input numbers (which have very small uncertainties) will then produce a total decay width and a branching ratio B(M_h → ZZ) ≃ 0.054. Let us now consider the total cross-section σ(pp → M_h) for production of a heavy BEH resonance with mass M_h ∼ 700 GeV. Here, the two main contributions derive from more elementary parton processes where two gluons or two vector bosons VV fuse to produce the heavy state M_h (here VV = WW, ZZ would be emitted by two quarks inside the protons). For this reason, the two processes are usually called the Gluon-Gluon Fusion (GGF) and Vector-Boson Fusion (VBF) mechanisms. The traditional importance of the latter process for large M_h is understood by noticing that the VV → M_h process is the inverse of the M_h → VV decay, and therefore σ(pp → M_h)_VBF can be expressed [38] as a convolution with the parton densities of the same BEH resonance decay width. Thus, if its coupling to longitudinal W's and Z's were proportional to K² = M_h²/⟨Φ⟩², with a conventional width Γ_conv(M_h → WW + ZZ) ∼ 172 GeV for M_h ∼ 700 GeV, the VBF mechanism would become important. However, this coupling is not present in our model, where instead we expect the corresponding width to be re-scaled by m_h²/M_h². For this reason, the whole VBF contribution will also be correspondingly reduced from its conventional value. This is much smaller than the uncertainty in the pure GGF contribution and will be ignored in the following.
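A minimal numerical sketch of this estimate follows. The rescaling factor (m_h/M_h)² and the assumed non-VV partial width (taken here as roughly 28 GeV, a ballpark for the conventional 700 GeV contributions that are kept unchanged) are our own illustrative assumptions, not values quoted in the paper.

```python
# Hypothetical numbers: rescale the conventional VV widths by lambda/lambda_0 = (m_h/M_h)^2
m_h, M_h = 125.0, 700.0                 # GeV
rescale  = (m_h / M_h) ** 2             # ~ 0.032

Gamma_ZZ_conv   = 56.7                  # GeV, conventional value quoted in the text
Gamma_WWZZ_conv = 172.0                 # GeV, conventional WW+ZZ value quoted in the text
Gamma_WW_conv   = Gamma_WWZZ_conv - Gamma_ZZ_conv

Gamma_ZZ    = Gamma_ZZ_conv * rescale   # ~ 1.8 GeV
Gamma_WW    = Gamma_WW_conv * rescale   # ~ 3.7 GeV
Gamma_other = 28.0                      # GeV, assumed non-VV contribution (mainly t-tbar), NOT from the paper

Gamma_tot = Gamma_ZZ + Gamma_WW + Gamma_other
print(f"Gamma(M_h -> ZZ) ~ {Gamma_ZZ:.2f} GeV")
print(f"Gamma_tot        ~ {Gamma_tot:.1f} GeV")
print(f"B(M_h -> ZZ)     ~ {Gamma_ZZ / Gamma_tot:.3f}")
```

With these inputs the total width comes out near 33-34 GeV and B(M_h → ZZ) near 0.054, consistent with the reference values used below.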
In the end, we are left with the GGF term. Here, we will separately adopt two slightly different estimates: on the one hand, the value σ(pp → M_h)_GGF = 800(80) fb of ref. [36], and on the other hand, the value σ(pp → M_h)_GGF = 1078(150) fb of ref. [37]. These values refer to √s = 14 TeV and will be re-scaled by about −12% for the present center-of-mass energy √s = 13 TeV. In the two cases, the errors take into account uncertainties in the normalization scale and in the parametrization of the parton distributions.
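For orientation, the sketch below combines the numbers above into expected 4-lepton yields. This is our own back-of-the-envelope arithmetic, offered only as an illustration; it need not coincide exactly with the entries of Table 1 discussed next.

```python
# Rough re-computation of the expected 4-lepton yields (illustrative only)
B_ZZ      = 0.054      # branching ratio estimated above
fourB2    = 0.0045     # 4*B^2(Z -> l+l-)
rescale13 = 0.88       # ~ -12% going from 14 TeV to 13 TeV

for sigma14_fb, label in ((800.0, "ref. [36]"), (1078.0, "ref. [37]")):
    sigma13 = sigma14_fb * rescale13
    sigma4l = sigma13 * B_ZZ * fourB2               # fb
    for lumi in (36.1, 139.0):                      # fb^-1
        print(f"{label}: sigma(pp->4l) ~ {sigma4l:.2f} fb, "
              f"N(4l) ~ {sigma4l * lumi:.0f} at {lumi} fb^-1")
```

Under these assumptions the predicted yield at 36.1 fb⁻¹ is of order 6 to 8 events, i.e., compatible with the observed n[4l] ∼ 5 ± 3.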
Altogether, for B(M_h → ZZ) = 0.054 and 4B²(Z → l⁺l⁻) ∼ 0.0045, our predictions for the 4-lepton cross-section and the number of events (for luminosities of 36.1 fb⁻¹ and 139 fb⁻¹) are reported in Table 1. Table 1. For M_h = 700 GeV and m_h = 125 GeV, we report our predictions for the peak cross-section σ(pp → 4l) and the number of events at two values of the luminosity. The two total cross sections are our extrapolation to √s = 13 TeV of the values in [36,37] for √s = 14 TeV. As explained in the text, only the GGF mechanism is relevant in our model. From this comparison we deduce that, without introducing any free parameter, our model can easily reproduce the presently observed number of events n[4l] ∼ 5 ± 3. This is why our hypothetical new resonance could naturally fit with the ATLAS peak. At present, this is the only possible conclusion, and a real test of our picture is postponed to the analysis of the entire statistics L = 139 fb⁻¹. If the new M_h ∼ 700 GeV were really there, the peak should become about four times higher and remain well above the background, which is very small at that energy. Thus, the profile of the resonance should become visible, and direct determinations of the total decay width should be feasible. An experimental result Γ_exp(M_h → all) = 33-34 GeV would favor an experimental branching ratio B_exp(M_h → ZZ) close to our reference value 0.054 and, therefore, improve the agreement of our smaller m_h with the value 125 GeV which is measured directly at LHC. Thus, the description of SSB given here would find a first experimental confirmation. | 2020-12-17T09:13:10.686Z | 2020-12-09T00:00:00.000 | {
"year": 2020,
"sha1": "fa17f50f23c3ad7e893f83d5a1729a0e7e8ba99f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/12/12/2037/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4dd2a541e2bad23771d6ae91d6c7f86fe0d2c51b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
154964720 | pes2o/s2orc | v3-fos-license | Farmers' Perception of Performance Performed by Extension Field Workers/Facilitators during Integrated Pest Management Farmer Field School Training Programme in Sindh Province of Pakistan
In connection with environment-friendly farming, potential stakeholders took the initiative and launched the FAO-EU-ADB funded National Integrated Pest Management (Nat-IPM) Programme for Cotton in Pakistan during the years 2001 to 2004 and introduced a new extension training methodology called the Farmer Field School (FFS). The basic principle of FFS training was to enable farmers to be self-sufficient, using IPM practices that are agro-ecosystem friendly. This study examined the performance of agriculture extension field workers/facilitators (EFW/F) in the implementation of IPM-FFS trainings, with special reference to the cotton crop, in selected districts of Sindh province of Pakistan. A survey study was carried out in four districts of Sindh province (Hyderabad, Tando Allahyar, Matiari and Mirpurkhas). The total sample size comprised 144 farmers who were involved in the series of IPM-FFS training sessions. Farmers perceived that EFW/F played an effective role and performed positively in IPM-FFS activities during the training programme. Further, the results of the present study provide a confirmation of the adoption and a validation of IPM-FFS as a successful extension approach in Sindh province of Pakistan.
Introduction
Pakistan is a homeland of cotton (Gossypium hirsutum L.), which is a big source of livelihood for around 1.5 million farmers in rural areas. Cotton is a main source of export capital, accounting for 6.9 percent of value added in agriculture and 1.4 percent of GDP. Pakistan is the world's 4th biggest cotton-producing country after China, India, and the USA. World cotton production was projected at 24.8 million tons during 2010-11 as against 22.01 million tons recorded in 2009-10, an estimated increase of 12.6 percent. Production was expected to continue to increase by 11 percent to a record 27.6 million tons in 2011-12 (GoP, 2011).
Despite being one of the largest cotton-growing countries, Pakistan has low cotton production compared to other countries. Low cotton production is attributed to weather conditions, pest attack and farmers' limited awareness of scientific and pest-curbing techniques. Timely and optimum use of pesticides on cotton is essential to protect the crop from the attack of pests and diseases, but excessive use of pesticides disrupts the growth of cotton, killing cotton-friendly insects and providing an opportunity for harmful pests to attack the crop. It also throws a burden of costs on the growers. Moreover, farmers use a variety of pesticides on cotton to eliminate insects and weeds from their fields, but these agents have the potential to harm human health and the environment (FAO, 2004). Research must provide methods that are affordable to farmers and environment friendly. The Integrated Pest Management-Farmer Field School (IPM-FFS) approach is based on training needs. Farmers participate in the FFS and become part of wide-scale IPM programmes, ranging from local to national research, and analyze production troubles and develop solutions for them at the country level (FAO, 2000). The collective research with farmers involves information about local conditions, the local ecosystem, and weather. The IPM-FFS takes local needs into consideration as well (Linh, 2001).
Various studies regarding Integrated Pest Management (IPM) programmes agreed, in the end, that the Farmer Field School (FFS) strengthens farmers' ecological knowledge (Thiele et al., 2001; Rola et al., 2002; Feder et al., 2004; Reddy and Suryamani, 2005; Tripp et al., 2005). Understanding the crop ecosystem leads to a reduction in pesticide use and, at the same time, increases production and profit, for instance in cotton production systems (Godtland et al., 2004; Khan et al., 2005). The FFS is a training model developed primarily by the Food and Agriculture Organization (FAO) in which farmers gain decision-making power regarding the use of agro-chemicals in their fields. This unique extension approach is action-learning oriented, where farmers are allowed to observe, analyze and make alternative decisions about their crops (Kingsley, 1999).
During the four years 2001 to 2004, Sindh province embraced IPM-FFS as the dominant interface between agriculture extension and farmers. It was assumed that through this new training approach, EFW/F would change the farmers' traditional role from passive learner to active learner. The purpose of this study was to record farmers' perception of the performance performed by EFW/F and to identify the barriers/constraints faced by farmers during the IPM-FFS training programme in selected districts of Sindh province.
Methodology
The literature review indicated that various research designs have been used to measure the perception of farmers, including self-report measures, observations, and personal interviews. In view of this, the proposed study featured a descriptive survey research design. Descriptive survey research has evolved over the years to become a popular methodology among educational researchers (McMillan, 2008).
Four districts of Sindh province were selected as the study area, viz., Hyderabad, Tando Allahyar, Matiari and Mirpurkhas, where IPM-FFSs were established during 2001 to 2004 for cotton through the Nat-IPM programme. A list of the farmers who were trained in the IPM-FFS training programme was obtained from the National IPM programme coordinator, Director General, Agricultural Extension Wing, Hyderabad, Sindh. After obtaining the list, a sample size of 144 was determined using the "Table for Determining Random Sample Size from a Given Population" (Fitz-Gibbon and Morris, 1987; Wunsch, 1986) at a 95% confidence level with a margin of error of ±5%.
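For readers who wish to reproduce such a determination, a common basis for published sample-size tables is Cochran's formula with a finite-population correction. The sketch below is illustrative only: the paper does not report the size of the sampling frame, so the population values used here are hypothetical, and the exact table cited may be constructed slightly differently.

```python
import math

def required_sample(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample size with finite-population correction.
    A common basis for published sample-size tables; the table actually
    used in the study (Fitz-Gibbon and Morris, 1987) may differ slightly."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size (~384)
    return round(n0 / (1 + (n0 - 1) / population))

# Hypothetical sampling-frame sizes; the number of trained farmers is not reported.
for N in (200, 230, 300):
    print(f"population = {N:3d}  ->  required sample ~ {required_sample(N)}")
```

Under these assumptions a frame of roughly 230 trained farmers would yield the study's sample of 144, although the actual frame size is not stated.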
The questionnaire was developed in consultation with IPM-FFS experts and with the help of the available literature. The concepts or ideas were usually measured through different statements on a continuum ranging from negative to positive. A five (5) point Likert scale (1=Never, 2=Rarely, 3=Sometimes, 4=Often and 5=Always) was employed for computing farmers' attendance and application of activities conducted by EFW/F during IPM-FFS training, and twenty-three (23) performance-related statements were developed for measuring the farmers' perception of the overall performance of EFW/F using a Likert scale (1=Strongly unfavourable, 2=Somewhat unfavourable, 3=Undecided, 4=Somewhat favourable and 5=Strongly favourable). The barriers faced by the farmers during IPM-FFS activities were also ranked. The survey was conducted during the period March to September 2009. After several follow-up efforts, a total response rate of 93.75% was obtained. IBM-SPSS version 19 was used for data analysis. Frequency, mean, percentage, and standard deviation were calculated.
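A minimal illustration of how such Likert summaries (category percentages, mean and standard deviation) can be computed is given below; the responses shown are toy values, not the survey data.

```python
from statistics import mean, stdev
from collections import Counter

# Toy responses on the 1-5 Likert scale (illustrative data only)
responses = [4, 5, 4, 3, 4, 4, 5, 2, 4, 4]

counts = Counter(responses)
n = len(responses)
print({k: f"{100 * v / n:.1f}%" for k, v in sorted(counts.items())})   # percentage per category
print(f"M = {mean(responses):.2f}, SD = {stdev(responses):.2f}")        # mean and sample SD
```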
Demographic Information
The demographic characteristics of the sampled farmers are presented in Table 1, which shows that most of the farmers (28.1%) were young. The educational level of the farmers was not good; many of them (27.4%) were educated only up to primary level. Most of them (27.4%) owned land in the range of 11 to 20 acres. A large number of farmers (36.3%) had farming experience in the range of 11 to 20 years, followed by less than 10 years of experience (29.6%). The largest group of farmers (25.9%) had a yearly farm income of more than Rs. 100,000, followed by farm income in the range of Rs. 41,000 to 60,000 (23.0%).
Motivation for participation in IPM-FFS training:
The farmers were asked to disclose the means of their engagement/motivation for participation in the IPM-FFS training programme, and their responses are presented in Table 2. The results show that the majority of the farmers (51.1%) participated in IPM-FFS out of their own interest, 11.9 percent attributed their participation to a request of the farm manager, while 8.9 percent attributed their participation to the order of their landlord. Similarly, 28.1 percent of the farmers perceived that they were motivated by the EFW/F to participate in the IPM-FFS training programme. The results indicate that the farmers had an interest in engaging in IPM-FFS training to improve their farming capabilities. These findings are supported by Khan et al. (2005), who carried out an assessment of the IPM-FFS training programme's effects on farmers' capabilities, practices and profits in the Khairpur district of Sindh province. That study demonstrated that the whole season-long IPM-FFS training developed the farming and decision-making capabilities of farmers.
Regularity of farmers in IPM-FFS training:
Regarding the regularity of the farmers in IPM-FFS training, the results showed that a vast majority of the farmers (83.7%) indicated that they regularly attended the IPM-FFS, which reflects a promising professional attitude of farmers towards the training programme. Braun and Duveskog (2010) stated that IPM-FFS-trained farmers usually become good facilitators because they are practical and knowledgeable about their community.
Farmers' perception of IPM-FFS activities conducted by EFW/F:
The data are reported in Table 3. Mallah and Korejo (2007) showed that most of the activities were 'always' conducted in IPM-FFS training sessions (4.59±0.81). The results of the present study are also in line with those reported by Mallah and Korejo (2007), who noted that the IPM-FFS programme made a visible impact on farmers' understanding; one of the main reasons for the success of this approach is that the decisions are not preplanned but are based on the analysis of the agro-ecosystem carried out by the farmers themselves with the help of facilitators. The mean and standard deviation of the responses on the Likert scale indicate that the farmers were 'Somewhat favourable' towards the performance of EFW/F at the IPM-FFS platform. Similar results were reported by Kenmore (2002), who stated that IPM-FFS is a training approach that trains farmers to compare new techniques in systematic field assessment and prepares extension agents for their new roles as facilitators and representatives of public problems and difficulties such as environmental conservation, health, social involvement and organization. In another report, Bartlett (2005) stated that the FFS training model for extension in Asia has involved over two million farmers in more than a dozen countries, supported by agriculture extension and international agencies. Across Asia, FFS helped hundreds of thousands of farmers to learn about IPM practices and agro-ecological concepts, to avoid indiscriminate use of pesticides, and to increase crop yields.
Ranking of barriers faced by farmer during IPM-FFS training:
The barriers/constraints faced by the farmers during IPM-FFS activities were ranked. According to the farmers' perception (Table 5), time-consuming IPM-FFS activities, lack of incentives, lack of mutual understanding among farmers, a strict and hectic schedule, occasionally the facilitator's behavior, and the discouraging attitude of pesticide/fertilizer dealers were the main barriers/constraints. Despite the problems faced during IPM-FFS, farmers' interest in training shows a realization about the indiscriminate use of pesticides as well as the benefits of environmentally sound IPM practices. Somewhat similar findings were reported by Chukwuone et al. (2006), who described that the major constraints affecting the technology transfer process are extension system lapses, lack of cooperation by farmers, uncertainties experienced in agriculture, and conflicts among farmers.
The ranked barriers/constraints (Table 5) were as follows:
1st - IPM-FFS activities were difficult and time consuming.
2nd - There was no extra benefit of adopting agro-ecologically sound IPM practices.
3rd - There was a lack of participatory approach among farmers during IPM-FFS training.
4th - Participants lost interest in IPM-FFS training due to the strict and hectic schedule.
5th - The facilitator usually did not reply to questions, so it was embarrassing for farmers participating in IPM-FFS training.
6th - The influence of pesticide dealers discouraged FFS participants from following IPM practices.
Conclusion
IPM-FFS programmes have been deployed around the country; however, an assessment with regard to the performance performed by EFW/F was needed. The results of this study revealed that the EFW/F performed positively and effectively in activities during the Integrated Pest Management Farmer Field School (IPM-FFS) training programme in Sindh province, as farmers showed a positive attitude in relation to the overall performance performed by EFW/F. Despite some constraints, the majority of participants indicated that they regularly attended the IPM-FFS and that they had engaged in programme activities out of their own interest in improving their agro-ecologically sound farming skills, with special reference to cotton. This shows that EFW/F created inter-personal trust among FFS participants in the IPM-FFS training programme, which is essential for working mutually and evolving innovations. It is suggested that the farmers can be a good source for transferring the obtained knowledge of agro-ecologically sound IPM practices to their community. In this regard, agriculture extension needs to play an important role to support and persuade the farmers who participated and were trained in the IPM-FFS series of trainings during the years 2001 to 2004.
Table 1: Demographic information of respondents.
Table 2: Farmers' engagement in IPM-FFS training.
Table 3: Farmers' perception of IPM-FFS activities conducted by EFW/F. (P = Percentage, M = Mean, SD = Standard Deviation)
Farmers' perception of overall performance of EFW/F: Twenty-three (23) different statements were developed for measuring the farmers' perception of the overall performance performed by EFW/F during the IPM-FFS training programme, and it was found that on most of the statements the farmers responded 'Somewhat favourable' or 'Strongly favourable', showing a highly positive attitude in relation to the performance performed by EFW/F during the IPM-FFS training programme. The data gathered in this regard (Table 4) indicate that 74.1 percent of the respondents were 'Somewhat favourable' and 19.3 percent 'Strongly favourable' that 'EFW/F were active and energetic during IPM-FFS training'. The statement 'EFW/F involved himself and was flexible in participation in IPM-FFS activities' was rated 'Somewhat favourable' by 58.5 percent of farmers and 'Strongly favourable' by 29.6 percent, while 66.7 percent of respondents were 'Somewhat favourable' and 18.5 percent 'Strongly favourable' that 'EFW/F conducted IPM-FFS activities step by step in an organized manner'. On the statement that 'EFW/F used appropriate methods and kept focus on the ongoing IPM-FFS activities', 71 percent responded favourably.
Table 4: Farmers' perception of EFW/F performance performed in IPM-FFS training. (P = Percentage, M = Mean, SD = Standard Deviation)
It was noted that 63.7 percent of farmers were 'Somewhat favourable' and 17 percent 'Strongly favourable' regarding the 'completion of IPM-FFS activities at the scheduled time by EFW/F'; 48.1 percent of respondents were 'Somewhat favourable' and 21.5 percent 'Strongly favourable' on 'monitoring and evaluation by EFW/F in IPM-FFS to achieve objectives'. Seventy-five percent of farmers were 'Somewhat favourable' and 17.8 percent 'Strongly favourable' regarding the 'soft and polite attitude of EFW/F during IPM-FFS sessions'. Against the statement that 'EFW/F communicated with farmers in the local language', 48.9 percent of participants were 'Somewhat favourable' and 37.8 percent 'Strongly favourable', while 67.4 percent of respondents were 'Somewhat favourable' and 23 percent 'Strongly favourable' that 'EFW/F believed in a two-way communication process so that farmers didn't hesitate'. The data further show that 63 percent of farmers were 'Somewhat favourable' and 20 percent 'Strongly favourable' over the statement that 'EFW/F involved farmers in the decision-making process through a participatory approach', while on the statement that 'EFW/F listened to questions completely and carefully before replying to the participant', 60 percent of farmers were 'Somewhat favourable' and 28.9 percent 'Strongly favourable'. Finally, 57 percent of farmers were 'Somewhat favourable' and 20.7 percent 'Strongly favourable' that 'EFW/F always responded to the participants' questions timely and in a consistent manner'.
Table 5: Rank-wise barriers/constraints faced by farmers. | 2019-02-15T17:37:31.803Z | 2013-07-25T00:00:00.000 | {
"year": 2013,
"sha1": "c0ff0cc6e48e89e786eb326d67b20cfe882c0a2a",
"oa_license": "CCBY",
"oa_url": "http://suslj.sljol.info/articles/10.4038/suslj.v11i1.5867/galley/4645/download/",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c0ff0cc6e48e89e786eb326d67b20cfe882c0a2a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Business"
]
} |
130641048 | pes2o/s2orc | v3-fos-license | Unconventional shale-gas systems: The Mississippian Barnett Shale of north-central Texas as one model for thermogenic shale-gas assessment
AAPG Bulletin, v. 91, no. 4 (April 2007), pp. 475-499. Copyright ©2007, The American Association of Petroleum Geologists. All rights reserved. DOI:10.1306/12190606068
ABSTRACT
Shale-gas resource plays can be distinguished by gas type and system characteristics. The Newark East gas field, located in the Fort Worth Basin, Texas, is defined by thermogenic gas production from low-porosity and low-permeability Barnett Shale. The Barnett Shale gas system, a self-contained source-reservoir system, has generated large amounts of gas in the key productive areas because of various characteristics and processes, including (1) excellent original organic richness and generation potential; (2) primary and secondary cracking of kerogen and retained oil, respectively; (3) retention of oil for cracking to gas by adsorption; (4) porosity resulting from organic matter decomposition; and (5) brittle mineralogical composition.
The calculated total gas in place (GIP) based on estimated ultimate recovery that is based on production profiles and operator estimates is about 204 bcf/section (5.78 × 10⁹ m³/1.73 × 10⁴ m³). We estimate that the Barnett Shale has a total generation potential of about 609 bbl of oil equivalent/ac-ft or the equivalent of 3657 mcf/ac-ft (84.0 m³/m³). Assuming a thickness of 350 ft (107 m) and only sufficient hydrogen for partial cracking of retained oil to gas, a total generation potential of 820 bcf/section is estimated. Of this potential, approximately 60% was expelled, and the balance was retained for secondary cracking of oil to gas, if sufficient thermal maturity was reached. Gas storage capacity of the Barnett Shale at typical reservoir pressure, volume, and temperature conditions and 6% porosity shows a maximum storage capacity of 540 mcf/ac-ft or 159 scf/ton.
INTRODUCTION
Unconventional shale gas has evolved into an important resource play for the United States, accounting for more than 14% of produced gas in the United States by the end of 2004 (Energy Information Administration [EIA], 2004). These are typically source rocks that also function as reservoir rocks and are a class of continuous petroleum accumulations (Schmoker, 1995). An unconventional, continuous petroleum system consists of an accumulation of hydrocarbons that are found in low-matrix-permeability rocks that depend on fracture permeability (either natural or as a result of stimulation) for production and contain large amounts of hydrocarbons, but with low gas recovery factors (Schmoker, 1995).
Shale-gas systems are of two distinct types: biogenic and thermogenic (Claypool, 1998), although there can also be mixtures of the two gas types. Biogenic shale-gas plays, such as the Antrim Shale of the Michigan Basin (Martini et al., 2003), contain dry gas adsorbed to organic matter. After dewatering, these wells will have modest initial gas flow rates in the 50-400 mcf/day (1416-11,327 m 3 /day) range, but with production histories upward of 30 yr.
Several types of continuous, thermogenic shale-gas systems exist, including (1) high-thermal-maturity shales (e.g., Barnett Shale of the Fort Worth Basin); (2) low-thermal-maturity shales (e.g., some New Albany Shale in areas of the Illinois Basin); (3) mixed lithology intraformational systems containing shale, sands, and silts (e.g., Bossier Shale of east Texas); (4) interformational systems where gas is generated in a mature shale and stored in a less mature shale (e.g., Tertiary Waltman Shale Member of the Wind River Basin, Wyoming); and (5) combination plays that have both conventional and unconventional production (e.g., some vertical Anadarko Basin wells producing from Wapanucka and Hunton reservoirs, as well as the Woodford Shale). There may also be mixed systems containing both thermogenic and biogenic gas (e.g., possibly some New Albany Shale gas systems) (Jarvie et al., in press).
The Barnett Shale of the Fort Worth Basin, Texas, has evolved into the preeminent shale-gas resource play because of the long-term development efforts of Mitchell Energy Corporation (now part of Devon Energy) and subsequent operators in the basin (Givens and Zhao, 2004; Steward et al., 2006; Steward, in press). The estimated ultimate recovery (EUR) from this low-porosity and low-permeability black shale is 2.5-3.5 bcf (7.08-9.91 × 10⁷ m³) of gas from horizontal wells drilled in the core producing areas (counties with the highest production to date) (Mortis, 2004). The Barnett Shale is found at depths of generally 6500-8500 ft (1981-2591 m), with an average thickness of about 350 ft (106.7 m) in the core areas, although values range from 50 ft or less to more than 1000 ft (15.2 or less to more than 304.8 m) across the basin (Pollastro, 2003; Pollastro et al., 2003; Montgomery et al., 2005). Although these shale-gas wells have modest gas flow rates compared to conventional plays, they also have modest drilling costs, so the rate of return (ROR) is commonly 100% within 1 yr in the core area and 65% in noncore areas (areas where production is not well established; EOG Resources Inc., 2006).
A C K N O W L E D G E M E N T S
The authors thank AAPG reviewers, including Barry Katz, Ken Peters, and Jerry Sweeney, as well as reviewers from the U.S. Geological Survey, including Chris Schenk and Steve Roberts. All reviewers provided detailed criticisms and suggestions that substantially improved the submitted manuscript. For this, we are most grateful. A point of concern for reviewers was the use of personal communications, unpublished data and presentations, and abstracts. Their concerns are well founded because such materials are not readily available to all readers of AAPG Bulletin and certainly have not been subjected to rigorous peer review. We share those concerns but did use those materials because of the dearth of publications on the Barnett Shale. These references may be subjective and may rely on personal opinions, anecdotal observations, or strictly empirical data. Care should be exercised in extending information from these references until they are verifiable by rigorous study and peer-reviewed publication. In addition, we thank the Oil Information Library of Fort Worth and its curator, Roy English, and the Bureau of Economic Geology, especially Steve Ruppel and colleagues in Austin, as well as James Donnally and Randy McDonald, for their assistance in providing information and access to samples. We also thank the AAPG editors and staff, especially Carol Christopher and Frances Whitehurst, for their editorial assistance and patience. Further thanks go to George Mitchell, Mitchell Energy and employees, Devon Energy, and others who have made this play a success.
Gas storage in the Barnett Shale is primarily as free gas with lesser amounts of adsorbed gas. Gas is derived from both thermogenic cracking of kerogen and from cracking of any retained oil in the shale (Jarvie et al., 2003;Montgomery et al. 2005). Areas where the Barnett Shale is in the oil window (0.60-0.99% R o ) (see appendix for explanation of terms) generally have gas flow rates or EURs that are lower than wells in the core area (Bowker, 2003a) because only oil-associated gas is generated in this maturity window.
Mineralogy appears to be a key factor characterizing the best wells (Bowker, 2003a). The best Barnett Shale production comes from zones with 45% quartz and only 27% clay (Bowker, 2003a). The brittleness of the shale is key to stimulation whereby a fracture network is created, providing linkage between the wellbore and the microporosity (average 6%). Pore throats are typically less than 100 nm in the Barnett Shale (Bowker, 2003a). Although fractures in the Barnett Shale are necessary for good production, macrofractures near major fault zones are less productive because fractures are being filled with carbonate cement and are less responsive to stimulation (Bowker, 2003a). Faults are also conduits for stimulation energy instead of fracturing the indurated shale. Such stimulation generally will not reach much of the microporosity, and the result is poor shale-gas wells.
The Barnett Shale gas system is sealed by limestone above the Barnett Shale and in some areas below the shale. These limestones generally have higher fracture thresholds than the shale itself, thereby providing barriers to stimulation to optimize fracturing of the gas-bearing shale (Martineau, 2001). However, Barnett Shale has been shown to be highly productive in areas where the lower stimulation barrier is not present using horizontal wells and rigorous stimulation without breaking into the underlying brine-bearing limestones of the Ellenburger Formation (Marble, 2004, 2006).
Although it is known that the Barnett Shale is organic rich, has high gas contents, and responds to stimulation efforts, it is not evident why it is such a prodigious shale-gas system. The goal of this article is to understand the geochemical criteria that make the Barnett Shale of the Fort Worth Basin a prodigious gas resource given its unconventional low porosity and permeability reservoir characteristics. To evaluate this hydrocarbon resource system, it is important to understand various geochemical processes and shale characteristics controlling generation, storage, and access to this gas resource.
GEOLOGIC SETTING
The Barnett Shale occurs in a 38-county area of the Fort Worth Basin in north-central Texas, with the main producing areas north and south of Fort Worth, Texas ( Figure 1). Montgomery et al. (2005) and Pollastro et al. (2007) provide a summary of the geologic evolution of the Fort Worth Basin. The Barnett Shale occurs in other nearby basins, such as the Hardeman, Kerr, Marfa, and Permian basins. Age-equivalent shales, as well as underlying Devonian black shales, are present along the eastern flank of the Ouachita thrust front in the Delaware, Arkoma, and Black Warrior basins and along the Appalachian Mountains, extending into the northeastern parts of the United States.
The Barnett Shale can be divided into five lithofacies: (1) black shale, (2) lime grainstone, (3) calcareous black shale, (4) dolomitic black shale, and (5) phosphatic black shale (Henk et al., 2000; Henk, 2005; Hickey and Henk, 2006; Loucks and Ruppel, 2007). All lithofacies yield high gamma-ray responses (>100° API), with the phosphatic shales having the highest response. Micropores in thin sections show no evidence of connectivity, consistent with the low permeability of the shale. Thin sections also reveal the presence of pyrite, scattered calcite shell fragments, occasional silicified Tasmanites (algal fragments), and conodonts. Poorly sorted lime grainstones are indicative of sediment gravity flows. Figure 2 is a generalized Fort Worth Basin stratigraphic column (Flippin, 1982). The Barnett Shale unconformably overlies limestones of the Ordovician Viola Limestone in eastern parts of the basin. The Viola Formation in this area provides the basal seal and stimulation barrier to prevent Ellenburger Group waters from entering the wellbore during completion, although it is now evident that wells can be completed with high flow rates where the Viola Limestone is missing (Marble, 2004, 2006). The Devonian section is absent in the Fort Worth Basin, as is the entire Permian section. The Barnett Shale is conformably overlain by Pennsylvanian Marble Falls Limestone. In the eastern part of the basin, the upper quarter of the Barnett Shale is separated from the lower Barnett Shale by the Forestburg limestone. Original completions in the Barnett Shale by Mitchell Energy were only made in the lower Barnett Shale, and they later put the upper Barnett Shale under stimulation, adding about 25% to the EUR of their wells (Bowker, 2003a). Minor amounts of Cretaceous rock occur in certain parts of the basin, and Tertiary rocks are absent.
BACKGROUND: BARNETT SHALE GAS AND OIL SYSTEMS
A petroleum system comprises various components, including source rock, migration pathway, reservoir rock, seal, and overburden (Magoon and Dow, 1994). Processes inherent to such a system include generation, expulsion, accumulation, overburden deposition, and preservation of hydrocarbons (Magoon and Dow, 1994). In addition, petroleum adsorption and its impact on expulsion and secondary cracking of any retained oil (Stainforth and Reinders, 1990; Thomas and Clouse, 1990; Pepper, 1992; Sandvik et al., 1992) are also important processes that are critical to understanding the gas in place (GIP) in the Barnett Shale.
The Barnett Shale has been buried to sufficient depth and/or exposed to hot fluid flow to reach oil- or gas-generation stages in most parts of the Fort Worth Basin. In areas where the Barnett Shale reaches gas-window thermal maturity (>1.0% R_o) (vitrinite reflectance in oil), and especially above 1.4% R_o, there are large continuous commercial gas resources. The Newark East gas field, now the largest gas field in Texas, has estimated mean gas resources of 26.2 tcf (7.4 × 10¹¹ m³) and has grown from the 87th largest in the early 1990s to the second largest gas field in the United States (EIA, 2006). The Barnett Shale is the primary source for petroleum in the Fort Worth Basin, sourcing conventional reservoir systems with both oil and gas (Jarvie et al., 2003; Montgomery et al., 2005; Hill et al., 2007). The Newark East gas field initially comprised gas produced from the Pennsylvanian Bend Conglomerate (locally known as the Boonsville conglomerate), which is sourced from the Barnett Shale. These reservoirs are charged by gas generated during the oil window at inferred maturities of 0.70-1.00% R_o based on carbon isotopic values (Jarvie et al., 2003; Hill et al., 2007).
Figure 2. Generalized stratigraphic column in the Fort Worth Basin (modified from Flippin, 1982). Used with permission from the Dallas Geological Society.
In addition, the Barnett Shale has sourced petroleum in conventional oil reservoirs, including Ellenburger, Chappel, Strawn, and various other stratigraphically distinct reservoir units, based on high-resolution gas chromatography, carbon isotopes, and biomarkers (Jarvie, 2000; Jarvie et al., 2005; R. J. Hill, D. M. Jarvie, R. M. Pollastro, K. A. Bowker, and B. L. Claxton, 2004, unpublished work; Hill et al., 2007). High-resolution light hydrocarbon data suggest some terrigenous oil input to a few production oils, which is possibly caused by organofacies variations within the Barnett Shale or mixing with Pennsylvanian Smithwick Shale hydrocarbons (Jarvie et al., 2003). The Smithwick Shale has excellent TOC values (1-3 wt.%), but with low initial hydrogen indices (<200 mg HC/g rock), making it a type III gas-prone kerogen (Jarvie and Henk, 2006). No other horizons in the Fort Worth Basin have been identified that have petroleum source potential.
SAMPLES AND EXPERIMENTAL RESULTS
A geochemical database consisting of 315 cores, 488 cuttings, and 6 outcrop samples of the Barnett Shale in the Fort Worth Basin was used to determine its geochemical characteristics including gas-generation potential. Core samples from six different high-thermal-maturity Fort Worth Basin wells obtained from the Texas Bureau of Economic Geology were analyzed at 1-ft (0.3-m) intervals with average values reported in Table 1. Oryx provided core samples from the Oryx 1 Grant well located in Montague County, which is in the middle oil window (0.80% R o ). Cuttings samples are from 35 wells in 17 different counties, with average values for all samples and for lower-thermal-maturity samples (T max < 440jC) reported in Table 1 (see the appendix for definition of Rock-Eval parameters). In addition, six outcrop samples from two locations in the far southwestern Fort Worth Basin provided thermally immature Barnett Shale samples, as does one set of cuttings from the Explo Oil 3 Mitcham well in Brown County.
KEROGEN TYPE
The average original hydrogen index (HI o ) value of immature Lampasas County outcrop samples (1, Figure 1) is 475 mg HC/g TOC, indicative of type II marine oilprone kerogen (Jones, 1984). Cuttings samples from the Explo Oil 3 Mitcham well in Brown County (2, Figure 1) are low thermal maturity ($0.60% R o ) and have lower HI o than the Lampasas outcrops averaging 392 mg HC/g TOC. Thus, measured HI o values for low-thermalmaturity samples range from 392 to 475 mg HC/g TOC. When these low-maturity cuttings and the immature outcrop samples are combined on a plot of Rock-Eval S 2 vs. TOC, the slope of the best-fit line gives HI o of 533 mg HC/g TOC (Langford and Blanc-Valleron, 1990;Cornford et al., 1998) (Figure 3). Thus, HI o covers a range of values from 392 to 533 mg HC/g TOC based on both measured and graphical interpretation of measured data.
Visual assessments of kerogen indicate the presence of 95-100% amorphous (structureless) organic matter with occasional algal Tasmanites, confirming the chemical kerogen type. Minor amounts of terrestrial organics may also be found. Although this kerogen type is associated with anoxia, this can first develop at topographic lows in the basin from density stratification, meaning that there can be minor oxidized organic matter occurring with well-preserved organic matter (Jones, 1984).
The hydrocarbons generated from a siliceous marine type II kerogen are predominantly low-sulfur oil and cogenerated gas in the maturity range of 0.60-0.99% R o . Organic matter from the Barnett Shale generates about 30% gas in the oil window from primary cracking based on laboratory experiments (Jarvie et al., 2003). At increasing thermal maturity, present-day HI (HI pd ) is less than HI o , and HI pd will not be indicative of the original kerogen type, but will provide an indication of the primary products that can be generated (e.g., oil, mixed, wet gas, dry gas).
ORGANIC RICHNESS
Determination of the original total organic carbon ( TOC o ) of a source rock provides a quantitative means to estimate the total volume of hydrocarbons that it can generate depending on kerogen type. Heavily explored areas generally have source rocks that are thermally mature, so it is not straightforward to determine original values. Consideration of the components of TOC assists in understanding how to restore highly mature TOC to TOC o .
Total organic carbon in a source rock comprises three basic components: (1) organic carbon in retained hydrocarbons as received in the laboratory (C_HC); (2) organic carbon that can be converted to hydrocarbons, C_C, called convertible carbon (Jarvie, 1991a) or reactive or labile carbon (Cooles et al., 1986); and (3) a carbonaceous organic residue that will not yield hydrocarbons because of insufficient hydrogen, commonly referred to as inert carbon (Cooles et al., 1986; Jarvie, 1991a), dead carbon, or residual organic carbon (C_R). As organic matter matures, C_C is converted to hydrocarbons and a carbonaceous residue, eventually resulting in a reduced TOC when expulsion occurs. Immature (0.48% R_o) outcrop samples of the Barnett Shale (n = 6) have high TOC values averaging 11.47%. Their average generation potential (Rock-Eval S_2 yield) is 54.43 mg HC/g rock or 1192 bbl of oil equivalent/ac-ft (0.15 m³/m³). However, low-maturity cuttings from the Explo Oil Inc. 3 Mitcham well in Brown County at the far southwestern flank of the basin average 4.67 wt.% TOC with S_2 values of 18.17 mg HC/g rock (398 bbl of oil equivalent/ac-ft or 0.05 m³/m³).
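The paired values quoted above imply a conversion factor of roughly 21.9 bbl of oil equivalent/ac-ft per mg HC/g rock. The short sketch below uses that empirical factor; it is inferred here from the quoted numbers rather than stated explicitly in the paper, and it implicitly bundles the rock-density and oil-density assumptions the authors used.

```python
# Empirical conversion factor implied by the paired values quoted in the text
# (e.g., 54.43 mg HC/g rock <-> 1192 bbl oil equivalent/ac-ft), i.e. ~21.9 bbl/ac-ft per mg HC/g rock
BBL_PER_ACFT_PER_MG_G = 1192.0 / 54.43

def s2_to_bbl_per_acft(s2_mg_per_g):
    """Convert a Rock-Eval S2 yield (mg HC/g rock) to bbl oil equivalent per acre-foot."""
    return s2_mg_per_g * BBL_PER_ACFT_PER_MG_G

for s2 in (54.43, 18.17, 27.84):
    print(f"S2 = {s2:5.2f} mg HC/g rock  ->  {s2_to_bbl_per_acft(s2):6.0f} bbl oil equiv./ac-ft")
```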
The present-day average TOC from 290 high-thermal-maturity Barnett Shale core samples from six wells in the principal producing area of Wise and Tarrant counties (average R_o of 1.67%) is 4.48 wt.% (see Table 1). Because TOC values commonly show lognormal distributions (Cornford et al., 1998), average values may not always be representative of TOC. However, these values are consistent with well-to-well averages that are in the range of 4-5% TOC. Removal of the residual carbon in hydrocarbons (C_HC) contained in Rock-Eval S_1 and S_2 yields a C_Rpd of 4.43 wt.%.
From Figure 3, the x-axis intercept is the amount of dead carbon that does not yield any hydrocarbons (Cornford et al., 1998) and is 1.26 wt.%. However, organic matter decomposition results in C_R at high thermal maturity increasing by about 10-20% versus the original C_R value (Burnham, 1989). The increase in C_R may be from aromatization and condensation reactions (Muscio and Horsfield, 1996) or from the carbon-rich residue of secondary cracking of petroleum to gas. The open-system maturation data (i.e., free of secondary cracking) of Jarvie and Lundell (1991) on low-maturity cuttings showed a C_Rpd intercept of 3.20%.
We developed another approach for determining TOC_o that consists of first determining HI_o based on visual kerogen type percentages and using an average HI_o for each of four kerogen types (averages derived from the range of HI in Jones, 1984) (equation 1, appendix). The transformation ratio (TR_HI) is the change from HI_o to present-day values (HI_pd) that includes a correction for early free oil content from the original production index (PI_o) and the present-day oil content (PI_pd) (Peters et al., 2006) (equation 2, appendix). Once TR_HI is determined, TOC_o can be calculated (equation 3, appendix). From equations 1 to 3, the calculated TOC_o value for the Barnett Shale is 6.41% based on 95% type II and 5% type III kerogen with fractional conversion at 0.95. This yields an average original HI value of 434 mg HC/g TOC and a C_Ro of 4.09 wt.%. C_C of 2.32 wt.% is the amount of organic carbon that is converted to hydrocarbons, which computes to 27.84 mg HC/g rock for S_2o or 609 bbl of oil equivalent/ac-ft (0.082 m³/m³). The change in these values with increasing thermal maturity based on the calibration data from Jarvie and Lundell (1991) and Montgomery et al. (2005) is shown in Table 2.
Figure 3. Plot of generation potential (S_2) vs. organic richness (TOC). Solid large squares are immature to early mature samples that provide measured indications of HI_o and C_R from the slope and x-intercept. Open, small squares are high-maturity (average 1.8% R_o) core data. Solid lines from solid squares to the x-axis are projected decomposition lines from immature to high thermal maturity.
The TOC o (6.41%) is less than estimates that would be made from both Daly and Edman (1985) and Jarvie and Lundell (1991), who used corrections of 50 and 36%, respectively, to compute original TOC values from C R on high-maturity samples for type II kerogens. This is likely caused by excluding the increase in C R because of oil cracking or other reactions in these earlier studies.
Vitrinite Reflectance
High gas content in the Barnett Shale is caused by the volumes of hydrocarbons generated (a result of organic richness, generation potential, and shale thickness), thermal maturity, and retention of a part of liquid hydrocarbons for subsequent cracking to gas. Where lower maturity Barnett Shale is found, gas flow rates are lower, and this is hypothesized to be caused by both lower volumes of generated gas and the presence of residual hydrocarbon fluids that occlude pore throats. The high gas flow rates, achieved in many high-maturity Barnett Shale wells, result from the large increase in gas generation because of both kerogen and oil cracking. Thus, thermal maturity is a key geochemical parameter to assess the likelihood of high-flow-rate shale gas.
Thermal maturity provides an indication of the maximum paleotemperature reached by a source rock. Two basic approaches exist to this determination: visual and chemical. Vitrinite reflectance is the most common approach for the determination of thermal maturity, which is completed by microscopic examination of kerogen or whole rock mounts and recording the reflectivity of the particle via a photomultiplier. For a review of the history and methods of thermal-maturity assessment, the reader is referred to Burgess (1975). Numerous pitfalls exist to determining the indigenous population of vitrinite, and we commonly use additional chemical assessments to supplement visual measurements. These include Rock-Eval T max , organic matter transformation ratio, residual hydrocarbon fingerprints (extract fingerprints), gas composition, and carbon isotopes, when available.
To be a fully useful parameter in assessment of oil and gas generation, thermal maturity must be directly related to the extent of organic matter conversion and, ultimately, the preservation of generated hydrocarbons. Maturity windows are dependent on the rates of decomposition (kinetics) of organic matter. Waples and Marzi (1998) demonstrated that because the kinetics of vitrinite and hydrocarbon generation overlap, they are related, but cannot provide a universal correlation.
Variation in rates of kerogen decomposition by kerogen type was illustrated by Espitalie et al. (1984), and precise measurements for various source rocks were reported by Jarvie and Lundell (2001). For example, using kinetic data from various kerogen types, they found that at 0.80% R_o at a constant heating rate of 3.3°C/m.y., a low-sulfur type II kerogen such as the Barnett Shale would be approximately 27% converted, whereas a typical type III coal would only be about 9% converted, and a high-sulfur, high-oxygen Monterey Formation sample would be about 56% converted. Thus, oil and gas windows vary depending on source rock type and inherent decomposition rates as well as thermal histories.
Organic Matter Conversion (Transformation)
Although maturity parameters such as vitrinite reflectance are empirically related to the oil and gas windows, it is feasible to evaluate conversion directly by measuring changes in organic matter yields, i.e., the extent of kerogen conversion by calculation of the kerogen transformation ratio.
The conversion of organic matter can be assessed by the change in TR HI values from low maturity to high maturity. Commonly referred to as transformation ratio (TR), this term has conflicting meanings in the literature (e.g., Espitalie et al., 1984; Tissot and Welte, 1984; Pelet, 1985). Therefore, we add the subscript HI to transformation ratio (TR HI ) and convert to percent to define how it applies to kerogen conversion calculations. This is the same as the fractional conversion, f, of Claypool (Peters et al., 2006). Assuming an average HI o value of 434 mg HC/g TOC for the Barnett Shale (from equation 1 of the appendix), TR HI can be calculated for any HI pd value. For example, HI pd of 28 mg HC/g TOC (e.g., high-maturity Barnett Shale cores, Table 1) suggests about 93% conversion of Barnett Shale organic matter to hydrocarbons and carbonaceous residue. For the lower-thermal-maturity Oryx 1 Grant core samples with a HI pd of 300 mg HC/g TOC, the TR HI is only 31%, explaining why high amounts of recoverable gas are not found and oil is found in conventional reservoirs in Montague County. We emphasize that HI o and, hence, TR HI are kerogen specific and do not apply to other kerogen types. TR HI can be assessed from extensive analysis of samples of varying thermal maturity or derived by determining HI o from maceral compositions as shown in equation 1.
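As a quick plausibility check on these numbers, the short script below reproduces the simple (uncorrected) form of the calculation, TR HI ≈ 1 − HI pd /HI o ; the production-index correction of the full Claypool formula (see the appendix) shifts the results only slightly. The HI values are those quoted in the text.

```python
def tr_hi_simple(hi_o, hi_pd):
    """Approximate kerogen transformation ratio (in percent) from the drop in
    hydrogen index, ignoring the production-index correction."""
    return 100.0 * (1.0 - hi_pd / hi_o)

HI_O = 434.0   # assumed average original HI for the Barnett Shale (mg HC/g TOC)

# High-maturity cores (Table 1): HI_pd ~ 28 mg HC/g TOC
print(round(tr_hi_simple(HI_O, 28.0), 1))    # ~93.5, i.e. the ~93% conversion quoted above

# Lower-maturity Oryx 1 Grant core: HI_pd ~ 300 mg HC/g TOC
print(round(tr_hi_simple(HI_O, 300.0), 1))   # ~30.9, i.e. the ~31% conversion quoted above
```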
HI pd and TR HI are effective for predicting oil versus gas windows in the Barnett Shale. The sensitivity of TR HI to HI o is critical around approximately 80% conversion of organic matter (onset of primary condensate-wet gas window). As can be seen in the immature Barnett Shale data in Table 1, HI o values can readily vary by ± 50 mg HC/g TOC. However, even with this range of HI o , the error in TR HI is only ± 2.5% (at very high maturity, it is ±1.4%, whereas at middle maturity, it is about ±10%). For the high-maturity Barnett Shale thermogenic gas system, an approximation of TR HI to equivalent vitrinite reflectance values is derived from data points from the earliest oil window to the end of the gas window. The TR HI needed for the earliest gas window is approximately 80%, but the dry-gas window requires values upward of 90%.
To illustrate the effectiveness of this approach to assess oil and gas windows for the Barnett Shale in the Fort Worth Basin, contour maps of both HI pd and TR HI are shown in Figure 4a and b. These maps complement a vitrinite reflectance maturity map with a pyrolysis-based chemical assessment of thermal maturity. Interestingly, these maps suggest potential gas production farther to the west in the Fort Worth Basin than do vitrinite reflectance data (e.g., Montgomery et al., 2005).
For evaluation of plays and prospects, it is useful to overlay the predicted oil and gas windows from visual and chemical interpretations of geochemical data; where the interpretations agree, risk is lower, and where they disagree, risk is higher. Further interpretation or analysis may be required to resolve differences.
COMPOSITION OF RESIDUAL HYDROCARBON FLUIDS
Because conversion of organic matter depends on kerogen type and thermal history, it is also imperative to evaluate the residual hydrocarbons in shale. The presence of liquid hydrocarbons in Barnett Shale is coincident with lower gas flow rates, faster gas production decline curves, and less-recoverable gas. Shales containing paraffins above C 20 or with large unresolved complex mixtures (UCM) on gas chromatographic fingerprints have much lower gas flow rates than shales whose residual hydrocarbons are limited to C 20 and lighter compounds with no UCM. Samples with abundant C 20 + paraffins in extractable organic matter will have lower gas-to-oil ratio (GOR) values consistent with the black-oil maturity window, whereas samples with low C 20 + hydrocarbon contents (<5%) will have higher GOR values (>3500 scf/bbl of oil; 623 m 3 /m 3 ).
A gas chromatographic fingerprint of residual hydrocarbons extracted from the Barnett Shale reveals the presence of problematic petroleum products, such as extended paraffins or UCM, which are hypothesized to occlude pore throats and restrict gas flow (Figure 5a). If the percentage of C 20 and lighter compounds relative to the total area of the gas chromatographic fingerprint is greater than 95% (i.e., only condensate-like liquid hydrocarbons are present, e.g., Figure 5b), then high gas flow rates and high GOR values are predicted, all other factors being equal.
INTERPRETED THERMAL MATURITY
Thermal maturity derived from visual and chemical methods can be compared using a conversion routine and polar plot of various maturation parameters. This requires careful calibration of various thermal maturity parameters. For the Barnett Shale, a range of optimum maturity values to achieve economic gas flow rates is shown in Table 3. A maturity risk plot modified from Jarvie et al. (2005) is a simple means to compare various maturity parameters and their application to initial economic assessment of low-permeability shale-gas systems such as the Barnett Shale or other low-maturity gas systems such as the New Albany or Antrim shales (Figure 6).
COMPUTATION OF POROSITY CAUSED BY ORGANIC CARBON CONVERSION
The thermal conversion of kerogen to petroleum results in the formation of a carbon-rich residue (C R ) and increased porosity in the rock matrix, which impacts gas storage capacity. Although TOC is reported in weight percent, its volume percent is about two times higher. For an average TOC of 6.41 wt.% (mass), the volume percent TOC is about 12.7 vol.% using 1.18 g/cm 3 density for organic matter. When thermal maturation is in the dry-gas window (e.g., > 1.4% R o ), approximately 4.3 vol.% porosity is created by organic matter decomposition (see Table 2).
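The volume bookkeeping behind these figures can be sketched as follows. The organic-matter density of 1.18 g/cm3 is taken from the text; the mineral-matrix density (about 2.5 g/cm3) and the use of the convertible fraction C C /TOC o as the share of organic volume that decomposes are assumptions made here only for illustration.

```python
def toc_volume_percent(toc_wt, rho_om=1.18, rho_matrix=2.5):
    """Convert TOC (wt.%) into an approximate volume percent of the rock."""
    v_organic = toc_wt / rho_om
    v_mineral = (100.0 - toc_wt) / rho_matrix
    return 100.0 * v_organic / (v_organic + v_mineral)

toc_o = 6.41   # original TOC (wt.%)
c_c   = 2.32   # convertible organic carbon (wt.%)

toc_vol = toc_volume_percent(toc_o)        # ~12.7 vol.%
# If the convertible share of that organic volume is generated away as
# hydrocarbons, roughly that much new pore space is created.
porosity_gain = toc_vol * (c_c / toc_o)    # ~4.6 vol.% (the conclusions cite 4.3-4.6 vol.%)
print(round(toc_vol, 1), round(porosity_gain, 1))
```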
During thermal maturation, C R also becomes more dense, reaching 1.35 g/cm 3 for amorphous organic matter (Okiongbo et al., 2005). However, that still does not exclude microporosity in C R . At high maturity, stacked layers of polyaromatic rings are separated by about 0.34-0.80 nm (Oberlin et al., 1980). Behar and Vandenbroucke (1987) report pore sizes of 5-50 nm, depending on kerogen type. These values are typical of the pore dimensions reported for Barnett Shale (Bowker, 2003a).
Figure 5 (caption). Gas chromatographic (GC) histograms and fingerprints (inset) from Barnett Shale solvent extracts in the (a) oil window and (b) gas window. A GC fingerprint is basically a histogram of the yield (y-axis) and distribution (x-axis) of various resolvable compounds from GC analysis. The presence of extended paraffins and a large unresolved complex mixture (UCM) causes gas flow reduction or occlusion in the Barnett Shale.
ADSORPTION OF OIL AND GAS
Gas is stored in shale source rocks in two principal ways: (1) as gas adsorbed (chemical) and absorbed (physical) (or sorbed to include both physicochemical possibilities) to or within the organic matrix and (2) as free gas in pore spaces or in fractures created either by organic matter decomposition or other diagenetic or tectonic processes. Organic richness, kerogen type, and thermal maturity impact the sorptive capacity of organic matter.
Sorption capacity also affects expulsion efficiency. Although expulsion efficiency is generally cited as related to saturation levels in source rock, it is also a function of sorption capacity, which may be the principal control (Pepper, 1992). Adsorptive sites must be filled before expulsion can proceed (Pepper, 1992), which is independent of saturation thresholds. The function of adsorption and saturation thresholds is illustrated by the numerical simulation of hydrocarbon generation and expulsion from the Barnett Shale in the Fort Worth Basin using the PetroMod one-dimensional basin modeling software (IES). In the first simulation, a standard (default) pore saturation-based simulation of 20% is used (Figure 7a). This is equivalent to about 54 mg HC/g TOC. However, values for adsorption in the oil window are estimated to range from 130 to 200 mg HC/g TOC (Pepper, 1992). In the second simulation, an adsorption saturation of 200 mg HC/g TOC (Figure 7b) is used and illustrates the dramatic impact that adsorption plays in the final product composition in the shale-gas reservoir system. Large differences exist in the amount and types of hydrocarbons that were expelled and, consequently, the amount of oil that was retained for cracking to gas. This dramatically impacts the calculated product composition (gas vs. oil) that is present today in the Barnett Shale. Adsorption may also impact the formation of gas by the decomposition of paraffins (alkanes), which have very high bond decomposition energies. Recent work on the cracking of alkane carbon-carbon bonds shows that adsorption of straight-chain alkanes to a solid matrix results in structural flexing of carbon-carbon bonds and breakage at lower-than-expected energies (Sheiko et al., 2006). Adsorption may also provide the intimate contact between generated hydrocarbons and transition metals, for example, which is required for the catalysis of paraffins to methane and a carbonaceous residue (Mango and Jarvie, 2006).
Figure 6 (caption). Polar shale-gas risk plot with various visual and chemical assessments of organic matter conversion or thermal maturity. Low-porosity and low-permeability shales at high levels of organic matter conversion have potential for high gas flow rates (>1 mmcf/day) and will plot in the light-gray hatched area, whereas flow rates will be lower for similar shales at low levels of conversion (dark hatched area). Values for the productive MEC 2 Sims and the nonproductive Oryx 1 Grant well are shown with dashed and dash-dotted lines, respectively. Values for high-maturity shale gas are listed in Table 3.
GAS GENERATION
As a type II oil-prone source rock, what makes the Barnett Shale in the Fort Worth Basin such an exceptional gas system? The first consideration is how large volumes of gas can be formed, and the second is how gas can be stored. Three distinct processes result in the formation of thermogenic gas within shale (Figure 8): (1) the decomposition of kerogen to gas and bitumen; (2) the decomposition of bitumen to oil and gas (steps 1 and 2 are primary cracking); and (3) the decomposition of oil to gas and a carbon-rich coke or pyrobitumen residue (secondary cracking). The latter process depends on the retention or adsorption of oil in the system, which is a key to the large resource potential of the Barnett Shale gas system. Primary kerogen cracking occurs between temperatures of 80 and 180°C (176 and 356°F), for 10 and 90% conversion, respectively, based on kinetic data and average heating rates (Jarvie and Lundell, 2001). Most source rocks reach 50% conversion between 130 and 145°C (266 and 293°F). Secondary oil to gas cracking has been suggested to begin at about 150°C (302°F) depending on heating rate (e.g., Claypool and Mancini, 1990; Waples, 2000). However, based on whole extract (Jarvie, 1991b) and asphaltene cracking kinetics (di Primio et al., 2000), some volatile bitumen components consisting primarily of asphaltenes and resins crack at the same time or soon after their formation from kerogen. Kinetic experiments and liquid chromatographic analysis indicate that approximately 10% of Barnett Shale-sourced oil cracks over the same temperature window as most kerogens. These fractions average 10-20% of Barnett Shale solvent extracts. The balance of Barnett Shale extracts are composed of about 80% paraffins and aromatics that require higher temperatures to decompose (>175°C; >347°F) excluding any adsorption-induced reduction in cracking rates (Sheiko et al., 2006).
For the most part, the Barnett Shale is a closed system; i.e., products are not released immediately after generation. Pressure buildup caused by the generation of hydrocarbon and nonhydrocarbon gases such as carbon dioxide exists. Carbon dioxide is likely derived from the early stages of Barnett Shale organic matter decomposition, although carbonate thermal decomposition and decarboxylation reactions may also increase concentrations of this gas. With increasing thermal maturity, secondary cracking of retained oil results in the formation of hydrocarbon gas, an increase in GOR, a concomitant increase in pressure, and microfracturing. Estimates suggest that 1% oil cracking in a closed system creates sufficient pressure to exceed the fracture threshold of the rock fabric (Gaarenstroom et al., 1993). Thus, microfractures and migration pathways in the Barnett Shale originate, at least in part, from early hydrocarbon and nonhydrocarbon (primarily carbon dioxide and nitrogen) gas generation and secondary cracking of hydrocarbons in the oil and gas windows. This and mass balance considerations suggest episodic expulsion from the Barnett Shale. This process likely occurred many times in the core area, and the range of maturities (0.70-1.0% R o ) of gas samples in the Boonsville field supports this hypothesis (Jarvie et al., 2003; R. J. Hill, D. M. Jarvie, R. M. Pollastro, K. A. Bowker, and B. L. Claxton, 2004, unpublished work; Hill et al., 2007).
Figure 8 (caption). Processes in a source rock leading to oil, gas, and carbon-rich residue (pyrobitumen). High-maturity shale-gas systems derive high gas contents from the indigenous generation of gas from kerogen, bitumen, and oil cracking.
The creation of microporosity from organic matter decomposition appears to reasonably explain the measured porosity values and the residual organic carbon content. Solvent extraction, which will remove most extractable organic matter, does not remove all adsorbed and trapped hydrocarbons from highly mature Barnett Shale, and further analysis of occluded gas demonstrates the presence of high-maturity gas in the MEC 2 Sims well core samples. These microreservoirs are gas-filled compartments at about 1.70% R o , and their lack of connectivity provides an explanation for the efficacy of restimulation, releasing more gas.
Below 1.0% R o , these microreservoirs are filled with retained petroleum consisting of both oil and gas that restricts gas flow rates and increases production decline rates. This may be a function of discontinuous pore throats, or of pore throats constricted by adsorbed hydrocarbons that require elevated energy to break through, an "activated exit" in the sense of Lindgreen (1987); i.e., sufficient energy or pressure is needed to overcome adsorption by high-molecular-weight petroleum constituents, which have higher adsorption activities than gas and occlude pore throats during pressure drawdown. At some locations, paleotemperature and pressure regimes in the Barnett Shale have provided sufficient energy to overcome these activation barriers, resulting in expulsion. Data from a variety of sources demonstrate that compositional fractions within crude oil are not equally amenable to expulsion because of sorption (e.g., McAuliffe, 1980; Stainforth and Reinders, 1990; Sandvik et al., 1992).
Overall, lower gas flow rates and faster production decline rates in the oil window (0.50-0.99% R o ) are likely caused by (1) low quantities of gas generated only from primary kerogen cracking (low GOR); (2) restricted release of hydrocarbons caused by the adsorption and occlusion of pore throats by adsorbed bitumen, asphaltenes, and other components of black oil; and (3) lower initial pressures with faster pressure drawdown, or a combination of these and other factors. However, above 1.0-1.4% R o (paleotemperature of more than 150°C [302°F]), much of the higher-molecular-weight crude oil components have been cracked to gas, shale pores are filled with gas, and adsorption affinity is reduced because of the reduction in size of hydrocarbons. Under these conditions, the Barnett Shale has more gas caused by secondary cracking, and stimulation allows a part of the exposed free and adsorbed gas (high GOR) to escape into the wellbore. Pepper (1992) estimated that adsorption is reduced by a factor of 10 at high maturities from 200 mg HC/g TOC in the oil window to about 20 mg HC/g TOC in the gas window. Instead, this may be a function of the reduced energy by which lower-molecular-weight paraffins are adsorbed, but in either case, gas flow is enhanced. Secondary cracking can occur in other nonshale reservoir rocks such as the Bossier Shale gas system of east Texas, where the presence of pyrobitumen in tight sands and silts indicates such a process (Emme and Stancil, 2002; Chaouche, 2005).
Volumes of Hydrocarbons Generated
The remaining hydrocarbon-generation potential of source rocks is typically measured by Rock-Eval pyrolysis (S 2 ) yields. Based on available data, an average, immature Barnett Shale original pyrolysis or original generation potential (S 2o ) contains 27.84 mg HC/g rock (Table 4). This pyrolysis amount can be converted to barrels of oil equivalent per acre-foot and, using an average thickness of 350 ft (106.7 m), results in a yield of 136.5 MMBO equivalent per section (where a section is 640 ac [2.59 km 2 ]) (8.90 × 10 13 m 3 /m 3 ) or, in gas equivalent, 820 bcf/section (8.93 × 10 3 m 3 /m 3 ).
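The unit conversions behind these volumetrics are reproduced below. The bulk rock density (~2.5 g/cm3), oil density (~0.89 g/cm3), and the ~6 mcf per barrel-of-oil-equivalent factor are assumptions chosen for illustration; they are not values stated in the text, but with them the quoted figures are mutually consistent.

```python
ACRE_FT_M3 = 1233.48     # cubic meters in one acre-foot
BBL_M3     = 0.158987    # cubic meters in one barrel

def s2_to_bbl_per_acft(s2_mg_per_g, rho_rock=2.5, rho_oil=0.89):
    """Convert Rock-Eval S2 (mg HC/g rock) into barrels of oil equivalent per acre-foot."""
    rock_tonnes = ACRE_FT_M3 * rho_rock                 # tonnes of rock per ac-ft
    hc_tonnes   = rock_tonnes * s2_mg_per_g / 1000.0    # 1 mg/g equals 1 kg/t
    return hc_tonnes / rho_oil / BBL_M3

s2o = 27.84                                    # original generation potential (mg HC/g rock)
bbl_per_acft = s2_to_bbl_per_acft(s2o)         # ~607 bbl oe/ac-ft (text: 609)

section_acft     = 640 * 350                   # 640 ac x 350 ft of Barnett Shale
mmbo_per_section = bbl_per_acft * section_acft / 1e6   # ~136 MMBO equivalent (text: 136.5)
bcf_per_section  = mmbo_per_section * 6.0               # ~818 bcf gas equivalent (text: 820)
print(round(bbl_per_acft), round(mmbo_per_section, 1), round(bcf_per_section))
```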
The cracking of oil to gas is limited by the amount of available hydrogen in the system needed to form wet and dry gas. The atomic H/C ratio for oil is about 1.8 H/C depending on composition, whereas methane formation requires 4.0 hydrogens per carbon. Thus, there is about 55% hydrogen shortage in oil when it is cracked to methane. Gases in the Barnett Shale are typically less than 100% methane, and a reasonable average is about 90% methane across the entire productive area. Even at 90% methane, the H/C requirement for condensate wet-gas formation is about 3.8, so hydrogen deficiency is still approximately 53%. Taking hydrogen deficiencies into account, gas-generation potential for the Barnett Shale is about 550 bcf/section (5.99 × 10 3 m 3 /m 3 ) although 148 bcf/section of gas has likely been expelled. However, reductions caused by the expulsion of petroleum will further reduce the gas-generation potential of the Barnett Shale system.
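The hydrogen shortfall is easy to verify; the following lines simply restate the ratios given above.

```python
HC_OIL     = 1.8   # atomic H/C of typical crude oil
HC_METHANE = 4.0   # atomic H/C of methane
HC_WET_GAS = 3.8   # approximate H/C requirement for a ~90% methane gas

shortage_dry = 1.0 - HC_OIL / HC_METHANE   # 0.55 -> "about 55% hydrogen shortage"
shortage_wet = 1.0 - HC_OIL / HC_WET_GAS   # 0.53 -> "approximately 53%"
print(round(shortage_dry, 2), round(shortage_wet, 2))
```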
Expulsion Efficiency
The Barnett Shale has expelled hydrocarbons into conventional oil and gas reservoirs based on the correlation of black oils and condensates to Barnett Shale-produced oils, condensates, and extracts (Jarvie et al., 2003; R. J. Hill, D. M. Jarvie, R. M. Pollastro, K. A. Bowker, and B. L. Claxton, 2004, unpublished work; Hill et al., 2007). In fact, the Boonsville field contains 2-3 tcf (5.66-8.50 × 10 10 m 3 ) of wet gas and condensate, and the combined produced oil in the Fort Worth Basin is more than 2 billion bbl (318 million m 3 ). Accounting for the amount of expelled hydrocarbons is not a simple task because reconciliation of reserves plus losses during migration or to the surface is not readily estimated. An empirical approach uses EUR and recovery factors cited by operators in the Newark East field to evaluate the expulsion efficiency. The EUR for Barnett Shale wells ranges from about 1.0 to 3.0 bcf/well (2.8 to 8.5 × 10 7 m 3 ), depending on whether a well is vertical or horizontal, but a reasonable average is about 1.75 bcf/well. If recovery estimates of 8-12% (Bowker, 2003a) or an average of 10% is used, the GIP per well is 17.50 bcf (4.96 × 10 8 m 3 ). Most wells have been drilled on 55-ac (22-ha) spacing, so the GIP per section is about 204 bcf (5.78 × 10 9 m 3 ). This is higher by about 54 bcf/section than reported estimates of GIP in the Barnett Shale (150 bcf/section [640 m 3 /m 3 ]; Adams, 2003; Bowker, 2003b), although these authors used a thickness of 300 ft (91.44 m) for the Barnett Shale.
Table 4 (best-estimate values).
Original generation potential, S 2o (mg HC/g rock): 27.84
Estimated oil generated from kerogen (70% of total hydrocarbons) (bbl oil/ac-ft): 427
Estimated gas generated from kerogen (30% of total hydrocarbons) (mcf/ac-ft): 1097
Source rock thickness (ft): 350
Primary oil generated from kerogen, converted to gas equivalent (bcf/section): 573
Primary gas generated from kerogen (bcf/section): 247
Total hydrocarbons generated from primary cracking of kerogen (gas equivalent, bcf/section): 820
Expulsion factor: 0.60
Oil expelled (bbl oil/ac-ft): 256
Gas expelled (mcf/ac-ft): 658
Primary oil retained in shale (bbl oil/ac-ft): 171
Primary gas retained in shale (mcf/ac-ft): 439
Correction factor for insufficient hydrogen in oil: 0.47
Gas yield from secondary cracking of retained oil (mcf/ac-ft): 482
Total retained gas (primary plus secondary gas from oil cracking) (mcf/ac-ft): 921
Total retained hydrocarbons under these assumptions (bcf/section): 206
*PVT = pressure, volume, temperature. **EUR from Adams (2001).
The total hydrocarbon-generation potential of Barnett Shale at high thermal maturity (> 1.4% R o ) based on original generation potentials from Rock-Eval pyrolysis, corrected for hydrogen mass balance for mixed wet and dry gas, and an average thickness of 350 ft (106.7 m) is about 550 bcf/section (5987 m 3 /m 3 ). Compared to GIP based on EUR, approximately 324 bcf/ section (3530 m 3 /m 3 ) has been expelled from the Barnett Shale either in the form of gas or oil equivalent. This indicates about 60% expulsion from the Barnett Shale based on the above computed GIP.
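A sketch of the GIP and expulsion bookkeeping used in the last two paragraphs; all inputs are the values quoted above, and the arithmetic is straightforward.

```python
eur_per_well_bcf = 1.75    # average EUR per well (bcf)
recovery_factor  = 0.10    # average of the 8-12% recovery estimates
spacing_ac       = 55.0    # typical well spacing (acres)
section_ac       = 640.0

gip_per_well  = eur_per_well_bcf / recovery_factor   # 17.5 bcf in place per well
wells_per_sec = section_ac / spacing_ac               # ~11.6 wells per section
gip_per_sec   = gip_per_well * wells_per_sec          # ~204 bcf/section

generation_potential = 550.0                           # bcf/section after hydrogen correction
expelled_fraction = 1.0 - gip_per_sec / generation_potential
print(round(gip_per_sec), round(expelled_fraction, 2))  # ~204 and ~0.63, i.e. roughly 60% expelled
```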
Another variable in this calculation is the assumption of original generation potentials. Table 4 shows our best estimate calculation as well as low and high average values based on low-maturity Barnett Shale from Brown County and Barnett Shale outcrop samples collected in Lampasas County, respectively. The average value of these two locations approximates our best values arrived at independently based on HI o and TOC o equations. TOC R values greater than 8% (~12% TOC o ) in Barnett Shale are not common, indicating that the rich outcrop samples are an end member of organic-rich Barnett Shale. In addition, the Explo 3 Mitcham samples are cuttings and are early mature (about 0.62% R o ), with a slight understatement of TOC o and S 2o . We think, therefore, that expulsion efficiencies are about 60%, although the range is likely 50-70%. This is lower than conventional wisdom for source rock expulsion efficiency, but explains the high gas contents of gas-window mature Barnett Shale.
Considering TOC o and its components C C and C R , the expelled carbon (as hydrocarbons), the retained carbon (as gas and some liquids), and the slight increase in C R are all reasonably accounted for, as shown in Figure 9. At 60% expulsion, only 0.91 wt.% carbon is cracked to gas and a carbonaceous residue. The cited hydrogen deficiency results in an increase in C R by 0.31 wt.%, which, when added to the original C R , yields 4.40 wt.% C R at high maturity. This is comparable to the measured average value of 4.48 wt.% for C R from the database of high-maturity Barnett Shale (see Table 1). The remaining 0.32 wt.% of hydrocarbons is cracked to gas at high thermal maturity and yields about 911 mcf/ac-ft (18.5 m 3 /m 3 ) of gas under these assumptions.
Figure 9 (caption). Depiction of TOC components and values that result from thermal maturation of organic matter in Barnett Shale. A part of the TOC o , C C at 2.32 wt.%, is converted to hydrocarbons, whereas there is also a hydrogen-poor component, C R at 4.09 wt.%. With thermal maturation, hydrocarbons are generated and an estimated 60% of the carbon in generated hydrocarbons (C Cex ) is expelled from the Barnett Shale. The expelled products are approximately 70% petroleum and 30% gas. A portion of carbon is not expelled (C Cnex ) as hydrocarbons but is further cracked to gas (C Cgaso ). Carbon in gas totals 0.28 wt.% unexpelled gas from primary cracking of kerogen and 0.31 wt.% carbon in unexpelled oil that was cracked to gas. Additional dead carbon is formed from secondary cracking of oil (C Roc ), yielding a high-thermal-maturity C R of 4.43 wt.%, comparable to the database average for high-maturity cores (4.48 wt.%; see Table 1). Only 0.59 wt.% carbon is retained in the Barnett Shale as gas, but this totals 921 mcf/ac-ft.
GAS STORAGE CAPACITY
Scientists have shown that the Barnett Shale has the capacity to generate enormous amounts of gas, but can these volumes be stored in a low-permeability shale? Modeling gas storage using pressure, volume, and temperature (PVT) properties assuming nonideal gas, typical porosity values (5-8%), 3800 psi (26.2 MPa), and 70°C (158°F) indicates that the storage capacity of the Barnett Shale is also enormous, ranging from 450 to 720 mcf/ac-ft (Table 5). For a 6% porosity shale under these temperature and pressure conditions, there is approximately 159 scf/ton (4.96 cubic meters/metric ton), consistent with the estimates of Mavor (2001).
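A back-of-the-envelope version of this storage estimate is sketched below. The gas deviation factor (Z ≈ 0.9) and the standard-condition molar volume are assumptions made here for illustration, so the result is an order-of-magnitude check on the 450-720 mcf/ac-ft range quoted above rather than a reproduction of Table 5.

```python
R      = 8.314      # J/(mol*K)
V_STD  = 0.02369    # m3 per mole of gas at ~15.6 C and 1 atm
ACFT   = 1233.48    # m3 per acre-foot
M3_MCF = 28.3168    # m3 per mcf

def free_gas_mcf_per_acft(porosity, p_pa, t_k, z=0.9):
    """Free gas held in pore space, expressed at surface conditions (mcf per acre-foot)."""
    mol_per_m3_pore    = p_pa / (z * R * t_k)            # real-gas molar density in the pores
    m3_gas_per_m3_rock = porosity * mol_per_m3_pore * V_STD
    return m3_gas_per_m3_rock * ACFT / M3_MCF

# 6% porosity, 3800 psi (26.2 MPa), 70 C (343 K)
print(round(free_gas_mcf_per_acft(0.06, 26.2e6, 343.0)))   # ~630 mcf/ac-ft, within the quoted range
```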
At maximum burial depth (> 250 Ma), high-maturity Barnett Shale (>1.40% R o ) was at much higher temperatures and pressures. An additional 5000 ft (1524 m) of burial or hydrothermal water movement would likely have elevated temperatures and pressures to about 180°C (356°F) and 8000 psi (55.2 MPa). Oil would be cracked to gas; organic matter decomposition would be near completion, providing additional pore storage capacity; and adsorption would be at a maximum in the burial and thermal history of the Barnett Shale. With uplift, temperature and pressure would be reduced, resulting in less adsorbed gas and easier release of gas.
MINERALOGY
Although mineralogical analysis is generally not included in organic geochemical assessments, it is an important factor in gas production from the Barnett Shale and other tight shale systems that require stimulation. Paraphrasing Bowker (2002), the Barnett Shale produces so much gas because it is brittle and responds to stimulation (and because it has high gas contents). Although the organic matter is capable of generating, retaining, and storing huge amounts of hydrocarbons, gas flow is limited if the individual microreservoir compartments cannot be connected via well stimulation. This brittleness is related to mineralogy, and the Barnett Shale contains high percentages of quartz derived from biogenic silica (data from Gas Research Institute, 1991) ( Figure 10). Although a shale by name and particle size, clay contents range from more than 40 to less than 5%. Clay, quartz, and carbonate contents are highly variable in the Barnett Shale and result in variable fracture gradients (Martineau, 2001), so it would be expected that some zones are more fractured during stimulation than others. Without these mineralogical characteristics, stimulation and fracturing of the Barnett Shale gas system would not be as successful given current technologies.
PROJECTED INITIAL GAS PRODUCTION RATES
With the integration of source rock volumetrics (total generation potential and thickness) and natural and induced fracture presence, gas flow rates can be estimated. A schematic for this is shown in Figure 11 based on Barnett Shale vertical wells. The goal of this schematic is to provide risk assessment. Obviously, a variety of nongeochemical factors affect flow rates, including geological and engineering issues such as the presence of faults and structures, stimulation barriers, stimulation technique and size, and the ability of the rock to fracture.
This flow-risk assessment has been applied to Barnett Shale wells in the Fort Worth Basin in addition to wells in the Delaware and Arkoma basins to predict flow rates from Barnett and Woodford shales. For example, in the core area of the Fort Worth Basin, the Mitchell Energy Corporation 2 T.P. Sims vertical well in Wise County has a maturity value of 1.65% R o , and the well has flowed more than 1 mmcf/day (28.3 × 10 3 m 3 /day). In noncore areas, Barnett Shale wells in the thermal-maturity window of 0.80-0.90% R o have initial production rates of 100-500 mcf/day (2.83-14.2 × 10 3 m 3 /day). More recently, Infinity Oil and Gas has completed horizontal wells in Erath County with mapped maturities of approximately 1.0% R o that have flowed more than 1 mmcf/day (28.3 × 10 3 m 3 /day; Infinity Energy Resources, 2006). However, oil-window thermal-maturity Barnett Shale wells (0.60-0.99% R o ) average less than 500 mcf/day (14.2 × 10 3 m 3 /day) over the first month of production and are not commercial depending on economic conditions (drilling costs, gas prices, etc.). Wells producing from shales in the oil window have lower EUR and ROR because less gas has been generated, and they also have even faster decline rates.
Structurally, it is also necessary to know the location of sink holes, karsts, macrofractures, faults, or other conduits in the sedimentary package. These features conduct stimulation energy away from the shale using current stimulation techniques, resulting in poor stimulation. One of the reasons for acquiring seismic data in the Fort Worth Basin is to identify areas not to drill based on the presence of sink holes or other structurally complex areas (Bowker, 2003a;Marble, 2006), although these may be found to be productive in the future as new completion technologies are developed.
Restimulation of productive wells commonly results in rates exceeding the original production flow rates (Bowker, 2003a). Mitchell Energy Corporation's 2 Sims well in Wise County shows the impact of restimulation at about 122 months as the initial restimulation flow rate exceeded 2 mmcf/day (5.66 × 10 4 m 3 /day) (Figure 12). Changes in stress field orientation following production as well as improved stimulation techniques are the principal reasons for the increase in flow rates (Montgomery, 2004).
CONCLUSIONS
A variety of unconventional shale-gas resource plays and gas types associated with these plays exist. Gases may be biogenic or thermogenic, and although most gas is indigenous, in some cases, shale systems contain migrated gas. Shale-gas resource plays may be extremely tight (low permeability) to highly fractured, with variable bulk mineralogical composition controlling the brittle versus ductile nature of the shale. To evaluate the likelihood of economic shale-gas production, it is essential to determine whether the gas is thermogenic or biogenic and to evaluate the geologic characteristics of the shale-gas system to properly apply geochemical measurements and interpretation. The Barnett Shale is an organic-rich, type II, oil-prone marine shale that originally averaged about 6.41% TOC, with a hydrogen index of about 434 mg HC/g TOC. Computation of original generation potentials yields about 27.84 mg HC/g rock or 609 bbl of oil equivalent/ ac-ft (0.083 m 3 /m 3 ). The low permeability of the Barnett Shale and its adsorptive capacity result in the retention of abundant petroleum, which can be cracked to gas given sufficient thermal maturation. Combined with primary kerogen-to-gas cracking, this yields the high gas contents, accounting for the high GIP and EUR from Barnett Shale gas wells. However, expulsion from the Barnett Shale has occurred as evidenced by gas production from the overlying Boonsville conglomerate as well as oil fields in the less mature northern and western parts of the Fort Worth Basin. Expulsion is estimated to be about 50-70% of the total generation potential of the Barnett Shale, but this estimate is highly dependent on the determination of the original generation potential and GIP.
Because there is insufficient hydrogen (53%) in oil to convert it entirely to gas, the retained gas content is reduced from 324 bcf/section (3530 m 3 /m 3 ) to about 204 bcf/section (2223 m 3 /m 3 ). Free gas is stored in microporosity created by organic matter decomposition, generation-induced microfractures, and any intergranular pores preserved during deposition. With a C Co of 2.32 wt.%, a net porosity increase of 4.3 to 4.6% results from the conversion of kerogen to hydrocarbons. This provides a considerable part of the known porosity of the Barnett Shale and results in a multiplicity of compartmentalized microreservoirs. Subsequently, to achieve gas flow in Barnett Shale wells, rigorous stimulation is required to rupture these microreservoir compartments.
Gas storage in the Barnett Shale at reservoir PVT conditions demonstrates that at 6% porosity, 70°C (158°F), and 3800 psi (26.2 MPa), approximately 159 scf/ton of free gas can be stored.
Figure 11 (caption). Diagrammatic illustration of increasing gas flow rates with increasing source rock organic richness (TOC), thermal maturity, GOR, and fractures found in shale-gas systems. Play economics can be evaluated using projected gas flow rates and EUR versus drilling and development costs.
To locate high-flow-rate thermogenic Barnett Shale gas in the Fort Worth Basin, it is essential to determine the extent of organic matter conversion using visual and chemical means of maturity assessment combined with numerical-simulation models. Vitrinite reflectance is commonly used to assess thermal maturity, but should be complemented by chemical measurements. It is helpful to complete vitrinite reflectance profiles over the entire wellbore because in marine shales, vitrinite particles are sparse. Chemical assessments of organic matter maturity and conversion such as Rock-Eval T max , HI-derived transformation ratios (TR HI ), gas composition, and carbon isotopes are complementary chemical techniques. It is also essential to evaluate residual hydrocarbon products to ensure that no high-molecular-weight black-oil components remain in the system. These components are hypothesized to restrict or occlude flow by adsorption as noted by lower gas flow in less mature (more oily) areas. They disappear at different levels of thermal maturity depending on kerogen type. Thus, a multiparameter visual and chemical assessment of the extent of organic matter conversion should be undertaken to assess shale-gas producibility.
The Barnett Shale has proven to be a commercial thermogenic shale-gas system, but other systems, some with similar geochemical, petrophysical, and mineralogical characteristics, have excellent potential for commercial gas production. Other gas systems with quite different types of gas as well as varying geochemical, petrophysical, and mineralogical characteristics also have good potential for commercial gas and long-term production, but may have lower flow rates compared to Barnett Shale wells in the Fort Worth Basin given current completion technologies. The economics of production of gas from each of these systems are dependent on geological, geochemical, petrophysical, and engineering factors, as well as drilling costs and gas prices. Thus, one shale-gas model cannot be expected to explain all other shale-gas system plays, and each must be studied in its own right with application of appropriate technologies and, sometimes, the enduring patience that was required for commercial development of Barnett Shale in the Fort Worth Basin.
Terms
R o = vitrinite reflectance (in percent)
TOC = total organic carbon (in wt.%)
C C = convertible organic carbon, or carbon in free oil or kerogen in a rock (wt.%)
C R = residual organic carbon, or carbon remaining after pyrolysis (wt.%)
S 1 = free volatile hydrocarbons thermally flushed from a rock sample at 300°C (nominal) (in mg HC/g rock)
S 2 = products that crack during standard Rock-Eval pyrolysis temperatures (300-600°C [nominal]) (in mg HC/g rock)
S 3 = organic carbon dioxide released from rock samples between 300 and 390°C (nominal) (mg CO 2 /g rock)
T max = the temperature at peak evolution of S 2 hydrocarbons (in °C)
HI (hydrogen index) = remaining potential (S 2 ) divided by TOC, × 100 (in mg HC/g TOC)
PI (production index) = free oil content as measured by S 1 divided by the sum of S 1 plus the remaining generation potential (S 2 ), or S 1 /(S 1 + S 2 ) (values from 0.00 to 1.00)
TR HI = transformation or conversion ratio calculated from HI o and HI pd (see equation below)
Equation 1 computes HI o as a maceral-weighted average of representative HI values for each kerogen type: HI o = Σ i (maceral% i × HI i )/100. This equation requires input of maceral percentages from visual kerogen assessment of a source rock. For example, using Barnett Shale that is 95% type II and 5% type III, the calculated HI o value is 434 mg HC/g TOC. At 100% type II, HI o would be 450 mg HC/g TOC. These values are comparable to those measured on immature to low-thermal-maturity Barnett Shale that ranged from 380 to 475 mg HC/g TOC.
Using the equations of Claypool (Peters et al., 2006), the fractional conversion, i.e., the extent of organic matter conversion, can be determined. The fractional conversion, TR HI , is derived from the change in HI o to present-day values (HI pd ) (Espitalie et al., 1984; Pelet, 1985; Peters et al., 2006), where PI is the production index (S 1 /(S 1 + S 2 )) as PI o = 0.02 to PI pd (Peters et al., 2006):
TR HI = 1 − [HI pd × (1200 − HI o × (1 − PI o ))] / [HI o × (1200 − HI pd × (1 − PI pd ))]
This incorporates the formula of Pelet (1985) for computing kerogen transformation, where 1200 is the maximum amount of hydrocarbons that could be formed assuming 83.33% carbon in hydrocarbons. The PI is a ratio of hydrocarbons already formed to the total hydrocarbons (determined from the ratio of S 1 to S 1 + S 2 from Rock-Eval data; Espitalie et al., 1977), where 83.33 is the average carbon content in hydrocarbons and k is a correction factor based on residual organic carbon being enriched in carbon over original values at high maturity (Burnham, 1989). For type II kerogen, the increase in residual carbon C R at high maturity is assigned a value of 15% (whereas for type I, it is 50%, and for type III, it is 0%) (Burnham, 1989). The correction factor, k, is then TR HI × C R .
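For readers who want to reproduce the appendix calculations, the following sketch collects equations 1 and 2. The per-kerogen-type HI values are assumptions back-calculated from the worked example in the text (95% type II plus 5% type III gives 434, and 100% type II gives 450); the type I and type IV values are placeholders. The transformation-ratio function follows the Claypool fractional-conversion formula as given by Peters et al. (2006) and should be checked against that source before being relied on.

```python
# Assumed average original HI by kerogen type (mg HC/g TOC). Types II and III
# are consistent with the worked example in the text; types I and IV are guesses.
HI_BY_TYPE = {"I": 750.0, "II": 450.0, "III": 125.0, "IV": 50.0}

def hi_o_from_macerals(percentages):
    """Equation 1: original hydrogen index as a maceral-weighted average.
    `percentages` maps kerogen type to its visual percentage (summing to 100)."""
    return sum(HI_BY_TYPE[k] * pct for k, pct in percentages.items()) / 100.0

def tr_hi(hi_o, hi_pd, pi_o=0.02, pi_pd=0.10):
    """Equation 2: fractional conversion with the production-index correction
    (Claypool, in Peters et al., 2006); 1200 is the maximum hydrocarbon yield
    assuming 83.33% carbon in hydrocarbons."""
    numerator   = hi_pd * (1200.0 - hi_o * (1.0 - pi_o))
    denominator = hi_o * (1200.0 - hi_pd * (1.0 - pi_pd))
    return 1.0 - numerator / denominator

hi_o = hi_o_from_macerals({"II": 95, "III": 5})    # ~434 mg HC/g TOC
print(round(hi_o), round(tr_hi(hi_o, 28.0), 2))     # 434 and a conversion slightly above 0.95
```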
"year": 2007,
"sha1": "dede7bbdeb32b9c9af3e81632f60263e7c473790",
"oa_license": "CCBY",
"oa_url": "https://doklady.belnauka.by/jour/article/download/526/529",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0dc8b70669380ef84badf902a1648e40fbfabc23",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
An IBSP Description of Sanskrit /n/-Retroflexion
Graf and Mayer (2018) analyze the process of Sanskrit /n/-retroflexion (nati) from a subregular perspective. They show that nati, which might be the most complex phenomenon in segmental phonology, belongs to the class of input-output tier-based strictly local languages (IO-TSL). However, the generative capacity and linguistic relevance of IO-TSL is still largely unclear compared to other recent classes like the interval-based strictly piecewise languages (IBSP: Graf, 2017, 2018). This paper shows that IBSP has a much harder time capturing nati than IO-TSL, due to two major shortcomings: namely, the requirement of an upper bound on relevant segments, and a lack of descriptive succinctness.
Introduction
Research in computational phonology has determined that all phonological patterns fit in the class of finite-state languages (Kaplan and Kay, 1994). The study of subregular phonology explores how characterizations of phonological phenomena can be further restricted by identifying suitable subclasses of the regular languages. This route of study enables us to formally classify the bounds on complexity of phonological computations, which provides new insights for typology and learnability (see Heinz 2018 and references therein).
One phenomenon that has proven to be particularly complex is /n/-retroflexion in Sanskrit, also known as nati. The nasal /n/ undergoes retroflexion whenever it appears immediately before a sonorant and there is a retroflex somewhere to its left. While this interaction of local and nonlocal factors is already unusual, the true complexity of the process comes from various blocking effects. It has been known since Graf (2010) that nati -when viewed as a phonotactic constraint on surface forms -is star-free. Recently, an alternative bound has been established in the form of input-output tier-based strictly local languages (IO-TSL; Graf and Mayer, 2018).
IO-TSL is an extension of the empirically well-supported class TSL (Heinz et al., 2011). Whereas the subclasses I-TSL and O-TSL of IO-TSL enjoy independent empirical support (De Santo and Graf, 2019; Mayer and Major, 2018), IO-TSL seems to be needed for no other phonological phenomena besides nati. In addition, the formal properties of IO-TSL are not well-understood. It isn't even known whether IO-TSL is a subclass of the star-free languages. By contrast, the class of interval-based strictly piecewise languages (IBSP; Graf, 2017, 2018) is properly star-free, handles a wide range of phonotactic phenomena, and has even been applied to syntax (Shafiei and Graf, 2019). For all these reasons, an IBSP description of nati would be preferable to the current IO-TSL description.
In this paper, I argue that nati can be given an IBSP description, but the resulting grammar is much more convoluted than the IO-TSL analysis. While the basic cases of nati are very natural from an IBSP perspective, the interactions of blocking effects muddy this clear picture. The structure of the paper is as follows: IBSP is formally defined in Section 2, adapting the more general format proposed in (Graf, 2018). Section 3 then walks the reader through the nati analysis, starting from the simplest case and refining the IBSP grammar with each new complication. Section 4 reflects on the status of the analysis and what limitations of IBSP make nati so difficult to account for.
Preliminaries
Graf (2017) first defined the class of interval-based strictly piecewise (IBSP) string languages as an extension of the strictly piecewise (SP) languages (Rogers et al., 2010). IBSP enriches SP with locality domains, and the checking of SP-dependencies is limited to these locality domains. IBSP properly subsumes not only SP but also the classes SL and TSL, all three of which play a major role in subregular phonology. Graf (2018) further generalizes the format of locality domains to account for phenomena that had previously been analyzed in terms of I-TSL. Only this more general version can handle nati.
Intuitively, an IBSP interval involves the definition of I) the left and right domain edge, II) a finite number k of open slots, and III) the fillers that can occur between open slots. Fillers and domain edges are defined through k-intervals, also called k-vals. The IBSP grammar also supplies a list of forbidden k-grams. A string is well-formed iff there is no way to instantiate the k-val in such a manner that the configuration of open slots matches a forbidden k-gram.
While IBSP is originally defined in terms of first-order logic, I adopt the newer definition of Shafiei and Graf (2019) as it also subsumes the generalized intervals of Graf (2018).
Definition 2.1 (k-val). A segmented k-interval (k ≥ 0) over alphabet Σ, or simply segmented k-val, is a tuple ⟨L, R, ⟨F i ⟩ 0≤i≤k ⟩ such that:
• L, R ⊆ Σ ∪ {ε} specify the left edge and right edge, respectively, and
• F i ⊆ Σ specifies the i-th filler slot.
Definition 2.2 (IBSP-k). Let Σ be some fixed alphabet and ⋊, $ ∉ Σ two distinguished symbols (the left and right word edge, respectively). An IBSP-k grammar over Σ ∪ {⋊, $} is a pair G := ⟨i, S⟩, where i is a segmented k-val over Σ ∪ {⋊, $} and S ⊆ (Σ ∪ {⋊, $}) k is a set of forbidden k-grams. A string s ∈ Σ * is generated by G iff there is no k-gram u 1 ...u k ∈ S such that ⋊ k s $ k is a member of the language (Σ ∪ {⋊, $}) * · L · F 0 * · u 1 · F 1 * · ... · F k−1 * · u k · F k * · R · (Σ ∪ {⋊, $}) * . The language L(G) is the set of all s ∈ Σ * that are generated by G. A stringset L is IBSP-k iff L = L(G) for some IBSP-k grammar G.
The reader may skip ahead to (1) and (2) for a depiction of a concrete IBSP interval and its application to an illicit string.
In IBSP, all possible instantiations of a locality domain must be evaluated. If at least one of them yields a match for an illicit k-gram, the whole string is discarded. By default, fillers allow each open slot to be arbitrarily far away from the next one. However, adjacency of the i-th and (i+1)-th open slot can be enforced by stipulating F i = ∅. Mixing such empty fillers with normal fillers allows IBSP to capture phonotactic constraints in which local and non-local dependencies interact. As we will see next, this isn't needed for the simplified version of nati, but will be crucial once the full range of facts is considered (Sec. 3.3 and subsequent sections).
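To make these definitions concrete, the following sketch implements a toy IBSP-k checker. It is my own reconstruction for illustration only: segments are single ASCII characters, the ε option for edges is ignored, '#' and '$' serve as word-edge markers, and the final demo previews the Version 1 nati grammar of Sec. 3.1 with 'R' standing in for the retroflex triggers.

```python
import re

def _one(symbols):
    """Regex snippet matching exactly one symbol from a non-empty set."""
    return "[" + "".join(re.escape(s) for s in sorted(symbols)) + "]"

def _star(symbols):
    """Regex snippet for zero or more filler symbols; an empty filler forces adjacency."""
    return "" if not symbols else _one(symbols) + "*"

def generates(string, kval, forbidden, ledge="#", redge="$"):
    """Toy membership check for an IBSP-k grammar <kval, forbidden>.
    kval = (L, R, [F0, ..., Fk]); forbidden is a set of k-grams (length-k strings).
    The string is padded with k copies of each edge marker and scanned for any
    interval instantiation whose open slots spell out a forbidden k-gram."""
    L, R, fillers = kval
    k = len(fillers) - 1
    padded = ledge * k + string + redge * k
    for gram in forbidden:
        parts = [".*", _one(L)]
        for slot, filler in zip(gram, fillers[:-1]):
            parts += [_star(filler), re.escape(slot)]
        parts += [_star(fillers[-1]), _one(R), ".*"]
        if re.fullmatch("".join(parts), padded):
            return False     # some instantiation matches a forbidden configuration
    return True

# Preview of the Version 1 nati grammar (Sec. 3.1), with ASCII stand-ins:
SIGMA = set("aeijkmnRstu")                  # toy inventory; R = retroflex trigger
kval_v1 = ({"R"}, {"$"}, [SIGMA, SIGMA])    # left edge R, one open slot, right edge $
print(generates("kamena", kval_v1, {"n"}))      # True:  no trigger precedes the n
print(generates("manusRjena", kval_v1, {"n"}))  # False: R ... n ... $ is forbidden
```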
Data and Analysis
Nati is a left-to-right long-distance assimilation process with a single trigger, a single target, and several conditions for blocking. While nati is usually described as a process -i.e. a mapping from underlying forms to surface forms -I treat it as a phonotactic phenomenon. That is to say, nati is reanalyzed as a constraint on the distribution of [n] in surface forms, making it a matter of string languages rather than string transductions. This is in line with the previous work done by Graf and Mayer (2018), which will henceforth be referred to as GM.
The discussion starts with the simplest cases of nati and continually refines the IBSP description as new data is considered. The final version is presented in Sec. 3.5.
Several notational conventions will be adopted for the remainder of this paper: Sanskrit examples have their triggers and targets bolded, while active blockers are underlined. All the examples are taken from GM and Ryan (2017). IBSP interval diagrams are represented in a pictorial fashion: domain edges are large, green rectangles, fillers are vertically offset boxes in red, and open slots are blue squares.
Long-distance assimilation
Nati starts out with the basic constraint that a nasal target /n/ becomes [ï] when preceded arbitrarily far to the left by a non-lateral retroflex continuant in {/õ/, /õ " /, /õ: " /, /ù/}. GM formalize this as the constraint "no [n] may appear in the context R . . . ", where R is one of the triggers listed in the preceding sentence.
GM's constraint is easily expressed in terms of IBSP. Our grammar consists of a single forbidden unigram, which is n. The interval spans from R to the right word edge $. Fillers may contain anything except a word edge, which captures that nati cannot apply across word boundaries.
(1) IBSP interval (Version 1)
For the sake of succinctness, the interval above lists the forbidden unigram directly in the open slot. While this is non-standard, I believe it makes the analysis easier to follow once the complexity of the intervals starts to increase. Table 1 lists some data points that are relevant for this base case. The form of the instrumental singular suffix /-e:na/ alternates based on whether the root it attaches to contains a trigger for nati. For the sake of exposition, I also include an illicit nonce variation (indicated by the gloss "N/A").
Table 1. Form | Gloss | Nati? | Licit?
ká:m-e:na | 'by desire' | no | yes
manuùj-e:ïa | 'by human' | yes | yes
manuùj-e:na | N/A | no | no
The reader may wonder why an analogous nonce form ká:m-e:ïa isn't included in Tab. 1. In this nonce form, /n/ would undergo nati without a suitable trigger, which should be illicit. However, this presupposes a view of nati as a process. From the perspective of phonotactics, it is not obvious that this nonce form is actually illicit because [ï] can occur independently of nati. The phonotactics of nati only concern the distribution of [n], not [ï], so only the former need to be considered here.
Let us now see how the locality domain in (1) captures the well-formedness of the first two forms in Tab. 1 while also ruling out the illicit nonce form. First, ká:m-e:na is well-formed because it lacks a retroflex, so there is no suitable left edge for the interval in (1). Hence the locality domain cannot be established at all, so there are no open slot configurations to check against the list of forbidden unigrams. As a result, the string is wellformed.
The second example is manuùj-e:ïa, which does allow for numerous instantiations of the interval. In all of them, the interval spans from [ù] to the right word edge, and the only difference is what segments make up the fillers and which one ends up in the open slot. But since manuùj-e:ïa does not contain any [n], the open slot never matches the forbidden unigram. Consequently, this string is also deemed well-formed. In contrast to the first example, where well-formedness followed from the inability to instantiate any locality domain, this example allows for many distinct instantiations but none of them yield a forbidden configuration of open slots.
This leaves us with the illicit manuùj-e:na. It works exactly like the second case, except that now there is an instantiation that results in a match with the forbidden unigram n. This particular instantiation is depicted below.
(2) IBSP interval: manuùj-e:na
So far, IBSP has not done anything that could not be accomplished by simpler means, e.g. an SP grammar. But as we start adding on conditions and exceptions, IBSP intervals will quickly become indispensable.
Unconditional blocking by intervening coronals
We now turn to the first of the nati-blocking effects: /n/-retroflexion is blocked if a coronal segment appears between trigger and target. The set of relevant coronals includes retroflexes but excludes the glide [j], as the latter is both a sonorant and a coronal; see Ryan (2017) for further discussion. In GM, the forbidden context for [n] is updated to R C̄ ... , where C̄ matches every segment that is not a coronal, including [j]. To represent this in IBSP, we modify the first filler in (1) so that it may not contain any coronals either. If a string contains a coronal, it must go in the open slot or the second filler. Either way, no subsequent [n] can appear in the open slot, and consequently the string will be deemed well-formed.
At the same time, strings without an intervening coronal will still be judged illicit. This is illustrated below for the nonce form Vaõm-ana:nam.
(4) IBSP interval: Vaõm-ana:nam (instantiated with left edge õ, fillers restricted to ¬$,¬C and ¬$, an [n] in the open slot, and right edge $; the initial Va lies outside the interval)
Note that [ï] itself is a coronal blocker, so any subsequent [n] in a word loses its eligibility as a target for nati. The only exception to this is geminate /nn/ sequences where both /n/ become retroflexed. However, this could also be treated as a separate process of progressive local assimilation. I put this issue aside for now, but it will be revisited in Sec. 4.
Mandatory adjacency to sonorant
In order for /n/ to undergo nati, it must also be immediately followed by a vowel, a glide, [m], or [n] itself. More succinctly, the following segment must be a non-liquid sonorant (Whitney, 1889). For example, in the form bõahman, nati does not apply as [n] occurs at the very end of the word without any subsequent sonorant. Similarly, nati does not apply in caõ-a-n-ti, in this case because [t] is not a sonorant. Sanskrit has some nasals besides [m] and [n] that are non-liquid sonorants, but as those cannot follow [n] for independent reasons (Emeneau, 1946) they do not matter for the purposes of this paper.
Table 3. Form | Gloss | Nati? | Sonorant? | Licit?
caõ-a-n-ti | 'wander (3Pl)' | no | no | yes
bõahman | 'brahman' | no | no | yes
bõahmana | N/A | no | yes | no
To capture this, the grammar in (3) must be extended: the list of illicit unigrams is now expanded to illicit bigrams. It is no longer just [n] that is forbidden, but rather any bigram of the form nS. Keep in mind that coronal blocking is still active, though.
The descriptor none in the second filler of (5) indicates that F 1 = ∅. That is to say, this filler cannot contain any symbols at all (which means that it isn't much of a filler). Consequently, the first and second open slot must always be adjacent. Let us verify that the first two examples in Tab. 3 are still well-formed given the grammar in (5). Below is an example of one possible interval established in caõ-a-n-ti. At the same time, bõahmana is correctly ruled out as illicit.
Conditional blocking by preceding velar and labial plosives
Coronal consonants are not the only blockers of nati: velar and labial plosives also block the process, but only if I) the plosive immediately precedes the target nasal, and II) a left root boundary ( √ ) occurs somewhere between the trigger and the plosive. Blocking is contingent on both conditions being met, as is exemplified by the data in Tab. 4. In põa-√ mi:ï-a:-ti, nati still occurs across a left root boundary due to the absence of a plosive immediately before /n/. In √ õug-ïá, nati can target an n after an immediately preceding velar plosive /g/ because the left root boundary does not occur between the triggering retroflex and the plosive. Only in (ab h i-)põa-√ g h n-an-ti does nati fail as there is both a plosive and a root boundary, both of which occur in the relevant positions.
Table 4. Form | Gloss | Nati? | Licit?
põa-√ mi:ï-a:-ti | 'vanishes (3s)' | yes | yes
√ õug-ïá | 'break (pass. part.)' | yes | yes
(ab h i-)põa-√ g h n-an-ti | 'broken' | no | yes
In response to this additional complication, GM update the banned context to Rα... S. Here α is any string that neither contains a coronal nor matches ... √ ...P, with P denoting a velar or labial plosive. It is at this point that the complexity of our IBSP treatment ramps up significantly. We must now introduce open slots whose only purpose is to be sensitive to the conditional presence of certain segments. By setting up the fillers in such a way that root boundaries and immediately preceding plosives can only go into open slots, we can ensure that the grammar is always aware of these segments if they occur in the string. The list of forbidden k-grams is then set up in such a fashion that open slot configurations that start with a root boundary and a plosive are exempt from nati. This is a very unusual use of open slots and fillers, and I am unaware of any other IBSP analysis that has to resort to this trick.
The concrete steps are as follows. First, two additional open slots must be included between the trigger and target. Open slot 1 detects the presence of a left root boundary somewhere arbitrarily to the left of [n]. Open slot 2 detects the presence of a velar/labial plosive immediately before an [n]. For readability, graphical depictions of longer intervals will now be broken up across two lines. The filler before the third open slot is set to none so that it can only be filled by whatever segment immediately precedes /n/. The fillers surrounding the first open slot are more complex. The ban against coronals is carried over from coronal blocking, but in addition these filters may not contain a root boundary either. As a result, a root boundary that occurs somewhere between the triggering retroflex and a suitable plosive must go into the first open slot. The conjunction of all these factors ensures that if a string contains a suitable root boundary and plosive, they will always occur in the first two open slots.
In the next step, we expand the list of forbidden bigrams of the form nS to forbidden 4-grams of the form φnS. Hence φ corresponds to any combination of segments that matches one of the three conditions above.
If the first two open slots in an instantiated interval do not match φ, nati won't be enforced, capturing the described blocking effect. This is illustrated below for (ab h i-)põa-√ g h n-an-ti.
(9) IBSP interval: (ab h i-)põa-√ g h n-an-ti
Any configuration where the first two open slots are not √ and a plosive will match φ, triggering a nati violation if the remaining two open slots are filled by /n/ and a sonorant. As a concrete example, consider the nonce form põa-√ mi:n-a:-ti.
(10) IBSP interval: põa-√ mi:n-a:-ti
The reader is urged to verify for themselves that the remaining forms in Tab. 4 are handled correctly by this grammar. One additional wrinkle is that the introduction of new open slots has created an "escape hatch" for coronals. In previous versions, a coronal had to go into the first or second open slot, or the third filler. But these are now the third and fourth open slot and the fifth filler. While coronals are still banned in the first and second filler, they could go into the first or second open slot. But since φ currently matches coronals, too, we no longer capture coronal blocking. Fortunately, the fix is easy. We further restrict the shape of φ so that it does not match any open slot configuration with a coronal. Overall, this leaves only a restricted set of patterns for φ. Given a suitable list of segments for Sanskrit, φ can be compiled out into a list of bigrams. These bigrams are then prefixed with every possible instantiation of nS to arrive at the list of forbidden 4-grams.
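The compilation step can be pictured as a product over segment classes, as in the sketch below. The class memberships are ASCII placeholders rather than the real Sanskrit inventory, and the φ condition is reduced to its core (no coronals, and no root-boundary-plus-plosive pair), so this only illustrates the bookkeeping, not the actual grammar.

```python
from itertools import product

ROOT      = {"+"}                    # stand-in for the left root boundary
PLOSIVES  = {"k", "g", "p", "b"}     # velar and labial plosives
CORONALS  = {"t", "d", "s", "n"}
SONORANTS = {"a", "i", "u", "m", "n", "w"}
SIGMA     = ROOT | PLOSIVES | CORONALS | SONORANTS | {"h"}

def phi_pairs():
    """Fillings of open slots 1 and 2 under which nati is still enforced:
    no coronal anywhere, and not the blocking pair (root boundary, plosive)."""
    for x, y in product(SIGMA - CORONALS, repeat=2):
        if x in ROOT and y in PLOSIVES:
            continue                  # conditional blocking: nati is exempt here
        yield x + y

forbidden_4grams = {pair + "n" + son for pair in phi_pairs() for son in SONORANTS}
print(len(forbidden_4grams))          # every phi-instantiation prefixed to an nS bigram
```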
Conditional blocking by following retroflex
Even though the grammar in (8) is already fairly complicated, it still does not handle the last layer of nati: if a retroflex appears arbitrarily far to the right of the target /n/, /n/-retroflexion may be blocked. Blocking only occurs when both of the following two conditions are met: I) a left root boundary intervenes between the trigger and the target, and II) there is no coronal between the target /n/ and blocking retroflex. Condition II) is particularly peculiar. Essentially, the appearance of a coronal consonant between /n/ and its following retroflex blocks the blocking of nati by said retroflex, so that the process applies as usual.
We can follow the same approach as in section 3.4 to handle this complication. That is to say, we include yet another two conditional slots following the target nasal and its mandatory adjacent sonorant. As the interval now gets exceedingly long, graphical depictions have to be broken up again across multiple lines. We then expand the list of forbidden 4-grams to forbidden 6-grams. The 4-gram pattern φnS is expanded to φnSφ′. Just like φ describes the illicit segments for open slots 1 and 2, φ′ handles open slots 3 and 4 in (11). However, φ′ cannot be described independently of φ, as the relevance of slots 3 and 4 for blocking depends on the presence of a root boundary in open slot 1. Hence the options for φ and φ′ have to be specified in conjunction: (11), with the list of forbidden 6-grams above, is the final version of the IBSP grammar for nati (although other potential variants are discussed in Sec. 4). This is a good point to reevaluate some of the earlier data points.
3. Whatever implicational relations hold between the relevant segments are compiled out into a list of forbidden k-grams.
While each step is conceptually simple, the sheer number of open slots and potential combinations of segments make an IBSP analysis of nati a daunting task.
In fact, the analysis presented here still involves major simplifications. As mentioned in Sec. 3.2, geminate /n/ becomes geminate /ï/ under nati. This is not captured by the current grammar, but corresponding modifications could be made. If geminate /n:/ is modeled as underlying /nn/, then the list of forbidden 6-grams can be modified to also block /ïn/. Then /ïï/ would be the only possible surface form. If, on the other hand, /n:/ is a single symbol, then the 6-grams must be modified such that /n:/ is forbidden even if the following segment is not a sonorant (since the geminate, metaphorically speaking, acts as its own sonorant). Needless to say, the resulting list of forbidden 6-grams obfuscates the relevant dependencies even more.
Another problem is that as the value of k grows, shorter strings are automatically considered well-formed. An interval with 6 open slots cannot be instantiated in a string that only consists of 5 symbols. One could allow strings to be padded out by additional edge markers to enforce the required minimal length. But this means that the list of 6-grams also needs to be extended to handle cases where some open slots contain word edges. At this point, inspecting the grammar for correctness is no longer humanly possible.
In the other direction, the interval may still be too large. For instance, coronals cannot go into the first or second filler, leaving only the first open slot as an option for a coronal somewhere to the left of /n/. If a string contains two coronals, neither one of which is adjacent to /n/, the interval cannot be instantiated at all. In this case this is unproblematic since coronals would block nati anyways, so either way the string is deemed well-formed. But the situation is reversed with coronals after /n/, which undo blocking of nati by a retroflex. If a string contains two coronals between /n/ and such a retroflex, the interval won't be instantiated and the string will incorrectly be treated as well-formed. Again one could fix this by adding more open slots and modifying the list of forbidden k-grams. But the resulting grammar would be utterly unintelligible.
For all these reasons, IBSP does not provide an insightful or elegant perspective of nati, in particular compared to GM's IO-TSL treatment. Nonetheless it is a useful observation that nati can be given an IBSP description. The discrepancy we find between IBSP and IO-TSL touches on a larger issue for subregular phonology: to what extent should succinctness and elegance of description be a criterion in the classification of empirical phenomena? If formalism X strictly speaking generates the right string pattern, but a more powerful class provides a more natural perspective than X, which one of the two is closer to cognitive reality?
Conclusion
I have argued that a process as complex as nati, which can be viewed as an interaction between local and non-local dependencies with intervening material that provides blocking effects, can be modeled in IBSP. Since IBSP enjoys independent empirical support, this result makes nati look like less of an outlier in the phonological landscape. However, the proposed grammar is fairly complicated and lacks linguistic naturalness. Future work could revisit these findings along two dimensions. On a formal level, it might be possible to extend IBSP grammars with mechanisms that allow for more succinct descriptions without increasing generative capacity. From a linguistic perspective, one might try to reassess the empirical status of nati with respect to which of its components are most natural under an IBSP-analysis. If these aspects turn out to be on empirically shaky ground, this might provide indirect evidence for IBSP as a model of natural language phonotactics. | 2020-01-12T04:25:58.847Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "41b92b3a941344b14fdb9deab818fe97c28b6bc8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "e5265ea1a52c581046dc5750a37c24a34f7884e0",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
83907398 | pes2o/s2orc | v3-fos-license | Influence of environmental variables and anthropogenic perturbations on stream fish assemblages, Upper Paraná River, Central Brazil
The Ouvidor River, a tributary of the Upper Paraná River, drains areas covered by cerrado vegetation in Central Brazil. We collected data for environmental variables (water temperature, dissolved oxygen, pH, conductivity, turbidity, water velocity, luminosity, channel substrate and width) and anthropogenic perturbations (industry, reservoirs, urban areas) that may structure the fish assemblage in ten stream sites of the Ouvidor River basin. In each stream we delimited one 50 m long site where fish were captured by electrofishing and abiotic data were collected every two months between August 2004 and June 2005. Co-inertia analysis indicated that pH, water velocity, channel width and water temperature most strongly structured the fish assemblages. The interactions of water velocity and channel width with the fish assemblage were not directly affected by wet and dry seasons but the opposite was true for pH and water temperature.
Introduction
The structural and functional characteristics of aquatic communities respond to environmental oscillations that differ in spatial and temporal scales (Matthews, 1998).Four physical habitat characteristics are widely recognized as directly important for fish species distribution and abundance in streams: water depth (Angermeier & Karr, 1994;Penczak et al., 1994), current velocity (Mendonça et al., 2005), composition of the channel substrate (Cunico et al., 2006) and riparian vegetation cover (Ferreira & Casatti, 2006a;Mérigoux et al., 1998;Penczak et al., 1994).Metrics representing these characteristics aid analyses of habitat modification on fish assemblages (Tejerina-Garro et al., 2005).However, fish assemblages are also influenced by other characteristics of the aquatic environment such as the historic/biogeographic conditions, water temperature, flow regime, predation, competition, and diseases (Poff et al., 1997;Jackson et al., 2001).
In the Neotropical region, interest in stream fish ecology is relatively recent (Oliveira & Bennemann, 2005), including the influence of environmental variables on fish assemblage structure (Fialho et al., 2008).Consequently, there have been few studies concerning this subject in the upper Paraná River basin.Abes & Agostinho (2001) reported that the variable set of channel width, depth, water and air temperature, dissolved oxygen, conductivity, pH and substrate influenced fish richness and assemblage composition.Penczak et al. (1994) found that pH, conductivity, depth, channel width, and pres-ence of macrophytes structured fish assemblages.Conductivity, water temperature, pH, dissolved oxygen (Braga & Andrade, 2005) and pH, water temperature, conductivity, chemical dissolved oxygen, and turbidity (Fialho et al., 2008) were also related to differences in fish assemblage structure.However, these relationships were also affected by anthropogenic impacts related to different land uses (Penczak et al., 1994) such as domestic sewage, agriculture, ranching, and urbanization (Fialho et al., 2008).Despite their similarities in stream hydromorphology, neotropical streams are historically and geomorphologically different.Also, any environmental modification, natural or anthropogenic, can influence local fish zoogeography and modify the fish assemblage composition to some degree via local species extirpations (Gorman & Karr, 1978;Tonn, 1990) and introductions (Lomnicky et al., 2007).The lack of knowledge about fish faunas and environmental change in neotropical regions like those of the Central Brazil are of concern to ichthyologists and ecologists because regional biodiversity is unknown and some species appear to be disappearing of some streams even before it is possible to establish their spatial distribution (Tejerina-Garro, 2008) The aim of this article is to identify which environmental variables (water temperature, dissolved oxygen, pH, conductivity, turbidity, water velocity, luminosity, channel substrate and width) and anthropogenic perturbations (industry, reservoirs, urban area) most structure the fish assemblages in streams sites of the Upper Paraná River, Central Brazil.
Material and Methods
Study area.The Ouvidor River basin is located in southern Goiás State, Central Brazil, and is a tributary of the Upper Paraná River, the second largest drainage basin of South America (Lowe-McConnell, 1999).The climate is semi-arid with marked wet and dry seasons, and air temperature varying between 16.9°C and 37.2°C.The predominant soil in the basin is red latosol (IBGE, 2005).The sampled stream sites (Table 1) have narrow (< 8 m) and shallow (< 1 m) channels.Their substrate is formed predominately by sand and gravels (Buraco, Olhos d'água, Posse dos Rodrigues, Riacho streams), bedrock (Lagoa, Ouvidor and Taquara II), or mud (Taquara I).The sites were bordered by riparian vegetation typical of the cerrado biome (Ribeiro, 1998), pasture (Ouvidor, Posse dos Rodrigues, Riacho, Olhos d'água streams) or industrial areas (Taquara II and Santo Antônio).Some channels were fragmented by reservoirs (Taquara I, Buraco, Sapê streams), received domestic sewage (Santo Antônio) or drained urban areas (Lagoa).
Sampling protocols.Fish, environmental variables and anthropogenic perturbations data were collected from one site in each of nine streams and one upper reach of the Ouvidor River main channel (Fig. 1, Table 1) every two months between August 2004 and June 2005 to assess changes in the fish assemblages associated with seasonality (Welcomme, 1979).In each stream, a 50 m long site was delimited based on easy access conditions, its geographic coordinates (Garmin 12) were obtained, and six transects every ten meters were marked.
Fish were sampled by electrofishing, which is efficient for collecting small fish species (Severi et al., 1995) in lotic environments (Mazzoni et al., 2000). The electrofishing equipment was powered by a portable generator (HONDA, 1800 W, 220 V) connected to a DC transformer and then to two electrified net rings (anode and cathode). Output voltage varied from 100 to 600 V. Each reach was fished three times from downstream to upstream by three people following the protocol suggested by Esteves & Lobón-Cerviá (2001). Collected fish were fixed in 10% formalin and identified to species or genus in the laboratory. Voucher specimens of each species were deposited in the Museu de Ciências e Tecnologia, Pontifícia Universidade Católica do Rio Grande do Sul, Brazil.
Channel width, substrate, water velocity and luminosity (total luminous flux incident on the channel surface) and the anthropogenic perturbations were measured at each transect. Water temperature and dissolved oxygen were measured at the first, third and sixth transects, and pH, conductivity and turbidity were measured at the center of each reach (Table 2). Data analysis. The data matrices consisted of presence/absence values, which treat dominant and rare species equally (McCune & Grace, 2002), qualitative variable values (channel substrate and anthropogenic perturbations), and averaged quantitative variable values for each reach. All data were grouped by season (wet and dry). The fish, qualitative and quantitative data matrices were submitted separately to Principal Component Analysis (PCA). In the case of the quantitative data, the PCA was performed using the correlation method, recommended when the data collected were measured in different units, whereas covariance was used for the fish and qualitative data because they were measured in the same unit (Dolédec & Chessel, 1991). The results of each PCA were then submitted to a co-inertia analysis (COI) separately (fish vs. quantitative variables; fish vs. qualitative variables). We used the multivariate ordination of COI because it is suited to small numbers of samples (10 sites; Dolédec & Chessel, 1994) and aids identification of fish assemblage patterns resulting from the influence of the variables considered ( , 2002). The co-structure between fish assemblages and variables resulting from the co-inertia analysis was tested using a Monte Carlo test (1000 iterations). The collinearity between quantitative variables resulting from the PCA was tested using a Pearson correlation test (Zar, 1998).
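As an illustration of the core of this procedure, the Python sketch below (a simplified stand-in, not the software actually used for the analyses reported here) computes the total co-inertia between two site-by-variable tables as the squared cross-covariance between them and tests it with a Monte Carlo permutation of sites; the array contents are random placeholders with the same shapes as the field data (10 sites, 35 species, 9 quantitative variables).

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real matrices: presence/absence of 35 species and
# 9 quantitative environmental variables at 10 sites.
fish = rng.integers(0, 2, size=(10, 35)).astype(float)
env = rng.normal(size=(10, 9))

# Centre the species table (covariance-based PCA) and standardise the
# environmental table (correlation-based PCA), mirroring the text above.
fish_c = fish - fish.mean(axis=0)
env_z = (env - env.mean(axis=0)) / env.std(axis=0)

def total_coinertia(x, y):
    # Squared Frobenius norm of the cross-covariance matrix of the two tables.
    cross = x.T @ y / x.shape[0]
    return np.sum(cross ** 2)

observed = total_coinertia(fish_c, env_z)

# Monte Carlo test (1000 iterations): permute the rows (sites) of one table.
perms = np.array([
    total_coinertia(fish_c[rng.permutation(fish_c.shape[0])], env_z)
    for _ in range(1000)
])
p_value = (np.sum(perms >= observed) + 1) / (len(perms) + 1)
print(f"total co-inertia = {observed:.3f}, Monte Carlo p = {p_value:.3f}")

# The co-inertia axes are the singular vectors of the cross-covariance matrix.
u, s, vt = np.linalg.svd(fish_c.T @ env_z / fish_c.shape[0])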
Results
We collected 4049 specimens and 35 fish species (Table 3). Fish abundance was high at the Buraco site during the dry (601 specimens) and wet (641) seasons. Conversely, low abundance was observed at the Lagoa (76, dry season) and Taquara I (11, wet season) sites.
The COI analysis indicated that the co-structure between the fish matrix and the quantitative variables was significant (p = 0.000; characin P. argentea and the catfish C. cf.iheringi, Hisonotus sp., Hypostomus sp. and H. margaritifer) but not with the qualitative variables (p = 0.084; Table 4).In the first case, two axes explained 70.74% of the total inertia and the correlation (r) between the fish assemblage and the quantitative variables was significant (axis 1 = 0.86; axis 2 = 0.90; Table 4).On axis 1, the ordination of fish assemblages and stream reaches was related to water velocity and channel width in the dry and wet seasons (Fig. 2).The fish assemblages represented by these species were associated with reaches with the higher water velocities and wider channels observed in the Ouvidor River and Riacho stream (right sides of Figs.2a-b; Table 5).By contrast, fish species such as the characin H. aff.malabaricus and the cichlid T. rendalli were related to stream reaches with low water velocities and narrow channel widths as observed in the Buraco and Sapê streams during dry season (Fig. 2; Table 5).
On axis 2, fish assemblages and sites were discriminated by pH and water temperature seasonally (Fig. 2; Table 4).In the dry season, the characins Bryconamericus sp.2, P. myersi, A. eigenmanniorum, A. piracicabae, Bryconamericus sp.1 and the catfish H. nigricans were associated with sites, except Olhos d'água and Lagoa, where the water was acidic and the temperature low (superior side of Figs.2a-b; Table 5).
In the wet season H. margaritifer and P. nasus were associated with sites characterized by elevated water temperature and alkaline pH such as displayed by the streams Posse dos Rodrigues, Olhos d'água and Lagoa (Fig. 2; Table 5).
Discussion
One goal of ecosystem ecology is to establish how assemblages of organisms, in this case fish, are related to environmental oscillations (Braga, 2004). Such relationships are characterized by the simultaneous influence of multiple environmental variables on assemblages (Bini, 2004). This was also the case in this study, where four variables (water velocity, channel width, water temperature and pH) structured fish assemblages to the greatest degree among the variables measured.
In our study, water velocity and the channel width were little affected by seasonality as normally occurs in natural conditions (Poff et al., 1997;Tejerina-Garro & Mérona, 2001).This situation is often related to modification of stream hydrology by reservoirs, but reservoirs did not significantly influence many of our sites and fish assemblage ordination results.Reservoirs are known to regulate water velocity and channel width downstream (Mérona et al., 2005), increase water residence time (Thomaz et al., 1997), increase downstream substrate size (Oliveira & Lacerda, 2004), and consequently alter biotic communities (Agostinho et al., 1992).This likely only occurred in sites with low water velocities and narrow channel such as Taquara I, Buraco and Sapé, each of which were located upstream of a reservoir.
In this study the relationship of catfish (C.cf.iheringi, Hisonotus sp., Hypostomus sp. and H. margaritifer) and characin (P.argentea) to sites with elevated water velocities and wide channels can be explained in part by the interaction of environmental filters and the functional characteristics of these species (Poff, 1997).Some of the sites sampled had rocky substrates and pools; both of which provide habitat for algae (Esteves, 1988;Bennemann et al., 2005) and subsequently herbivorous catfish species such as Hisonotus and Hypostomus (Casatti, 2002;Fialho & Tejerina-Garro, 2004;Hahn et al., 1998;Melo et al., 2005;Santos et al., 2004).Also, these catfish display body shapes and morphological adaptations of the pectoral fins that aid them in maintaining position in high velocity areas of streams (Casatti, 2002;Melo et al., 2005).In addition, the catfish C. cf.iheringi is reported to be invertivorous (Casatti & Castro, 1998;Froese & Pauly, 2008;Oliveira et al., 1997), feeding predominantly on aquatic insect larvae such as those of the family Leptophlebiidae, Hydropsychidae, Hydroptilidae (Casatti & Castro, 1998), which are present in streams of the Upper Paraná River (Oliveira et al., 1997).In the case of the characin P. argentea, its fusiform body, terminal mouth position (Fialho & Tejerina-Garro, 2004), small size and opportunistic feeding behavior facilitate its exploitation of different lotic habitats along the margins of water courses (Ferreira & Casatti, 2006b;Ferreira et al., 2002;Gomiero & Braga, 2005).
The dry and wet seasons (IBGE, 2005) accentuate physicochemical water characteristics, particularly pH and water temperature (Esteves, 1988;Gordon et al., 1995;Melo et al., 2003).During dry seasons the fish assemblages represented by the characins A. eigenmanniorum, Bryconamericus sp.1, Bryconamericus sp.2, P. myersi, A. piracicabae and the catfish H. nigricans are associated with sites having acidic water (average pH = 6.12) and relatively low water temperatures (average = 22.31 °C), whereas in wet seasons the catfish H. margaritifer and the characin P. nasus are linked with sites having relatively basic water (average = 8.08) and higher water temperatures (average = 24.37 °C).
The influence of pH on fish assemblage structure was also observed by Abes & Agostinho (2001), Braga & Andrade (2005), Fialho et al. (2008) and Penczak et al. (1994) in streams of the Upper Paraná River. In our study the pH influence seemed to be related mostly to regional soil characteristics and land uses. The Ouvidor River basin drains Cerrado regions characterized by naturally acidic soils (Ratter et al., 1997), where the main economic activity is farming and cattle ranching (Nepstad et al., 1997). These activities require the addition of calcium before the rainy season to reduce soil acidity sufficiently to enable profitable agricultural activities (Ratter et al., 1997). This fertilization introduces Mg and Ca ions into the water courses during the wet seasons and results in a more basic pH of the water (Carvalho et al., 2000). However, it is not possible to determine from our study whether the preference of the fish species for basic or acidic water is related to the influence of pH on reproduction (Dei Tos et al., 2002) or growth and development (Esteves, 1988; Ferreira et al., 2001). Esteves (1988) stated that one ecological consequence of the specific heat of water is the relatively high thermal stability of aquatic ecosystems, and Wetzel (1993) suggested that thermal radiation into and from water reservoirs is predominantly a superficial phenomenon, that is, restricted to the top centimeters of the water column. In this way, water courses with less water volume (e.g., streams) tend to gain and lose heat more rapidly than ones with more volume (e.g., rivers). This seemed to be the situation in our study when comparing the Ouvidor River with the Buraco and Sapê streams during the wet season, and all stream sites in both seasons. However, streams with riparian vegetation cover have lower water temperatures than uncovered ones (Ferreira & Casatti, 2006b), which seems to be the case of the Olhos d'água and Posse dos Rodrigues streams, where H. margaritifer and P. nasus are predominant. Generally, the influence of water temperature on fish is related to their metabolism (Silva & Araújo-Lima, 2003), which was not measured in our study. In sites lacking vegetation cover, the availability of light in the water column increases (Tejerina-Garro & Mérona, 2001). The increased light and additional nutrients from wet-season runoff favored increased periphyton production, which is consumed by H. margaritifer (Casatti, 2002; Hahn et al., 1998; Melo et al., 2005; Santos et al., 2004) and P. nasus (Fialho & Tejerina-Garro, 2004).
In conclusion, we found that fish assemblages of the Ouvidor basin stream sites were structured by water velocity, channel width, pH and water temperature. Although we did not detect direct effects of anthropogenic perturbations on fish assemblages in this study, this does not mean that they were absent. Our results aid predictions of the responses of Ouvidor fish assemblages to environmental modifications, including those that take place at large temporal and spatial scales, such as air temperature.
Additional studies are necessary to verify if the fish-habitat relationships observed for our Ouvidor stream sites prevail at the larger spatial scale of the Paranaíba River basin in Goiás, where streams were modified without prior knowledge of the structure and composition of the aquatic assemblages, including fish.
Fig. 2. Ordination of the co-structure between (a) fish assemblages (arrows) and stream sites (squares) and (b) fish species and environmental variables resulting from the co-inertia analysis. Only the species that contribute most to each axis are displayed. Black and white squares represent stream sites sampled in the wet and dry seasons, respectively. Codes correspond to names listed in Tables 1 and 2. Small boxes indicate the graphic scale.
Fig. 1. Locations of the sampled sites (dots) in the streams of the Ouvidor River, Goiás State, Brazil. Squares indicate the main cities.
Table 1. Stream sites sampled in the Ouvidor River basin and their geographic coordinates.
Table 2. Qualitative and quantitative variables measured in the Ouvidor River and its tributaries.
Table 3.
Table 4. Fish species, quantitative and qualitative variables contribution (%) to axes and statistics of the co-inertia analysis. Boldface values indicate major contributions.
Table 5. Average values of the quantitative variables by stream and season. DS = dry season; WS = wet season.
"year": 2009,
"sha1": "711eec2c30ba82b5ec580747b52413ac5e46a39a",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/ni/a/jBdM68h6FzcDkhspc7WX9Xs/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "711eec2c30ba82b5ec580747b52413ac5e46a39a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
216326321 | pes2o/s2orc | v3-fos-license | THE LOGISTICAL COMPONENT OF MANAGERIAL SYSTEMS AT AGRICULTURAL ENTERPRISES IN GENERAL TERMS
Figure 1a shows that the company's value depends on the amount of intangible assets. Thus, as the intangible component increases, the value of the company increases. But this situation is observed until the first moment (extremum point), after which the value of the company decreases. Next we analyze the impact on the company value of changes in the following factors: average industry profitability (Fig. 1b), equity value (Fig. 1c) and company profitability (Fig. 1d). As the average industry returns, the extremum point does not change, but the amount of intangible assets has less impact. As the value of equity increases, the value of the company changes otherwise there is some optimal point of the capital's value at which the company's value reaches its maximum value. As the profitability of the company increases, the extremum point does not change, but the amount of intangible assets has a significant effect only for a certain period of time. But after reaching the extremum, the company's value drops sharply. Conclusions. For today, companies operate under the conditions of rapid development on information technologies, competition, and the growth of the role of intellectual capital. Therefore, there is a shift in the priority of strategic directions of development from material to intangible components. An increase in intangible assets in the overall structure leads to an increase in the value of the company.
At present, the deterioration of agricultural economic agents' operating conditions in Ukraine is determined by a continuous socio-economic crisis at the world level. The country's limited mineral resources form a peculiar limitedness of resource potential, which means that the level of economic development directly depends on the efficiency of the competition-based areas of the economy. For this reason, the Ukrainian economy is likely to be quite sensitive to the external dynamics of economic instability and is not able to adapt quickly to modern constantly evolving conditions. As a result, certain industries and sectors of economic activity are in a loss-making position, and some are even subject to stagnation.
One of the areas most affected by economic instability is agriculture. At the same time, it is considered one of the most substantial sectors and currently forms the core of the state's gross domestic product. Thus, agriculture both performs socially important functions, providing the population with quality food products, and, to the largest extent, shapes the export component of the country, selling commodities to European countries and all over the world. Based on this, an urgent issue at the moment is the search for new ways to improve the economic efficiency of agricultural production. Logistics is one of the most modern and actively developing tools for optimizing the economic activity of entities in free-market countries.
The concept of logistics can be considered in two ways, that is, two levels can be distinguished: 1) logistics activities are present to a considerable extent in the management and organizational systems of business entities' microeconomic systems; 2) logistical principles and approaches are deliberately applied in organizations' activities and management (i.e. there is a purposeful transition to logistics in the organizational and managerial system of an enterprise).
In the field of agricultural production, the implementation of logistical operations occurs in almost all elements (subsystems) of an enterprise, but the purposeful use of logistical principles in the production of crop and livestock products is rare. Consequently, the logistical component of organizational and managerial systems at the discussed business entities is implicit and is identified only by the presence of logistics operations that occur in production.
The need to develop and improve the logistical component of the organizational and managerial systems of agricultural producers is determined by causal factors of a general economic and sectoral nature. The following should be considered general economic factors: 1) the experience of applying the logistical approach in successful companies shows that a 1% reduction in logistics costs is equivalent to an almost 10% increase in sales volume; 2) active development and implementation of research results in the field of logistics cost saving make a bigger difference in developing countries; 3) efficient use of the potential (including logistics) for economic growth of the agricultural industry in a state affects its agricultural organizations; 4) at the moment, logistics is becoming an effective development tool at the national level, influencing all the industries related to agriculture.
Industry factors include the following: 1) almost all functional areas of logistics are involved in the production and turnover of commodities and food; 2) agricultural economic agents represent the key element of complex integration at the macro level; 3) manufacturing activities are more complex and peculiar than in other areas of production (dependence on the availability of natural resources, weather conditions, labor, and power capacity); 4) logistics costs in agricultural production amount to 16-17%.
Thus, the logistical component of an agricultural producer's organizational and managerial system exists in its own right and manifests itself at the two levels described above, either as a formal logistical system or as a purposeful process of exploiting logistics in a business entity's organizational and managerial system.
"year": 2020,
"sha1": "0cbb241548f02e906e39a2389963c2c5af62368c",
"oa_license": "CCBY",
"oa_url": "https://ojs.ukrlogos.in.ua/index.php/logos/article/download/1664/1513",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "700ff5ef04425b1ffa1fdcb8d9af43c5a693668c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
8236475 | pes2o/s2orc | v3-fos-license | Disentangling self- and fairness-related neural mechanisms involved in the ultimatum game: an fMRI study
Rejections of unfair offers in the ultimatum game (UG) are commonly assumed to reflect negative emotional arousal mediated by the anterior insula and medial prefrontal cortex. We aimed to disentangle those neural mechanisms associated with direct personal involvement ('I have been treated unfairly') from those associated with fairness considerations, such as the wish to discourage unfair behavior or social norm violations ('this person has been treated unfairly'). For this purpose, we used fMRI and asked participants to play the UG as responders either for themselves (myself) or on behalf of another person (third party). Unfair offers were equally often rejected in both conditions. Neuroimaging data revealed a dissociation between the medial prefrontal cortex, specifically associated with rejections in the myself condition, thus confirming its role in self-related emotional responses, and the left anterior insula, associated with rejections in both myself and third-party conditions, suggesting a role in promoting fair behavior also toward third parties. Our data extend the current understanding of the neural substrate of social decision making, by disentangling the structures sensitive to direct emotional involvement of the self from those implicated in pure fairness considerations.
INTRODUCTION
In the last decades, studies in the field of economics reported systematic violations of classical economic theories' predictions, which see maximization of one's monetary gain as the driving principle of decision making (Von Neumann and Morgenstern, 1947). One example is the ultimatum game (UG) in which one player (the proposer) makes an offer to a second player (the responder) on how to divide an amount of money; the responder can either accept (i.e. the money is divided as suggested) or reject (i.e. both players get no money) the offer. Classical economic theory posits that the responder should accept every offer ('few is better than nothing'), and that the proposer, consequently, should offer the smallest amount of money possible. However, behavioral findings describe the responder likely to reject offers considered unfair and the proposer more prone to divide the money equally. Pillutla and Murnighan (1996) suggested that negative emotions (e.g. anger and frustration) underlie responder's behavior: in particular, the unfair treatment evokes a negative emotional reaction which, in turn, leads to rejections (wounded pride/spite model). Evidence supporting this model arise by van't Wout et al. (2006), who measured skin conductance response (SCR) as an index of emotional arousal (Boucsein, 1992), and found increased SCR when responders were about to reject (as opposed to accept) unfair UG offers. Furthermore, Harlé and Sanfey (2007) affected responders' emotional status prior to the game through the presentation of emotionally salient video clips and found that rejections increased following the presentation of sad (but neither happy nor neutral) movies. Finally, Crockett et al. (2008) reported increased rejections in those participants who, following acute tryptophan depletion, presented low levels of serotonin, a neurotransmitter involved in impulse regulation.
The UG is, for its own definition, a self-centered task, in which the person accepting/rejecting the proposers' division is also the direct target of an unfair treatment. Thus, in all the studies reviewed above, the unfairness correlates with the amount of anger/frustration triggered in the responder, leaving open the issue of whether rejections: (i) are reactions to a self-directed unfair treatment ('I have been treated unfairly') which, consistently with the wounded pride/spite model, evokes increased anger and frustration; or (ii) are driven by pure considerations about fairness ('this person has been treated unfairly'), that is by the integration of those cognitive, emotional and motivational mechanisms which lead to the discouragement of social norm violations (Moll et al., 2008). Civai et al. (2010) recently attempted to disentangle self-and fairness-related effects by asking participants to play as responders in a modified version of the UG in which the unfair bargaining was directed not to them personally (as in the classical UG), but to an unknown person. Since in this 'third-party' UG, the responder was not the victim of an unfair treatment, the effect of anger/frustration in the choice was hypothesized to be diminished. Still, the offers in the third-party UG were as unfair as those in the classical ('myself') UG and the responder could, according to the game's rules, accept/reject them. The analysis of SCR and of emotional ratings confirmed stronger negative emotional arousal in the myself than in the third-party UG, especially during the rejections; however, the amount of rejections was significantly modulated by the unfairness of the offer and not by the target of the offer. The data from Civai et al. (2010) suggest that rejections are predominantly driven by fairness sensitivity and that the strong negative emotional reaction seems to be elicited exclusively by the self-directed unfairness. It is still unclear how self-and fairness-related effects in UG relate to the brain. Investigations on the classical UG implicate the anterior portion of the insular (AI) and cingulate (ACC) cortex and the dorsolateral (DLPFC) and medial (MPFC) aspects of the prefrontal cortex (Sanfey et al., 2003;van't Wout et al., 2005;Knoch et al., 2006Knoch et al., , 2008Koenigs and Tranel, 2007;Tabibnia et al., 2008;Moretti et al., 2009;Güroglu et al., 2010Güroglu et al., , 2011Baumgartner et al., 2011). However, the exact role played by this network in the responder's reaction is still under debate. For instance, AI and ACC have been associated with negative emotions such as disgust, anger, fear and pain (Damasio et al., 2000;Calder et al., 2001;Wicker et al., 2003;Corradi-Dell'Acqua et al., 2011), as well as with monitoring one's physiological responses to affective events (SCR andheart beatCritchley et al., 2000, 2004;Patterson et al., 2002). Thus, the involvement of these regions in rejections might be reflective of the anger/frustration elicited by self-directed unfairness (Sanfey et al., 2003). On the other hand, recent accounts suggest that AI and ACC might mediate the integration of emotional, cognitive and motivational processes (Craig, 2009;Singer et al., 2009;Lamm and Singer, 2010) and play a critical role in detecting and reacting to social norm violations (Spitzer et al., 2007;Rilling et al., 2008;King-Casas et al., 2008;Strobel et al., 2011). 
It is therefore plausible that the rejection-related activity in these regions reflects the wish to sanction unfairness irrespective of the person to which it is addressed. As for DLPFC and MPFC, studies testing classical UG concur in interpreting the involvement of these regions in terms of executive control, goal maintenance and the monitoring/control of one's emotional responses (van't Wout et al., 2005;Knoch et al., 2006Knoch et al., , 2008Koenigs and Tranel, 2007;Moretti et al., 2009;Güroglu et al., 2010;Baumgartner et al., 2011). These interpretations leave open the possibility of prefrontal regions monitoring/controlling those emotional responses elicited by self-related unfair treatment (see Koenigs and Tranel, 2007, for MPFC) but also promoting culture-dependent fairness goals in monetary bargaining (see Knoch et al., 2006Knoch et al., , 2008Baumgartner et al., 2011, for DLPFC).
We used fMRI and engaged healthy participants in the paradigm described by Civai et al. (2010). Subjects performed either the UG or a control task [Free-Win (FW)], in which they accepted/rejected money provided by the computer. Both UG and FW tasks comprehended offers addressed to either oneself or a third party. FW shares many properties with the UG (e.g. self/otherreflection, receipt of monetary value, etc.), except the fact that the money received is the result of an unfair treatment. Furthermore, within UG, we distinguished between trials which were accepted/rejected by the participants (participants seldom reject FW offers; see behavioral results and Civai et al., 2010). This constitutes a 3 Â 2 design with TASK (UG rejections, UG acceptances and FW) and TARGET (myself and third party) as factors and six conditions: URm, rejected trials when playing UG for oneself; UAm, accepted trials when playing UG for oneself; FWm, FW task addressing oneself and, respectively, URt, UAt, FWt, third-party versions of UG/FW. Of crucial interest are the functional properties of regions previously associated with the classical UG (e.g. AI, ACC, MPFC and DLPFC). If these regions code those negative emotional reactions due to a direct exposure to an unfair treatment, they should be significantly associated with the TASK*TARGET interaction, as increases of neural activity for UG (relative to FW) should be observed for the myself but not for the third party. Alternatively, if the neural activity of these regions relates exclusively to fairness, then their involvement in the UG should not be specific for the myself, but should be observed also for the third party.
MATERIAL AND METHODS Participants
Twenty-three (nine females, age: 18-35 years, average = 23.5) subjects took part in the experiment. None of the participants had any history of neurological or psychiatric illness. Written informed consent was obtained from all subjects, who were naïve to the purpose of the experiment. The study was approved by the local ethics committee.
Task and stimuli
Task, stimuli and experimental set-up were similar to the ones employed in Civai et al. (2010). Participants underwent one session of 30 min. The experimental instructions (see Supplementary Data for an English-translated instruction sheet) can be subsumed as follows: another participant (i.e. the proposer) was given a 10E note at each trial, and he/she had to split this money with him (responder). In the myself condition, if participants accepted the offer, the money would be divided as suggested by the proposer whereas if they rejected the offer, none of the players would get any of the money. In the third-party condition, if participants accepted the offer, the money would be divided (as suggested by the proposer) between the individuals acting as proposer and responder in the next experimental session; if they rejected the offer, these individuals would get no money at all; in either case, neither the proposer nor the participant would get any money related to this trial ( Figure 1A).
Although participants were told that they were interacting with a human proposer, they were presented with offers defined a priori by the experimenter. These could be 1, 2, 3, 4 or 5E out of 10 (in '1E out of 10,' the responder is offered only 10% of the money at stake). UG trials were intermingled by trials of a control [Free-Win (FW)] task, in which they were offered the same amount of money as in the UG (1, 2, 3, 4 or 5E); however, this was not a partition between two players. In the myself condition, participants could accept the FW offer and keep the money or reject it and get no money. In the third-party condition, participants could accept the FW offer and the individual acting as responder in the upcoming experimental session would receive the money; if participants rejected the offer, the next responder received no money. In either case, participants received no money related to this trial ( Figure 1A).
In order to strengthen the participant's belief that they were facing a human fellow, they were introduced prior to the experimental session to a collaborator of the experimenter who pretended to act as the proposer. Furthermore, participants were told that the proposer would receive feedback only at the end of the experiment (i.e. 'covered' UG, which prevents strategic use of rejectionsZamir, 2001; Civai et al., 2010). Participants were informed that their compensation for participating in the experiment would be proportional to the amount of money gained during the myself condition. Moreover, they knew that a proportion of the money gained on behalf of third parties would be given to the next players; they were also informed that, following the same principle, their starting stakes were proportions of the money that previous players had split on their behalf. Irrespective of task performance, participants received the same amount of money as compensation after completion of the experiment. Finally, after the whole experimental session an informal debriefing was carried out to assess whether participants believed whether offers were genuinely human. None of the participants exhibited doubts regarding the cover story.
Experimental set-up
Participants lay supine in the MR scanner with their head fixated by firm foam pads. Stimuli were presented using Presentation 11.0 (Neurobehavioral Systems) and projected to a VisuaStim Goggles system (Resonance Technology). Behavioral responses were recorded by pressing the corresponding keys of an MRI-compatible response device (Lumitouch, Lightwave Medical Industries, CST Coldswitch Technologies).
For each experimental trial, participants were first presented with the offer ('I offer you/the next participant 2E out of 10') for 4500 ms, Self vs fairness mechanisms in the ultimatum game SCAN (2013) followed by a blank screen ranging from 4750 ms to 6750 ms with an incremental step of 500 ms. The question 'Do you accept?' was then presented for 2000 ms, by which time the participant had to give a response by button press. Trials were followed by an inter-trial interval ranging from 4750 ms to 6750 ms with an incremental step of 500 ms ( Figure 1B). Each experimental session comprised 105 randomized trials, including 100 experimental trials [2 (ultimatum game, free win) * 2 (myself, third party) * 5 (1, 2, 3, 4, 5E) * 5 repetitions] and 5 'null events' in which an empty screen replaced the stimuli.
fMRI data acquisition
A Siemens Trio 3T whole-body scanner was used to acquire both T1-weighted anatomical images and gradient-echo planar T2*-weighted MRI images with blood oxygenation level-dependent (BOLD) contrast. The scanning sequence was a trajectory-based reconstruction sequence with a repetition time (TR) of 2200 ms, an echo time (TE) of 30 ms, a flip angle of 90°, a slice thickness of 3 mm and no gap between slices. For each subject, 878 volumes were acquired during the whole experimental session.
Imaging processing
Image processing and statistical analysis were performed using the SPM8 software package (http://www.fil.ion.ucl.ac.uk/spm/). For each subject, the first six volumes were discarded. To correct for head motion, the functional images were then realigned to the new first functional image (Ashburner and Friston, 2004), normalized to a template based on 152 brains from the Montreal Neurological Institute (MNI) at a 2 × 2 × 2 mm voxel size, and then smoothed by convolution with an 8-mm full width at half maximum Gaussian kernel. Data were then fed into a first-level analysis using the general linear model framework (Kiebel and Holmes, 2004) implemented in SPM8. On the first level, for each individual subject, we fitted a linear regression model to the data. For the UG only, we distinguished between rejected and accepted offers. This yielded a 3 × 2 factorial design with six conditions. For each of these conditions, we modeled independently the onset of the offer and the onset of the text string prompting a button press (Figure 1B) through a stick function. For each of the resulting 12 vectors, we accounted for putative linear changes of neural activity across all repetitions by using the time modulation option implemented in SPM, which creates a new regressor in which the trial order is modulated parametrically. Furthermore, regressors testing the parametric modulation of the factor GAIN were included: distinct regressors were modeled for the two onsets within the trial structure (offer, response; see Figure 1B), the two levels of target (myself, third party) and for task (UG and FW), but not for different responses within UG trials. This yielded 32 vectors [12 stick functions + 12 time modulation vectors + 8 gain modulation vectors], each of which was convolved with a canonical hemodynamic response function and associated with a vector describing its first-order time derivative. Finally, we included six differential realignment parameters as regressors. Low-frequency signal drifts were filtered using a cutoff period of 128 s. Critically, response regressors (e.g. URm) correlate strongly with regressors testing for response-independent effects of GAIN (see behavioral results). By modeling both of them, we ruled out potential confounding effects of the correlated regressor and ensured that our results (if any) could be uniquely interpreted (Andrade et al., 1999).
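As a generic illustration of how a single regressor of this kind can be built (this is a numpy/scipy sketch, not the SPM8 pipeline used in the study), a stick function at the event onsets can be convolved with a canonical double-gamma hemodynamic response function and down-sampled to the scan grid. The onsets and HRF parameters below are placeholders; only the TR and the number of analyzed volumes follow the text.

import numpy as np
from scipy.stats import gamma

TR = 2.2                    # repetition time in seconds
n_scans = 872               # 878 acquired volumes minus the 6 discarded ones
dt = 0.1                    # fine temporal grid for the convolution (s)

def canonical_hrf(t):
    # Double-gamma shape in the spirit of a canonical HRF
    # (early peak plus late undershoot); parameters are illustrative.
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Hypothetical onsets (in seconds) of one condition, e.g. URm offers.
onsets = np.array([12.0, 47.5, 101.2, 160.8])

grid = np.arange(0, n_scans * TR, dt)
sticks = np.zeros_like(grid)
sticks[np.searchsorted(grid, onsets)] = 1.0        # event "stick" function

hrf = canonical_hrf(np.arange(0, 32, dt))
predicted = np.convolve(sticks, hrf)[: len(grid)]  # predicted BOLD time course

# One value per scan: a single column of the design matrix.
scan_times = np.arange(n_scans) * TR
column = np.interp(scan_times, grid, predicted)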
On the second level, we focused on those parameter estimates from the first level associated with the six conditions of our 3 × 2 design, exclusively when the offer was presented. These images were then fed into a flexible factorial design with a within-subject factor of six levels using a random effects analysis.
Behavioral results
For each subject and for each condition, the rejection rates were calculated across all five repetitions and used in a 2 TASK (UG, FW) × 2 TARGET (myself, third party) × 5 GAIN (1-5E) repeated measures ANOVA. Results indicated a significant main effect of task [F(1, 22) = 123.89, P < 0.001], with the UG leading to a larger number of rejections than the FW, as well as a main effect of GAIN [F(4,88) = 58.73, P < 0.001], with lower offers being rejected more often than higher offers. These effects were, however, driven by a TASK*GAIN interaction, which was also significant [F(4,88) = 63.44, P < 0.001], suggesting that lower offers were rejected significantly more often than higher offers in the UG but not in the FW (Figure 2). None of the remaining effects of the ANOVA was significant. This statistical analysis was performed using SPSS 11.5 Software (SPSS Inc., Chertsey, UK).
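The analysis above was run in SPSS; purely as an illustration, an equivalent 2 × 2 × 5 repeated-measures model on per-subject rejection rates could be specified in Python with statsmodels as sketched below (the long-format data frame here is simulated, not the study's data).

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# Hypothetical long-format table: one rejection rate per subject and cell
# of the 2 (task) x 2 (target) x 5 (gain) design.
rows = []
for subj in range(23):
    for task in ("UG", "FW"):
        for target in ("myself", "third_party"):
            for gain in (1, 2, 3, 4, 5):
                base = 0.8 - 0.15 * gain if task == "UG" else 0.05
                rows.append({
                    "subject": subj,
                    "task": task,
                    "target": target,
                    "gain": gain,
                    "rejection_rate": float(np.clip(base + rng.normal(0, 0.05), 0, 1)),
                })
data = pd.DataFrame(rows)

model = AnovaRM(data, depvar="rejection_rate", subject="subject",
                within=["task", "target", "gain"])
print(model.fit())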
Main effects
The main effect of task was tested through the contrast testing for regions with increased neural activity for the UG (compared to the FW) irrespective of the responder's choice or to the person to which the bargaining was addressed [i.e. (URm þ UAm þ URt þ UAt)/ 2 À (FWm þ FWt)]. This contrast implicated many regions described playing a critical role in the UG by previous studies, among which bilateral AI, ACC and right DLPFC (Figure 3). Interestingly, the local maxima of these insular, cingulate and prefrontal activations (Table 1) are always < 12 mm distant from the corresponding local maxima reported by Sanfey et al. (2003).
We further explored effects of response within the UG. The analysis of rejections (relative to acceptances), independently of the target of the unfair bargaining [i.e. (URm þ URt) À (UAm þ UAt)], implicated the midbrain and the left AI, over and around the left AI cluster isolated through the main effect of task ( Figure 4A). Critically, the increased activity for rejection tested in the present contrast was not driven by one target condition only (e.g. myself), as the insular response effect was still significant (albeit at an uncorrected threshold) when considering each target separately (t's > 2.60, P's < 0.05). No suprathreshold increase of neural activity was associated with acceptances relative to rejections [i.e. (UAm þ UAt) À (URm þ URt)].
We then tested effects of target and, specifically those increases of neural activity associated with offers addressing oneself (irrespective of whether these were UG or FW) as opposed to offers addressing a third party [i.e. (URm þ UAm þ FWm) À (URt þ UAt þ FWt)]. Such increases were found in the ventral part of the medial prefrontal cortex ( Figure 4B, violet cluster) and in the right inferior frontal gyrus.
Interactions
We first tested for the interaction TASK*TARGET, investigating target-specific increases of neural activity for the UG (relative to FW) task. In particular, we searched for increases of neural activity in the UG which are specific for offers addressing oneself and not the third party [i.e. ((URm + UAm)/2 − FWm) − ((URt + UAt)/2 − FWt)]. The only region surviving correction for multiple comparisons was located in the most anterior portion of the MPFC, around 8 mm above the inter-commissural line. Figure 4C (middle graph) displays the parameter estimates extracted from the region's local maximum, showing an increase of neural activity for myself (relative to third party), which was limited to UR and UA but not to FW. Critically, this effect was stronger for rejections than acceptances, as revealed by this region exhibiting a significant (albeit only at the uncorrected level) effect also for the contrast (URm − UAm) − (URt − UAt) (t = 1.68, P < 0.05). However, the interaction effect isolating MPFC should not be considered a response bias, as it survived significance also when considering only UG acceptances [i.e. (UAm − FWm) − (UAt − FWt), t = 1.69, P < 0.05]. We then tested for regions exhibiting significant BOLD increase for UG (relative to FW), specifically for the third-party condition. We found no suprathreshold effect. Finally, we tested the RESPONSE*TARGET interaction, thereby assessing target-specific increases of neural activity for specific responses. Also in this case, we found no suprathreshold effect.
Fig. 2 Behavioral results. Rejection rates are plotted as a function of gain in both UG (black circles) and FW (white triangles) tasks and myself and third-party conditions.
DISCUSSION
We employed a modified version of the UG (Civai et al., 2010), in which participants played either for themselves (myself) or on behalf of a third party. We found the anterior insula involved in dealing with unfair offers affecting both oneself and others. Instead, the middle anterior portion of the MPFC was recruited exclusively when the unfair offers were related to oneself. Finally, ACC and the right DLPFC were found, at least in our data set, only broadly involved in the bargaining process (main effect: UG > FW), as their activity was not modulated by the target of the offer or by the participant's choice (but see Supplementary Data for significant uncorrected difference in right DLPFC activity between one's and third-party's rejections). Our data converge with, but also extend, previous studies: we not only mapped the neural mechanisms underlying people's reaction to unfairness, but we also disentangled those processes reflecting judgments related to unfair behavior per se (fairness), from those related to the emotional consequences of being the victim of the unfair behavior (self-effect).
Fairness-related neural networks
Table 1 (excerpt). TASK*TARGET interaction: ultimatum game > free win, specifically for myself ((URm + UAm)/2 − FWm) − ((URt + UAt)/2 − FWt): Medial prefrontal cortex (middle anterior), M, 0 58 8, 310 y. Note. All clusters survived correction for multiple comparisons at the cluster level (height threshold P < 0.001, uncorrected). Coordinates (in standard MNI space) refer to maximally activated foci. L and R refer to the left hemisphere and right hemisphere, respectively. M refers to medial structures. *P < 0.001; y P < 0.01; z P < 0.05, corrected for multiple comparisons for the whole brain. § P < 0.05, corrected for the small volume.
Fig. 3 Surface renderings of the functional contrasts testing regions exhibiting a larger neural activity when subjects were engaged in UG rather than FW.
Left AI was found active not only when testing effects of UG (as opposed to FW) in both myself and third-party conditions, but also when testing rejections (as opposed to acceptances) of UG offers. This result extends what has been found by previous studies (Sanfey et al., 2003; Tabibnia et al., 2008; Güroglu et al., 2010, 2011), by describing left AI activity involved in reacting not only to a self-directed mistreatment, but also to the same mistreatment affecting an unknown other person. Furthermore, our results extend the current understanding about the role played by the insula in UG. Indeed, as previous studies reported this portion of the anterior insula involved in negative experiences, such as disgust, anger, fear, pain or thirst (Damasio et al., 2000; Calder et al., 2001; de Araujo et al., 2003; Wicker et al., 2003; Corradi-Dell'Acqua et al., 2011), it has been argued that this region mediates those negative emotional reactions which, according to the wounded pride/spite model (Pillutla and Murnighan, 1996), favor rejections (Sanfey et al., 2003). Although the wounded pride/spite model has been recently challenged by studies favoring an interpretation of rejections in terms of reinforcement of fairness in the community (Knoch et al., 2006, 2008; Civai et al., 2010; Baumgartner et al., 2011), it still could be argued that being the victim of unfairness triggers a negative emotional reaction and that the AI involvement in UG rejection is its neural signature. Our data speak against this interpretation and suggest instead that the role played by AI in UG is in reacting to unfairness, irrespective of whether the mistreatment affects participants themselves or an unknown person. That activity of AI alone cannot be considered evidence of negative emotional arousal was already established by studies associating
its activity not only with positive affect (Hennenlotter et al., 2005; Jabbi et al., 2007) but also with cognitive processes that are not necessarily emotionally grounded, such as motor control, memory, attention, etc. (see Kurth et al., 2010, for a meta-analysis). Recent accounts suggest that AI integrates information about modality-specific feelings with cognitive processes, individual preferences and contextual information in order to promote behavioral responses (Singer et al., 2009; Lamm and Singer, 2010). In this perspective, this region is an ideal candidate for promoting fairness-related behavior, which emerges from the integration of cognitive, emotional and motivational mechanisms (Moll et al., 2008). Indeed, previous studies engaging participants in dyadic social interactions (but not the UG) have suggested that left AI mediates punishments of unfair behavior: for instance, Rilling et al. (2008) implicated coordinates proximal to ours (<5 mm) in unreciprocated cooperation during the Prisoner's Dilemma, King-Casas et al. (2008) associated left AI (<10 mm) with borderline patients' inability to maintain cooperation in a Trust Game, whereas Strobel et al. (2011) reported activations in the same region (<5 mm) when participants sanctioned unfair offers in the Dictator Game. To the best of our knowledge, this is the first Ultimatum Game study in which insular activity can be interpreted in terms of sanction of the proposer's norm violations (but see Güroglu et al., 2010 for an association of AI with one's own norm violations). Furthermore, in almost all previous studies using these other economic games, participants' gains and losses were directly affected by the game's rules, thus leaving open the possibility that the insular activity they reported reflected concerns about one's own welfare. This is not the case in our study, in which participants' choices in the third-party condition did not affect their own pocket. We therefore believe that our study provides the strongest evidence in favor of AI promoting fairness-related behavior in money bargaining.
Self-specific neural networks
[Figure caption: The parameter estimates associated with representative voxels of the activated areas are displayed together with 95% confidence intervals (for AI, we chose the local maxima obtained when testing rejections > acceptances). Red bars refer to offers addressing oneself, whereas cyan bars refer to offers addressing a third party.]
Studies in the field of economics have implicated ventral portions of the MPFC in assessing the value of potential outcomes (see Amodio and Frith, 2006, for a review): for instance, the activity of this region was associated by Knutson et al. (2005) with the computation of expected monetary value, and by Coricelli and colleagues (Camille et al., 2004; Coricelli et al., 2005; Larquet et al., 2010) with the anticipated regret associated with monetary decisions. A similar interpretation of ventral MPFC activity is provided by neuropsychological studies using the classical UG: in a first experiment, Koenigs and Tranel (2007) described patients with ventral MPFC damage as more prone to reject unfair offers, and interpreted their results as a deficit in emotional
control (thus being more exposed to the emotional effects of an unfair treatment); however, in a subsequent experiment, Moretti et al. (2009) replicated Koenigs' findings, but only when bargaining offers were described as abstract sums to be received later, rather than visible and immediately available banknotes, thus favoring a deficit in the representation of the offer's value (the inability to code which makes the patients less able to foresee the benefits of accepting). This interpretation of ventral MPFC activity is also consistent with our data, which show increased activity in this region whenever participants (but not a third party) are offered money (myself > third party; Figure 4B, violet blobs). Furthermore, part of this region exhibited an activity which increased linearly with the amount of money participants gained in the FW task (see Supplementary Data), thus strengthening the hypothesis of a sensitivity to personal gain rather than mere self-reflective processing. A quite different interpretation has been offered in the literature for the middle anterior portion of the MPFC (over and above the inter-commissural line), one involving the co-occurrence of cognitive, emotional and social processes (Amodio and Frith, 2006). For instance, the middle anterior MPFC responds to emotional events (Dolcos et al., 2004; Ochsner et al., 2004; Peelen et al., 2010) and has a signal which correlates with one's SCR in both gambling tasks and the resting state (Patterson et al., 2002). The middle anterior MPFC has also been implicated in self-reflection (Kelley et al., 2002; Johnson et al., 2002; Zysset et al., 2002), mentalizing (Fletcher et al., 1995; Goel et al., 1995; Saxe and Powell, 2006) and moral judgments (Greene et al., 2001, 2004; Moll et al., 2002). Amodio and Frith (2006) suggested that value-related representations in the ventral MPFC extend more anteriorly (and superiorly) the more complex they become, and that they become integrated with socio-affective processes. Our data converge with this distinction: indeed, whereas in our study the ventral MPFC was most likely activated in relation to one's own (but not a third party's) potential gain, the same interpretation cannot be applied to the middle anterior portion of the MPFC associated with the interaction term (Figure 4B, green cluster). Indeed, this latter region showed no differential activation between the myself and third-party conditions in the FW task, but only during UG acceptances and (more strongly) during rejections. This functional pattern is reminiscent of the one described by Civai et al. (2010), who showed enhanced SCRs for myself (relative to third-party) UG trials, especially for rejections. The increase of one's emotional response in relation to self-directed experimental manipulations converges with recent accounts suggesting that self and affective coding might be instantiated in similar neural networks, as emotional judgments might be considered a self-referential task (Amodio and Frith, 2006) and the self an emotional entity per se (Modinos et al., 2009). In this perspective, the middle anterior MPFC activity observed in our study might reflect those processes involved in the coding and control of the differential emotional response evoked by being oneself the target of unfairness, especially when this unfairness is subsequently sanctioned at one's own cost.
This interpretation of middle anterior MPFC functioning might also account for the results of previous UG studies, such as the modulation of rejection-related activity in this region by the proposer's intention to be unfair (Güroglu et al., 2010, although the authors offer an interpretation in terms of differential mentalizing). This interpretation is also consistent with recent findings describing the MPFC as part of a network involved in overriding self-interest motives during the rejection of unfair UG offers: indeed, the activity of this region was found to be affected by transient inactivation of the right DLPFC (Baumgartner et al., 2011), which, in turn, is detrimental for classical UG rejections (van't Wout et al., 2005; Knoch et al., 2006, 2008; Baumgartner et al., 2011). Interestingly, in our data set as well, the activity of the MPFC and right DLPFC seems coupled (see Supplementary Data), as both regions show stronger activity when rejecting self-directed (relative to other-directed) offers. Based on previous and present results, it is conceivable that the apparent causal role played by this prefrontal network in promoting rejections is limited to cases in which self-interest is relevant, and thus does not extend to the third-party UG. Further studies will address this issue.
CONCLUSIONS
Rejections in the classical UG can either be interpreted as emotional reactions to a self-directed unfair treatment ('I have been treated unfairly') or as pure considerations about fairness ('this person has been treated unfairly') leading to the discouragement of social norm violations. Our data allow this distinction and show that the anterior insula is specifically involved in fairness-related behavior, whereas the MPFC (and right DLPFC) is involved in monitoring those emotional reactions due to being the direct target of the bargaining.
SUPPLEMENTARY DATA
Supplementary Data are available at SCAN online. | 2016-11-05T07:37:12.269Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "7911b0bff46907c4577cc4ce288347589f138a07",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/scan/article-pdf/8/4/424/14119594/nss014.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed54dfd52905695e025d5a931ac39fa627d1d04a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
257252835 | pes2o/s2orc | v3-fos-license | Temporal-spatial deciphering mental subtraction in the human brain
Mental subtraction, involving numerical processing and operation, requires a complex interplay among several brain regions. Diverse studies have utilized scalp electroencephalography, electrocorticography, or functional magnetic resonance imaging to resolve the structural pattern and functional activity during the subtraction operation. However, a high-resolution spatial-temporal understanding of the neural mechanisms involved in mental subtraction is unavailable. Thus, this study obtained intracranial stereoelectroencephalography recordings from 20 patients with pharmacologically resistant epilepsy. Specifically, two delayed match/mismatch-to-sample paradigms of numeric comparison and subtraction-result comparison were used to help reveal the time frame of mental subtraction. The brain sub-regions involved in mental subtraction were chronologically screened using the stereoelectroencephalography recordings. The results indicated that the anterior cortex, containing the frontal, insular, and parahippocampal regions, worked to prepare for mental subtraction; moreover, the posterior cortex, including the parietal, occipital, limbic, and temporal regions, cooperated during subtraction. In particular, the gamma band activities in core regions within the parietal-cingulate-temporal cortices mediated the critical mental subtraction. Overall, this research is the first to describe the spatiotemporal activities underlying mental subtraction in the human brain. It provides a comprehensive insight into the cognitive control activity underlying mental arithmetic. Supplementary Information The online version contains supplementary material available at 10.1007/s11571-023-09937-z.
Introduction
Mental arithmetic is not only a basic daily faculty, but also provides a powerful paradigm for characterizing fundamental cognitive processes (Nieder and Dehaene 2009; Houdé et al. 2010). Prior research has indicated that low arithmetic attainment is attributable to a deficit in general cognitive abilities (Bull et al. 2008). Over the last 30 years, cognitive arithmetic has been the focus of extensive experimental research. However, subtraction has received considerably less attention from researchers than addition or multiplication. In particular, psychological studies focusing on subtraction solving are surprisingly scarce compared with those focusing on addition, probably because the former is often implicitly assumed to be cognitively similar to the latter, its mathematical inverse. Cognitively, adding and subtracting are composed of several different strategies (Fayol and Thevenot 2012). Subtraction problems tend to be solved using more procedural approaches than addition ones (Campbell and Xue 2001). Therefore, in recent times, understanding the mental substrates and neurocognitive mechanisms of subtraction has become an important line of interdisciplinary research.
Mental subtraction contains different cognitive processes, including numerals recognition, numeric comparison, performing mathematical operations, remembering the results, maintaining attention, and other more specialized processes (Kong et al. 2000;Rickard et al. 2000;Pesenti et al. 2001;Pinel et al. 2001Pinel et al. , 2004;;Dehaene et al. 2003;Dimitriadis et al. 2010).Functional imaging studies have identified a consistent network of brain regions that are significant for mental subtraction.As compared to addition, the functional magnetic resonance imaging's (fMRI) findings revealed significantly greater activation during subtraction in regions along the dorsal pathway, including the left inferior frontal gyrus (IFG), the middle portion of the dorsolateral prefrontal cortex, and the supplementary motor area (SMA) (Yang et al. 2017); furthermore, subtraction also had more activation in the postcentral gyrus (postCG), superior temporal gyrus (STG), thalamus (Ni et al. 2011) and precentral gyrus (preCG) (Abd Hamid et al. 2011).Additional data in healthy adults have revealed that the intraparietal sulcus and the posterior superior parietal lobe are more active during subtraction than multiplication (Dehaene et al. 2003); however, the left angular gyri (AG) and supramarginal gyri (SG) were modulated to a greater degree by multiplication than subtraction (Ischebeck et al. 2006).Moreover, different algorithms, such as simple addition, subtraction, and multiplication tasks, have demonstrated common activation in the middle frontal gyrus (MFG), inferior temporal gyrus (MTG) and occipital lobes (Ischebeck et al. 2009;Ni et al. 2011).These imaging data contributed widely to help us understand the cerebral substrates involved in subtraction.However, the dynamic temporal course of mental subtraction with a high temporal resolution in the human brain has been inefficiently documented thus far.
In addition to the fMRI, evoked (phase-locked) and induced (non-phase-locked) activities in the electroencephalography (EEG) pattern have highlighted neurophysiological patterns of mental subtraction in the human brain.The event-related potential (ERP) results were employed to identify several negative components between 200 ms and 400 ms in subtraction that have been referred to as N200, N270, N300, P300 and N400, compared with addition or multiplication (Jasinski and Coch 2012;Taghizadeh et al. 2021;Gao et al. 2022).The event-related desynchronization and synchronization (ERD/ERS) also revealed that subtractions is associated with a lower theta ERS than multiplication in the frontal and parietooccipital cortices (Brunner et al. 2021); additionally, it displayed a higher alpha ERD than addition, with the largest difference in the parietooccipital cortex (De Smedt et al. 2009a).The scalp EEG data offered excellent temporal, however, limited spatial resolution.Therefore, direct evidence regarding the temporospatial cortical activation patterns and mechanisms during mental subtraction remains insufficient.
To explore the neural basis underlying the temporal dynamics among the different brain patterns during the complete mental subtraction operation, intracranial stereoelectroencephalography (SEEG) recordings were used, offering a high temporal resolution and precise spatial localization of the human brain's cognitive activities. Delayed match/mismatch-to-sample (DMS) paradigms were applied to engage the SEEG during early numeric comparison and the following mental subtraction operation. In Task 1, we utilized a DMS paradigm of two double-digit numbers to examine incongruity detection, which is contained in the process of numeric comparison (Gómez-Velázquez et al. 2015). Mental subtraction is significantly associated with number comparison (De Smedt et al. 2009b) and relies on this procedure, even sharing an evolutionary neural system involved in numeric comparison (Prado et al. 2011). In Task 2, a DMS paradigm of the mental subtraction results was applied to assess the correctness of mental subtraction. Therefore, both tasks together would indicate the time window of the subtraction operation in the brain, that is, from the early stage before the subtraction operation to the result decision after subtraction. By comparing different conditions within the same task, the spatiotemporal landscape of mental subtraction could be deciphered.
This study attempted to address the dynamic neural basis of mental subtraction. A simple course of mental subtraction, with few other associated cognitive activities (including numeral recognition, task-dependent attention, and result memorization), was abstracted. Furthermore, the mechanism containing both the structural pattern and the neural activities during this course of mental subtraction has been described. Our results provide important neural evidence highlighting that the gamma (beta/gamma) band activities in the parietal-limbic-temporal lobes mediate mental subtraction.
Participants
The intracranial recordings were obtained from 20 patients (5 women and 15 men) with pharmacologically resistant epilepsy at the Xuanwu Hospital, Capital Medical University, Beijing, China. Their ages ranged from 13 to 52 years (mean = 25.65, SD = 8.95). All patients were implanted with stereotactic intracranial electrodes for diagnostic purposes as part of their evaluation for neurosurgical epilepsy treatment. Only patients who met strict inclusion and exclusion criteria were recruited. Those with abnormal brain structure or destructive lesions, such as tumors or encephalomalacia, were excluded. All patients had normal or corrected-to-normal vision and were right-handed. No clinical seizures occurred during the experiment. The study was approved by the Ethics Committee of the Xuanwu Hospital, Capital Medical University (Project number: [2017]086); further, each patient provided informed consent to participate in the research.
Experimental task
This study used a delayed match-to-sample paradigm. The visual stimuli consisted of a pair of white Arabic double-digit numbers (ranging from 11 to 49) that were sequentially presented on a black background. The first stimulus (S1) was followed by a smaller second stimulus (S2). S1 and S2 were presented on the screen for 300 ms each, with an interstimulus interval of 200 ms. The interval between the end of the previous S2 and the onset of the subsequent S1 was 5 s.
The experiment was divided into two tasks. In Task 1, the participants were required to judge whether S2 was identical to S1. It was divided into 2 rounds, each comprising 40 trials. The stimulus pairs were randomized with an equal occurrence rate of the following two conditions: (i) S1 and S2 were identical (S1 = S2); (ii) S1 and S2 were different (S1 ≠ S2). In Task 2, the participants were required to assess whether the difference between S1 and S2 (S1 − S2) was 3. Task 2 was also divided into 2 rounds, each comprising 60 trials. The stimulus pairs were randomized with an equal occurrence rate of the following three conditions: (iii) the difference between S1 and S2 was 0 (S1 − S2 = 0); (iv) the difference between S1 and S2 was 3 (S1 − S2 = 3); (v) the difference between S1 and S2 was unequal to 3 or 0 (S1 − S2 ≠ 3/0).
The participants were encouraged to concentrate on the center of the screen and to judge the answer to be "YES" or "NO" by pressing the appropriate button on a push pad. They were instructed to respond as quickly and as accurately as possible. The left and right button assignments in each run were counterbalanced. All rounds began with a resting-state period of 3-10 min. The stimuli were presented on a standard liquid crystal display screen using E-Prime software (version 2.0, Psychology Software Tools, PA); the average visual angle of the picture was adjusted to 2.1° at a viewing distance of ~50 cm.
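To make the trial structure concrete, the following Python sketch generates randomized trial lists matching the five conditions described above. It is only an illustration (the experiment itself was run in E-Prime); the function names, trial structure and sampling choices are our own assumptions.

```python
# Illustrative sketch, not the authors' E-Prime script. Condition labels, trial counts
# and the 11-49 stimulus range are taken from the text; everything else is hypothetical.
import random

def make_task1_trials(n_trials=40):
    """Task 1: judge whether S2 is identical to S1 (half identical, half different)."""
    trials = []
    for i in range(n_trials):
        s1 = random.randint(12, 49)           # keep room below S1 for the "different" case
        if i % 2 == 0:                        # condition (i): S1 = S2
            s2, cond = s1, "S1=S2"
        else:                                 # condition (ii): S1 != S2 (and S1 > S2)
            s2, cond = random.randint(11, s1 - 1), "S1!=S2"
        trials.append({"S1": s1, "S2": s2, "condition": cond})
    random.shuffle(trials)
    return trials

def make_task2_trials(n_trials=60):
    """Task 2: judge whether S1 - S2 equals 3 (three equiprobable conditions)."""
    conditions = ["S1-S2=0", "S1-S2=3", "S1-S2!=3/0"] * (n_trials // 3)
    trials = []
    for cond in conditions:
        s1 = random.randint(15, 49)
        if cond == "S1-S2=0":
            s2 = s1
        elif cond == "S1-S2=3":
            s2 = s1 - 3
        else:                                 # any difference other than 0 or 3
            s2 = random.choice([d for d in range(11, s1) if s1 - d != 3])
        trials.append({"S1": s1, "S2": s2, "condition": cond})
    random.shuffle(trials)
    return trials
```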
SEEG recordings
The patients were implanted with 0.8-mm-diameter SEEG electrodes (Sinovation (Beijing) Medical Technology Co., Ltd., Beijing, China). The depth electrodes were semi-rigid platinum/iridium electrodes with 2-mm-long contacts spaced 1.5 mm apart. Each electrode had 8-18 contacts depending on its length. The SEEG was recorded using the Neuroscan system (Scan 4.5; Neurosoft Labs Inc.) with a 128-channel SynAmps EEG/EP amplifier (Compumedics USA Inc., Charlotte, North Carolina, USA). During the recordings, the SEEG signal was referenced to a vertex screw/subdermal electrode and filtered between 0.05 and 500 Hz. The signal was sampled at 2000 Hz. A notch filter was applied at 50 Hz. All data were collected during the interictal stage. The stimulus-triggered electrical pulses were recorded along with the SEEG data for precise synchronization with the stimulus onset.
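The acquisition settings quoted above (2000 Hz sampling, a 0.05-500 Hz band, a 50 Hz notch) can be reproduced offline along the following lines. This is a hedged SciPy sketch of the filtering step only, not the Neuroscan acquisition pipeline; the filter orders and the notch quality factor are assumptions.

```python
# Offline re-filtering sketch for one SEEG channel (illustrative parameters).
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 2000.0  # sampling rate in Hz, as stated in the text

def preprocess_channel(x):
    # 0.05-500 Hz band-pass (4th-order Butterworth, zero-phase)
    b, a = butter(4, [0.05, 500.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, x)
    # 50 Hz notch for power-line interference
    bn, an = iirnotch(w0=50.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, x)

# Example: one minute of synthetic data for a single contact
signal = np.random.randn(int(60 * FS))
clean = preprocess_channel(signal)
```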
Electrode localization
The electrode placement was based solely on clinical requirements and was unaffected by this study's needs. For each patient, we obtained a T1-weighted 1-mm isometric structural MRI scan using a 3-T Siemens scanner. After the implantation, a Siemens computed tomography (CT) scan was acquired. The reconstruction of the SEEG electrodes was performed using Brainstorm (Tadel et al. 2011), which is documented and freely available for download online under the GNU general public license (http://neuroimage.usc.edu/brainstorm). The post-implantation CT was coregistered to the preoperative anatomical MRI scan, so that the CT could be visualized on top of the preoperative MRI while minimizing the localization error due to a potential brain shift caused by surgery and implantation.
The spectrograms were calculated using Morlet's wavelet transform with a linear step of 1 Hz over the range of 3-200 Hz. For each trial, we obtained its Morlet's wavelet transform with a central frequency of 1 Hz and a time resolution (FWHM) of 3 s. The power bands were defined as the theta band (4-7 Hz), alpha band (8-13 Hz), beta band (14-29 Hz), gamma band (30-90 Hz) and high gamma band (91-200 Hz). For specific frequency band analysis, the Hilbert transform was applied to obtain the average power in each frequency section. In particular, for analyzing the high-gamma-band activities (also known as high frequency oscillations, HFOs), after band filtering (91-200 Hz), a threshold for HFOs was set at three standard deviations above the mean baseline, with at least three consecutive peaks required.
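The wavelet and Hilbert steps above can be sketched as follows. This is an illustrative NumPy/SciPy re-implementation of the described logic (Morlet power over 3-200 Hz, Hilbert band power, and an HFO amplitude threshold at baseline mean + 3 SD), not the Brainstorm/MATLAB code actually used; the wavelet normalization and cycle count are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

FS = 2000.0

def morlet_power(x, freqs, n_cycles=7):
    """Wavelet power of signal x for each frequency in `freqs` (Hz), via convolution."""
    power = np.empty((len(freqs), len(x)))
    t = np.arange(-1.0, 1.0, 1.0 / FS)
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalization
        analytic = np.convolve(x, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power

def band_power(x, low, high):
    """Hilbert-envelope power in a given band (e.g. gamma: 30-90 Hz)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    return np.abs(hilbert(filtfilt(b, a, x))) ** 2

def hfo_threshold(x, baseline_idx):
    """Amplitude threshold for HFOs: baseline mean + 3 SD of the 91-200 Hz envelope."""
    env = np.sqrt(band_power(x, 91.0, 200.0))
    return env[baseline_idx].mean() + 3 * env[baseline_idx].std()
```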
The permutation test was utilized to examine the mean SEEG amplitude or time-frequency power for each pair of conditions, because of the non-normal distributions. To reduce the chance of false positives, a false discovery rate (FDR) correction was used to adjust for multiple comparisons at P < 0.05. All statistical tests were two-sided unless stated otherwise. P values < 0.05 were considered statistically significant. Plotting was performed with R and GraphPad Prism (v.8.0; GraphPad Software).
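As a hedged sketch of this statistical procedure, the following Python functions implement a two-sided permutation test on the difference of condition means and a Benjamini-Hochberg FDR correction; they mirror the described logic rather than the authors' exact code, and the number of permutations is an assumption.

```python
import numpy as np

def permutation_p(a, b, n_perm=5000, seed=0):
    """Two-sided permutation p-value for the difference of means between trial sets a and b."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of p-values significant at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    passed = ranked <= q
    mask = np.zeros(len(p), dtype=bool)
    if passed.any():
        last = np.nonzero(passed)[0][-1]      # largest rank satisfying the BH criterion
        mask[order[:last + 1]] = True
    return mask
```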
Behavior analysis and the ERPs recordings of numerical comparison and mental subtraction
The subjects were instructed to complete two tasks, each consisting of two runs. They were presented with a pair of sequenced digital numbers (S1 > S2) and required to assess whether S2 was identical to S1 in Task 1. Subsequently, the participants were requested to examine whether the difference between S1 and S2 (S1 − S2) was equal to 3 in Task 2 (Fig. 1a). Overall, the following five conditions were evaluated: "S1 = S2", "S1 ≠ S2", "S1-S2 = 0", "S1-S2 = 3" and "S1-S2 ≠ 3/0". The statistics for the behavioral data showed that the accuracies in the five conditions were 97.50% (97.50, 100), 96.25% (95.00, 100), 100.00% (97.56, 100.00), 94.87% (87.18, 99.36) and 95.00% (92.50, 100), respectively (Wilcoxon matched-pairs signed rank test; Fig. 1b, upper panel). The RTs, which were the intervals between the onset of S2 and the time when the answer key was pressed, were 472.1 ± 16.99 ms, 561.30 ± 20.53 ms, 600.10 ± 23.10 ms, 654.10 ± 23.65 ms and 720.50 ± 20.73 ms, respectively. All participants performed the two tasks with a high accuracy, and their RT patterns reflected the most widely replicated behavioral effect in numeric cognitive arithmetic.
Subsequently, recording sites were visually identified on the co-registered CT scan and marked in each subject's preoperative MRI native space. The Montreal Neurological Institute (MNI) 152 structural template volume image was used to co-register with individual post-implantation CT scans to obtain the MNI coordinates, following a previously described protocol for the localization of SEEG electrodes in the brain (Ashburner and Friston 2005; Fan et al. 2016). The definition of regions followed the Human Brainnetome Atlas (Fan et al. 2016), which provides 210 fine-grained cortical subregions. The BrainNet Viewer tool was applied to visualize the human brain subregions (Xia et al. 2013).
Preprocessing and data analysis
For the behavior analysis, the data were analyzed with GraphPad Prism (v.8.0; GraphPad Software). A paired t-test was performed to examine the mean reaction time (RT) between each pair of conditions when the data were Gaussian distributed. A Wilcoxon test was conducted to compare the mean accuracy between conditions when the data were not normally distributed. The data are presented as mean ± SEM for Gaussian distributions and as median ± quartile for non-normal distributions. Spearman correlation analyses were used between accuracy and age, accuracy and gender, RT and age, and RT and gender, respectively.
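For illustration, the same tests are available in SciPy; the snippet below runs them on hypothetical per-subject values (the numbers are placeholders, not the study's data).

```python
import numpy as np
from scipy import stats

rt_cond_a = np.array([472.1, 480.3, 455.0, 490.2])   # hypothetical RTs (ms), condition A
rt_cond_b = np.array([561.3, 540.8, 575.1, 590.4])   # hypothetical RTs (ms), condition B
acc_a = np.array([0.975, 1.0, 0.95, 0.975])          # hypothetical accuracies
acc_b = np.array([0.9625, 0.95, 1.0, 0.9375])
ages = np.array([13, 22, 31, 52])

t_stat, p_rt = stats.ttest_rel(rt_cond_a, rt_cond_b)   # paired t-test for normally distributed RTs
w_stat, p_acc = stats.wilcoxon(acc_a, acc_b)           # Wilcoxon test for non-normal accuracies
rho, p_rho = stats.spearmanr(ages, rt_cond_a)          # Spearman correlation, e.g. RT vs. age
```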
For the SEEG analysis, the resting-state SEEG was evaluated by neurological and neurosurgical experts, and recording sites containing ictal or interictal activities were excluded. The raw SEEG data were inspected visually to detect noisy or corrupted channels and exclude them from further analysis. Contacts within the white matter or cerebrospinal fluid were also excluded via co-registration of the post-implantation CT and preoperative MRI images. The eligible data all came from performance with high accuracy (> 75%) during the cognitive tasks (for the workflow, see Supplemental Fig. 1). Finally, 85 electrodes with 348 recording sites in 20 patients were selected for further analysis (for detailed information, see Supplemental Table 1).
All SEEG data analyses were performed in MATLAB 2016b (MathWorks Inc., Natick, MA) using Brainstorm (Tadel et al. 2011) and custom-developed analysis routines. Regarding the ERP analysis, the screened data were digitally filtered with a bandpass of 0.5-40 Hz. The epochs were selected from the 200-ms pre-S1 to 1000-ms post-S2 period, 1700 ms in total for each epoch. The baseline was corrected by the average of the 200-ms pre-S1 to 0.5-ms pre-S1 interval. Sites containing over 30 trials in each condition were used for further analysis.
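A minimal Python/NumPy sketch of this epoching and baseline-correction step is given below; it assumes continuous data and S1-onset sample indices as inputs and is meant only to illustrate the procedure described above (the original analysis was done in MATLAB/Brainstorm).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0

def extract_epochs(data, s1_onsets, pre=0.2, post=1.5):
    """Cut epochs around S1 onsets; `post` = S1 (0.3 s) + ISI (0.2 s) + 1.0 s after S2 onset."""
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, data)
    n_pre, n_post = int(pre * FS), int(post * FS)
    epochs = np.stack([filtered[t - n_pre : t + n_post] for t in s1_onsets])
    baseline = epochs[:, :n_pre].mean(axis=1, keepdims=True)   # 200 ms pre-S1 average
    return epochs - baseline                                    # baseline-corrected ERP epochs
```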
For the time-frequency analyses, the 50 Hz power line interference (including its harmonics) was removed, and the data were segmentally extracted into epochs between the 200-ms pre-S1 and 700-ms post-S2 period (1700 ms in total). The average epochs in the five conditions were rearranged based on the lobar distribution of the sites (Supplemental Fig. 2A). The amplitudes of the average ERP epochs showed significant differences during the cognitive activities in Task 1, in Task 2 (Supplemental Fig. 2B-C), and between Tasks 1 and 2 (Fig. 1e). This suggests that the human brain goes through discriminative cognitive processing during simple numeric comparison and mental subtraction. However, the precise pathway of these distinct cognitive controls remained to be explored.
The implanted electrodes were reconstructed through the co-registered preoperative MRI and the post-implantation CT (Fig. 1c). Out of the total 348 recording sites, 157 (45.11%), 77 (22.13%), 46 (13.22%), and 44 sites (12.64%) were located in the temporal, frontal, insular, and parietal lobes, respectively. In addition, 13 sites (3.74%) were within the occipital lobe, and 11 sites (3.16%) in the limbic lobe (Fig. 1d; for detailed information of the subregional distribution, see Supplemental Table 3).
Temporal-spatial comparison of numerical comparison and subtraction
To explore the temporal-spatial progression of mental subtraction, the ERP amplitudes of each recording site were compared between Tasks 1 and 2, and the FDR-corrected p-values (p < 0.05 lasting over 50 ms) were reordered in a time lapse (Fig. 3a). The chronologically ordered sites were distributed across the brain lobes; among them, the frontal, temporal, and insular lobes showed distinct ERP amplitudes between numeric comparison and subtraction as early as 200 ms after the S1 onset (Fig. 3b). Furthermore, the majority of the sites reflected differences between tasks from 100 ms after the S2 onset onward.
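The "time-lapse reordering" used here can be expressed as a small helper: for each site, find the first sample at which the corrected p-value stays below 0.05 for at least 50 ms, then sort sites by that latency. The sketch below assumes a sites × time matrix of corrected p-values and is illustrative rather than the authors' implementation.

```python
import numpy as np

FS = 2000.0
MIN_RUN = int(0.05 * FS)   # 50 ms expressed in samples

def first_sustained_latency(pvals, alpha=0.05):
    """Index of the first run of p < alpha lasting >= MIN_RUN samples, or None."""
    run = 0
    for i, significant in enumerate(pvals < alpha):
        run = run + 1 if significant else 0
        if run >= MIN_RUN:
            return i - MIN_RUN + 1
    return None

def reorder_sites(p_matrix):
    """p_matrix: sites x time array of corrected p-values; returns site order and latencies."""
    latencies = [first_sustained_latency(p) for p in p_matrix]
    order = sorted(range(len(latencies)),
                   key=lambda k: np.inf if latencies[k] is None else latencies[k])
    return order, latencies
```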
Among these aforementioned sites, those distributed in the paraCL, IFG and MFG of the frontal lobes, the anterior insula (AI) and posterior insula (PI), and the temporal parahippocampal gyrus (paraHG) demonstrated different ERP amplitudes between the two tasks both before S2 and during S2 (Fig. 3c). This might indicate that, beyond subtraction, these regions are involved in other cognitive activities. In contrast, the sites distributed in the frontal preCG, the cingulate gyrus (CG) in the limbic lobe, the postCG, SG, and AG of the parietal lobes, the occipital cuneus gyrus, the fusiform gyrus, and the inferior temporal gyrus (ITG), MTG, and STG of the temporal lobes presented differences between Tasks 1 and 2 during S2 (Fig. 3d). The sites in the parietal lobes emerged in a larger proportion during S2 than before it. These results suggested that the parietal region could play a critical role in mental subtraction in the human brain and that diverse brain regions participated in the consecutive procedures of cognitive control. In particular, recordings from five sites in the SG, one site in the AG, two sites in the fusiform, four sites in the MTG, and two sites in the CG within the parietal-cingulate-temporal cortices initiated the differences after 183 ms from the S2 onset, which might be considered an origin of subtraction based on the aforementioned results. Therefore, the mechanism discriminating numeric comparison from subtraction in the core regions, including the SG, AG, fusiform, MTG, and CG, ought to be clarified.
Gamma band activities undertake digital subtraction chronologically
Brain oscillations at different frequency bands, including the high-gamma band, have been shown to play a key role in various cognitive tasks, including memory, executive control, and attention to internal processing or the external environment (Klimesch 1999; Kawasaki et al. 2010; Gaona et al. 2011; Kucewicz et al. 2014; Akiyama et al. 2017). Therefore, further exploration of the local network in the frequency domain would provide relevant information regarding the neural mechanism of mental subtraction.
Numerical and subtraction results comparison set the time window of digital subtraction
The procedure of mental subtraction was initiated immediately after the onset of S2. However, during the paradigms, the instruction of the task must have affected the participants' attention and motivation. This attentional effect would interfere with the procedure of mental subtraction. Especially in Task 2, two types of incongruency processing were mixed: the incongruency processing of the visually acquired numbers as the precondition for subtraction (Prado et al. 2011; Gómez-Velázquez et al. 2015), and the incongruency processing of discriminating the result from the enquired one (3) after mental subtraction.
To extract the core process of mental subtraction, we attempted to set a time frame for the subtraction in the human brain. Thus, a time-lapse reordering was applied to the heatmap of the FDR-corrected p-values, which resulted from the comparison of the SEEG amplitude of each recording site in conditions 1 vs. 2 ("S1 = S2" vs. "S1 ≠ S2", Fig. 2a), 3 vs. 4 ("S1-S2 = 0" vs. "S1-S2 = 3", Fig. 2c), and 4 vs. 5 ("S1-S2 = 3" vs. "S1-S2 ≠ 3/0", Fig. 2e). Each comparison represented the differences between two cognitive control activities, namely numeric comparison and digital subtraction under the same instruction. In "S1 = S2" vs. "S1 ≠ S2", the comparison only captured the incongruency processing of simple digits. Since the SEEG amplitudes of an IFG site demonstrated the earliest significant difference at 183.5 ms after the S2 onset (Fig. 2b), this result indicated the initiation timepoint of numeric comparison (183.5 ms after S2 onset, Fig. 2a) and the corresponding subregion in the brain (IFG, Fig. 2b). In "S1-S2 = 0" vs. "S1-S2 = 3", the comparison contained the incongruency processing of both digits and subtraction results. Notably, the peak amplitudes of a paracentral lobule (paraCL) site indicated that the earliest difference emerged at 228 ms after the S2 onset (Fig. 2d). In "S1-S2 = 3" vs. "S1-S2 ≠ 3/0", the comparison only captured the incongruency processing of the subtraction results, that is, the result decision after the subtraction operation in the brain. Furthermore, the peak amplitudes of an IFG site showed the earliest difference at 320.5 ms after the S2 onset (Fig. 2f). This result indicated the termination timepoint of the subtraction operation (320.5 ms after S2 onset, Fig. 2e) and the corresponding subregion in the brain (IFG, Fig. 2f). Therefore, the results indicated that the core process of mental subtraction would proceed between the numeric comparison and the subtraction results comparison, that is, between 183.5 ms and 320.5 ms from the S2 onset.
To verify the intrinsic effects among the aforementioned five regions, Spearman correlations were computed on the power at the typical frequency band and time duration of each region. The correlation networks indicated that effective correlations among the five regions existed in both Task 1 and Task 2; however, there were no effective connections between the MTG and CG or between the MTG and fusiform in Task 1, whereas these connections existed in Task 2 (Fig. 5). Notably, most of the correlations, such as the AG-CG, the AG-fusiform, the AG-MTG, the SG-CG, the SG-fusiform, the SG-MTG, and the CG-MTG, were opposite in sign in Task 1 compared with Task 2. Exceptionally, the AG-SG, the CG-fusiform, and the fusiform-MTG correlations were consistently negative in Tasks 1 and 2. These results raise the possibility that the gamma band activities in the SG, CG, fusiform, MTG, and AG follow a causal relationship to mediate the subtraction process, with network connections distinct from those in simple numeric comparison.
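A hedged sketch of this correlation analysis: pairwise Spearman correlations between the band-limited power of the five core regions, using the band/time windows quoted in Fig. 5. The data below are random placeholders, so only the structure of the computation is meaningful.

```python
import numpy as np
from scipy.stats import spearmanr

regions = ["SG", "CG", "fusiform", "MTG", "AG"]
# power[region] would hold the trial-wise power in that region's band/time window
# (e.g. SG: 28-37 Hz during 133-163 ms); here random placeholders stand in for real data.
power = {r: np.random.randn(60) for r in regions}

edges = {}
for i, r1 in enumerate(regions):
    for r2 in regions[i + 1:]:
        rho, p = spearmanr(power[r1], power[r2])
        edges[(r1, r2)] = (rho, p)   # sign of rho gives positive/negative links of the chord diagram
```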
Discussion
Our findings, based on the most extensive and precise description of the local field potentials in mainly the human neocortical regions, revealed several important insights about the neural mechanisms underlying mental subtraction in the human brain.Mental subtraction requires a complex interplay between many brain regions.Omitting the visual perceptual information (early S1 duration), the initiation times of numeric comparison and subtraction results comparison, set a time frame for the core mental subtraction process, which was from 182 ms to 322 ms after the S2 onset.Our results provided evidence that before showing the S2, the arithmetic-dependent attention was a hybrid procedure in the anterior regions, including paraCL, IFG, MFG, AI, PI, and paraHG.Furthermore, after the S2 onset, the middle-toposterior regions were activated to participate into mental subtraction, including the preCG, postCG, SG, AG, cuneus and fusiform gyri, ITG, MTG, STG, and CG.Among these regions, the SG, CG, fusiform, MTG, and AG demonstrated significant differences between two tasks within the time frame of mental subtraction.Moreover, the gamma or beta/ gamma band activities in the five regions were chronologically involved in the mental subtraction process.At this point, the temporal-spatial mechanism of mental subtraction in the human brain was deciphered for the first time.
The power spectra, normalized at each frequency, in the five core regions (the SG, AG, fusiform, MTG and CG), spanning 3-200 Hz, were compared between numeric comparison (Task 1) and subtraction (Task 2) (Fig. 4a). Permutation tests were applied to the power spectra in response to subtraction and numeric comparison. This identified clusters with significant event-related differences from the S1 onset onward for the two tasks (p.cluster < 0.05, Fig. 4b). Sparse clusters in the high gamma range (> 90 Hz) during S2 were found in all five core regions. However, the analysis of high gamma activities (known as high frequency oscillations, HFOs) showed that there were only sparse HFOs during the recordings of Tasks 1 and 2, and there was no significant difference in the HFO rate between the two tasks (Supplemental Fig. 3). Notably, in the fusiform, alpha band activities with a long latency (9-14 Hz, 68 ms before S2 to 263 ms after S2) and early-emerging beta band activities (17-22 Hz, 60-177 ms after S2 onset) demonstrated higher power in Task 2 than in Task 1. Moreover, in the MTG, there were consistent theta and alpha band activities (for theta, 4-8 Hz, 130 ms before S2 to 462 ms after S2; for alpha, 12-15 Hz, 37 ms before S2 to 320 ms after S2), with a higher power in Task 2 than in Task 1. The low frequency activities in these two temporal regions suggested a background of mental subtraction, such as calculation-related attention and focusing during the two different tasks.
Besides the high gamma and low theta/alpha band, the clusters in gamma or beta/gamma ranges in the five regions presented a chronological order (Fig. 4c): in the SG, the gamma activities in the range of 28-37 Hz at a short latency (133-231 ms of S2 onset) indicated a higher power in Task 1 than in Task 2; in the CG, the gamma activities in the range of 56-66 Hz at a short latency (178-203 ms of S2 onset) showed a greater power in Task 1 than that in Task 2; in the fusiform gyrus, the beta/gamma activities in the range of 24-31 Hz at a latency of 185-231 ms demonstrated a higher power in Task 2 than that in Task 1; in the MTG, the gamma activities in the range of 30-41 Hz at a latency of 189-268 ms showed a greater power in Task 2 than that in Task 1; and in the AG, the gamma activities in the range of 37-42 Hz at a latency of 322-355 ms showed a greater power in Task 2 than that in Task 1 (Fig. 4d).
The power of the activities at the identified frequency bands produced differences between numeric comparison and subtraction sequentially. This raised the question as to whether those clusters at a short latency reflect a modulation of oscillations. To address this query, we examined the ERP latency of the sites whose amplitudes showed differences during 183-322 ms after the S2 onset. The latency of these sites indicated differences in the following order (early to late): the MTG, SG, CG, fusiform, and AG (Fig. 3d). These analyses demonstrated that the gamma or beta/gamma power in the five regions might not be driven by the phase-locked ERP activities. Therefore, we refer to those chronological frequency power changes between Tasks 1 and 2 as activities rather than oscillations.
Entering the S2 duration was followed by the goal-oriented proceeding of the numeric comparison and mental subtraction tasks. In addition to the regions mentioned above, more regions, especially the parietal and temporal regions, were typically involved in this procedure. According to the order of latency of the sites that showed differences between Tasks 1 and 2 (early to late), the caudoventral ITG showed a higher peak amplitude for subtraction than for numbers as soon as S2 was revealed. However, recent studies using intracranial EEG have found the posterior ITG to be activated during the visual perception of numbers, spreading to its adjacent connections during calculation (Pinheiro-Chagas et al. 2018), which might be later than the time of the S2 onset. Therefore, the ITG would play a more critical role in calculation than in numbers. Considering the SMA's role prior to the S2 onset, the ITG and STG were found to reflect attentional orienting toward the fingers when performing arithmetic problems (Proverbio and Carminati 2019). The other selected regions with different reactions between numbers and subtraction, including the postCG, preCG, SG, AG, occipital cuneus gyrus, fusiform, CG, and MTG, have been reported to participate in calculation in the human brain (Ischebeck et al. 2006, 2009; Abd Hamid et al. 2011; Arsalidou and Taylor 2011; Ni et al. 2011; Liu et al. 2017). Among the regions that began to discriminate numbers and subtraction, it was thus expected to observe the mechanism of cooperation of the core regions during the 182-322 ms time window of S2, when the critical mental subtraction was supposed to proceed.
Several results from the frequency analysis revealed that various band activities were involved in calculation. Most studies focused on the theta and alpha band activities in the frontoparietal regions in subtraction (De Smedt et al. 2009a; Kitaura et al. 2017). However, the involvement of the beta and gamma bands in arithmetic remains insufficiently known. In mixed arithmetic problem solving, both the alpha and beta band powers in the frontal-parietal network were suppressed as a function of attention load (Lin et al. 2015). Furthermore, in a serial subtraction task, the gamma ERS increased in the right IPS and the gamma ERD decreased in the IFG using MEG techniques (Ishii et al. 2014). Our findings provided evidence that, as time progressed during the critical time window of mental subtraction, the power of gamma band activities changed temporally and spatially in mental subtraction compared with basic numeric processing, along with changes of theta and alpha band activities in the temporal lobes. This emphasized the important role of the gamma band activity in mental subtraction, especially in the parietal and temporal cortices. This neural basis of the gamma-mediated information flow in the posterior cortex might provide novel insights on higher cognitive functions besides subtraction.
Converging evidence has identified the frontoparietal network as the main cortical frame that supports mental arithmetic. Using fMRI, regions including the IFG, insula, paraCL, left fusiform gyrus, visual areas, superior and inferior parietal lobules, and CG were found to be common to both number and calculation tasks; additionally, the AG, MFG, and SFG showed activity in calculation tasks (Arsalidou and Taylor 2011). However, this technique cannot offer a high temporal resolution during tasks. Our results indicated that the frontal regions participate from the early stage of mental subtraction (before S2), while the parietal regions are dominantly active during the subtraction proceeding (during S2). The time frame set by different conditions within the same tasks could shield the course of mental subtraction from interference by the external instruction-driven motivation and intrinsic attention.
In our study, the ERPs in both the IFG and MFG presented different peak amplitudes for numbers than for subtraction before the calculations. This result could be a verification that the frontal lobes underlie strategy choice and planning in mathematical processes (Dehaene and Cohen 1997). It has been proposed that the posterior-anterior progression in the insula could play a hub role before mental subtraction (Chang et al. 2013). The distinct reactions of the PI, following the AI, in the ERPs to Tasks 1 and 2 suggested that they are responsible for switching the brain networks during information processing. As the hippocampus' neighboring structure, the paraHG is thought to be associated with information recollection to retrieve the subtraction fact rather than the previous numbers (Bloechle et al. 2016). The paraCL, which is often referred to as the SMA, showed a higher peak amplitude shortly before the S2 onset in subtraction than in numbers. Studies have provided evidence for shared regions, including the SMA, between arithmetic and finger representation (Michaux et al. 2013; Berteletti and Booth 2015). Considering the SMA's crucial role in representations of finger movements (Diedrichsen et al. 2013), the corresponding findings indicate the possibility that it underlies finger perception when subjects engage in subtraction problem-solving. Thus, before the core subtraction proceeds in the human brain, several regions cooperate to solve a mathematical problem, including choosing the strategy, preparing for fact retrieval, and switching networks.
Declarations
Competing interests The authors have no competing interests to declare that are relevant to the content of this article.
Ethics approval
The study was approved by the Ethics Committee of the Xuanwu Hospital, Capital Medical University (Project number: [2017]086).
Consent Each patient provided informed consent to participate in the research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
There are, of course, several limitations that should be mentioned.First, all participants were implanted with unilateral depth electrodes, and we could not resolve the lateralization of these responses or assess whether these effects were larger in the left or the right hemisphere.Second, we attempted to extract the cognitive activity related to mental subtraction by comparing it with that associated with the numeric incongruity processing.The two tasks could undergo the same visual perception of the digital Arabic numbers, however, with different instructional attention and memory.Therefore, we failed to provide a direct evidence for precise initiation of subtraction operation.Third, to decrease the problem size, the participants were required to respond by pressing the key to indicate the subtraction solution was the same as number 3. In regular mental subtraction, participants would judge whether the calculation result equal to the random correct answer.Future research would need to generally examine how the relationship between intact number processing and mental arithmetic is modulated by strategy on a trial-by-trial basis.Last, we would like to acknowledge that our exploration was data driven.Other attempts of neural modulation, such as the corticocortical evoked potentials originating from SEEG, could be helpful for verifying the mechanism of dynamic mental subtraction in the human brain.Hence, further investigation may provide stronger and additionally detailed evidence of this relationship.
In summary, using SEEG recordings under the numeric subtraction tasks, our results confirmed and extended the findings of previous studies, indicating that mental subtraction takes place within a critical time window in the human brain. Moreover, this study's findings suggested that before this window, the anterior cortex, including the paraCL, IFG, MFG, AI, PI, and paraHG, is dominantly involved. During the window, the gamma band activities act as a crossover in the posterior regions, including the SG, CG, fusiform, MTG, and AG within the parietal-cingulate-temporal cortices, to carry out the core procedure of mental subtraction. Our results complement previous work on numeric comparison and mental subtraction by providing deeper insights into the neural basis of mental arithmetic. The temporal-spatial mechanism underlying mental arithmetic might provide essential insights with respect to higher cognitive neuroscience.
Figure 1
Figure 1 Experimental design and the SEEG recording in the human brain.A, the illustration of Tasks 1 and 2. The subjects confirmed the answer ("YES" or "NO") for each task by pressing the right and left buttons of the mouse that had been balanced.In one trial, the paired digit (S1 > S2) that ranged from 11 to 49 were sequentially appeared for 300 ms respectively, with an interval of 200 ms.The interval between two trials was 5 s.Five conditions ("S1=S2", "S1≠S2", "S1-S2=0", "S1-S2=3" and "S1-S2≠3/0") were contained in two tasks.B, behavior analysis of five conditions in Task 1 and 2. Upper panel: the accuracies of the five conditions.The data were presented as median ± quartile.A Wilcoxon test was performed to examine the mean correct rate for each of the two conditions.Bottom panel: the reaction times of the five conditions.The data were presented as mean ± SEM.A paired t-test was conducted to assess the mean reaction time for each of the two conditions.* p<0/05, ** p<0.01 and *** p<0.001.C, the example reconstruction of the depth electrodes into the brain (Patient #15).The surface of the peripheral images (left-top) show the reconstruction of eight electrodes into the brain of Patients #15.The lateral view (right-top), the coronal view (left-bottom) and the top views (right-bottom) of the reconstructed electrode were based on the three-dimensional co-registered MRI.D, the distribution counts of the 348 SEEG recording sites in the cortex lobes.E, the average ERPs amplitudes of total 348 electrodes of Task 1 (green line) and 2 (red line), with the shadow as the SEM, respectively.The grey rectangle indicated the durations of S1 and S2.The black bar presented the t-test between Tasks 1 and 2 with p<0.01.
Figure 2
Figure 2 The incongruent processing of the numerical and subtraction comparison.A, left panel: the heatmap of the p-value from the nonparametric test of the SEEG amplitude of the recording sites examined in conditions 1 vs. 2 (S1=S2 vs. S1≠S2) after the S2 onset.Each row represented one recording site.The p-values were ordered as the occurrence time when the FDR corrected p<0.05 for individual recording sites.The values of p>0.05 were shown as the background (blue).The shadow rectangle indicated the duration of S2.Right panel: the time lapse of each recording site from the heatmap on the left panel with p<0.05.The different colored bars represented cortical lobes, in which the recording sites were distributed: pink, frontal lobes; blue, parietal lobes; purple, temporal lobes; green, occipital lobes; orange, insular lobes and yellow, limbic lobes.The vertical dotted lines indicated the duration of S2.B, the average trace of the ERPs of the earliest recoding site with p<0.05 in (A), under the conditions of S1=S2 (grey line) and S1≠S2 (black line).The site, located at the IFG, was indicated as the yellow arrow in the sagittal MRI individually (insertion).C, left panel: the heatmap of the p-value from the nonparametric test of the SEEG amplitude of the recording sites examined in conditions 3 vs. 4 (S1-S2=0 vs. S1-S2=3) after the S2 onset.Each row represented one recording site.The p-values were ordered as the occurrence time when the FDR corrected p<0.05 for individual recording sites.The values of p>0.05 were shown as the background (blue).The shadow rectangle indicated the duration of S2.Right panel: the time lapse of each recording site from the heatmap on the left panel, with p<0.05.
Figure 3
Figure 3 The comparison of numerical comparison and subtraction.A, the heatmap of p-value from the nonparametric test of the SEEG amplitude of 348 recording sites examined in Tasks 1 (numerical comparison) vs. 2 (digital subtraction).Each row represented one recording site.The shadow rectangles indicated the durations of S1 and S2, respectively.The p values were ordered as the occurrence time when the FDR corrected p<0.05 (continuous for 50 ms) for individual recording sites.The values with p>0.05 were shown as the background (blue).B, the distribution counts of the recording sites at different time periods after the S1 onset in (A).The interval time was 100 ms.The shadow rectangles signified the duration of S1 and S2, respectively.The different colored bars represented cortical lobes, in which the recording sites were distributed: pink, frontal lobes; blue, parietal lobes; purple, temporal lobes; green, occipital lobes; orange, insular lobes; and yellow, limbic lobes.C, the example the ERPs traces of the recording sites distributed in the paraCL, IFG, MFG, AI, PI, and paraHG.The ERPs amplitudes were
Figure 4
Figure 4 Time-frequency analysis for numerical comparison and subtraction.A, time-frequency representations of the power response relative to Tasks 2 and 1 of the earliest recording sites, with discriminable ERPs amplitudes of Task 2 from 1, distributed at the SG, CG, fusiform gyrus, MTG, and AG, respectively.The black lines underneath the heatmap indicated durations of S1 and S2, respectively.B, time-frequency representation of the power response difference between Tasks 2 and 1 of the five regions in (A), showing significant decreased (blue) or increased (red) activity.Significant clusters (FDR
Figure 5
Figure5Correlation of five core regions in numerical comparison and subtraction.Chord diagrams of correlations among the typical gamma band power of the core regions, including SG, CG, fusiform, MTG, and AG, in Task 1 and 2, respectively.The Spearman correlation were applied in each two regions among SG (power of 28-37 Hz, during 133-163 ms), CG (power of 56-66 Hz, during 178-208 ms), fusiform (power of 24-31 Hz, during 185-315ms), MTG (power of 30-41 Hz, during 189-219 ms), and AG (power of 37-42 Hz, during 322-352 ms), respectively.The average correlation coefficient during the initiated 30 ms of five regions were plotted, with the red and blue link indicated the positive and negative correlation, respectively. | 2023-03-01T16:02:06.208Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "582865ea9ad58dabe50684461ca5fac3bc50a91f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11571-023-09937-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7981f089927da67a3e3b8496cb48a17d2d960e2",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
2161668 | pes2o/s2orc | v3-fos-license | Physisorption kinetics of electrons at plasma boundaries
Plasma-boundaries floating in an ionized gas are usually negatively charged. They accumulate electrons more efficiently than ions leading to the formation of a quasi-stationary electron film at the boundaries. We propose to interpret the build-up of surface charges at inert plasma boundaries, where other surface modifications, for instance, implantation of particles and reconstruction or destruction of the surface due to impact of high energy particles can be neglected, as a physisorption process in front of the wall. The electron sticking coefficient se and the electron desorption time τe, which play an important role in determining the quasi-stationary surface charge, and about which little is empirically and theoretically known, can then be calculated from microscopic models for the electron-wall interaction. Irrespective of the sophistication of the models, the static part of the electron-wall interaction determines the binding energy of the electron, whereas inelastic processes at the wall determine se and τe. As an illustration, we calculate se and τe for a metal, using the simplest model in which the static part of the electron-metal interaction is approximated by the classical image potential. Assuming electrons from the plasma to loose (gain) energy at the surface by creating (annihilating) electron-hole pairs in the metal, which is treated as a jellium half-space with an infinitely high workfunction, we obtain se ≈ 10−4 and τe ≈ 10−2 s. The product seτe ≈ 10−6 s has the order of magnitude expected from our earlier results for the charge of dust particles in a plasma but individually se is unexpectedly small and τe is somewhat large. The former is a consequence of the small matrix elements occurring in the simple model while the latter is due to the large binding energy of the electron. More sophisticated theoretical investigations, but also experimental support, are clearly needed because if se is indeed as small as our exploratory calculation suggests, it would have severe consequences for the understanding of the formation of surface charges at plasma boundaries. To identify what we believe are key issues of the electronic microphysics at inert plasma boundaries and to inspire other groups to join us on our journey is the purpose of this colloquial presentation. PACS. 52.27.Lw Dusty or complex plasmas; plasma crystals – 52.40.Hf Plasma-material interactions; boundary layer effects – 68.43.-h Chemisorption/physisorption: adsorbates on surfaces – 73.20.-r Electron states at surfaces and interfaces
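To put rough numbers on the quantities quoted in this abstract, the snippet below evaluates the textbook Rydberg-like series of image states supported by the classical image potential in front of a perfect conductor, E_n = -Ry/(16 n^2) ≈ -0.85 eV/n^2, and the product s_e·τ_e from the values given above. The formula is the standard image-state result and is used here only as an order-of-magnitude illustration, not as a statement of this paper's full model.

```python
# Order-of-magnitude check, assuming the standard image-state series for a perfect conductor.
RYDBERG_EV = 13.606

def image_state_energy(n):
    """Binding energy (eV) of the n-th image state for the classical image potential."""
    return -RYDBERG_EV / (16.0 * n**2)

for n in (1, 2, 3):
    print(f"E_{n} = {image_state_energy(n):.3f} eV")   # E_1 ~ -0.85 eV

s_e, tau_e = 1e-4, 1e-2          # sticking coefficient and desorption time (s), values from the abstract
print(f"s_e * tau_e = {s_e * tau_e:.0e} s")            # ~1e-6 s, as stated above
```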
Introduction

Low-temperature plasma physics is undoubtedly an applied science, driven by the ever increasing demand for plasma-assisted surface modification processes and environmentally safe, low-power-consuming lighting devices. At the same time, however, the physics of gas discharges is rich in fundamental problems which are of broader interest.
From a formal point of view, a gas discharge is an externally driven, bounded, reactive multicomponent system. It contains, besides electrons and ions, chemically reactive atoms and/or molecules strongly interacting with each other and with external (wall of the discharge vessel) as well as internal (nm- to µm-sized solid particles) boundaries. Like in any reactive system, elementary collision processes (elastic, inelastic, and reactive), occurring on a microscopic scale, determine in conjunction with external control parameters the global properties of the system on the macroscopic scale. However, whereas in an ordinary chemical reactor all constituents are neutral, a gas discharge contains also charged constituents. There are thus at least two macroscopic scales: the electromagnetic scale, where screening and sheath formation take place [1,2], and the extension of the vessel. Since the observed physical properties of a gas discharge emerge from processes occurring on at least three different length (and time) scales - one microscopic and two macroscopic scales - the starting point of any quantitative description is a multiple-scale analysis, even if it is not explicitly performed. Being externally driven, low-temperature plasmas are moreover far from thermal equilibrium and, like other dissipative systems, feature a great variety of self-organization phenomena [3,4]. Finally, and this sets the theme of this colloquium, low-temperature gas discharges, in contrast to magnetically confined high-temperature fusion plasmas, are directly bounded by massive macroscopic objects. Thus, they strongly interact with solids.
The plasma-solid interaction is of course at the core of all plasma-assisted surface processes (deposition, implantation, sputtering, etching, etc.) [5]. Of more fundamental interest, however, is the situation of a chemically inert (i.e., no surface modification due to chemical processes, no reconstruction or destruction of the surface due to high-energy particles, etc.) floating surface, where the interaction with the plasma leads only to the build-up of surface charges and thus to a quasi-two-dimensional electron film which may have unique properties, similar to electrons trapped on a liquid helium surface [6] or to electrons confined in a semiconductor heterojunction [7].
In plasma-physical settings surface charges play a role in atmospheric plasmas, where the charge of nm-sized aerosols [8] is of interest, in space-bound plasmas, where surface charges of spacecraft [9,10] and of interplanetary and interstellar dust particles [11,12] have been extensively studied, and in laboratory dusty plasmas, where the study of self-organization of highly negatively charged, strongly interacting µm-sized dust particles has become an extremely active area of current plasma research [13,14,15,16,17,18,19]. Surface charges also affect the physics of dielectric barrier discharges - a discharge type of huge technological impact [20,21,22,23,24,25].
That surface charges at plasma boundaries could be considered as a thin film of adsorbed electrons ("surface plasma") in contact with the bulk plasma was originally suggested by Emeleus and Coulter in connection with their investigations of wall recombination in the positive column [26]. Later, Behnke and coworkers [27] used this idea to phenomenologically construct boundary conditions for the kinetic equations describing glow discharges, and Kersten et al. [28] employed the notion of a surface plasma to study the charging of dust particles in a plasma.
Although the surface plasma as a physical entity with its own physical properties is implicitly contained in these investigations, a microscopic description of its formation, dynamics, and structure was not attempted. First steps in this direction were taken by us in a short note [29]. The purpose of this colloquium is, on the one hand, to extend these considerations, in particular, to identify the surface physics which needs to be resolved before a quantitative microscopic theory of the surface plasma can be constructed and to convey, on the other hand, our conviction that the concept itself is not empty. On the contrary, it puts questions center stage which are of fundamental interest. To list just a few:
• What forces bind electrons and ions to the plasma boundary?
• How do electrons and ions dissipate energy when approaching the boundary?
• What is the probability with which an electron sticks at or desorbs from the boundary?
• What is the density and temperature of the surface plasma and are there any collective properties?
• What is the mobility for the lateral motion of electrons and ions along the wall and can it be externally controlled?
• How does all this affect electron-ion recombination and secondary electron emission on chemically inert plasma boundaries?

Electrons and ions are collected by the boundary with fluxes s_{e,i} j_{e,i}^plasma, where s_{e,i} are the sticking coefficients and j_{e,i}^plasma are the fluxes of plasma electrons and ions hitting the boundary. Electrons and ions may thermally desorb from the boundary with rates τ_{e,i}^{-1}, where τ_{e,i} are the desorption times. They may also move along the surface with mobilities µ_{e,i}, which in turn may affect the probability α_R with which ions recombine with electrons at the wall. All these processes occur in a layer whose thickness d is at most a few microns, that is, on a scale where the standard kinetic description of the gas discharge based on the Boltzmann-Poisson system breaks down. Thus, the above listed questions can only be addressed from a quantum-mechanical point of view.
Of particular importance for the quantitative description of the build-up of a surface plasma are the sticking coefficients s_{e,i} and the desorption times τ_{e,i}. Little is quantitatively known about these parameters, in particular with respect to the electrons. Very often, s_e ≈ s_i ≈ 0.1-1 and τ_e^{-1} = τ_i^{-1} = 0 is used without further justification. Below, we sketch a quantum-kinetic approach to calculate s_e and τ_e from a simple microscopic model for the plasma-boundary interaction which treats the interaction of electrons with plasma boundaries as a physisorption process [30,31,32,33,34,35,36,37,38,39,40] in the polarization-induced attractive part of the surface potential. Electron surface states [41,42,43,44,45,46,47,48,49,50,51,52,53,54,55], at most a few nm away from the boundary, will thus play a central role, as will surface-bound scattering processes which control electron energy relaxation at the surface and thus electron sticking and desorption.
Although the forces and scales are different for ions, they behave conceptually very similarly. The main difference between electrons and ions is that, as soon as the surface has collected some electrons, because of the faster bombardment with electrons than with ions, the surface potential for ions is the attractive Coulomb potential (most probably screened, but that is irrelevant for what follows). Hence, ion surface states develop in the tail of the long-ranged Coulomb potential and thus deep in the sheath of the grain, far away from its surface. The microscopic processes driving ion energy relaxation and eventually ion sticking and desorption are thus not surface- but plasma-bound.
In the microscopic approach presented below, we focus on the physics occurring at most a few nm away from the boundary. We will therefore not give here a quantitative treatment of the physisorption kinetics of ions in the long-ranged Coulomb potential. However, when it comes to the calculation of the surface charge via phenomenological equations connecting the quantum with the classical level, we have to make some assumptions about the ion dynamics and kinetics. We will then discuss ions qualitatively. The assumptions made for ions, which are somewhat in conflict with what other groups expect [56,57,58], do not, however, affect the microscopic calculation of s_e and τ_e.
The outline of this colloquium is as follows. In the next section we describe and put into context the surface model for the charge of a floating dust particle in a plasma we developed in [29], because it motivated the physisorption-inspired microscopic treatment of electrons at plasma boundaries discussed in this colloquium. A qualitative description of the ion kinetics in the vicinity of a spherical grain is also included in this section. Section 3 describes a microscopic model for the interaction of electrons with plasma boundaries. Specified to a metallic boundary, it will then be used to calculate the electron sticking coefficient s_e and the electron desorption time τ_e. Key issues of the microscopic description of the electron-wall interaction (surface potential, coupling to elementary excitations of the solid, etc.) will be identified, and numerical results will be presented and discussed. A critique of our assumptions is given at the end of section 3 and should be understood as a list of to-dos. We close the presentation in section 4 with a few concluding remarks. Mathematical details interrupting the presentation, which is meant to be read in order because it successively constructs a case, are relegated to three appendices.
Charge of a dust particle in a plasma
The physisorption-inspired treatment of surface charges originated from our attempt to calculate the charge of a spherical µm-sized floating dust particle in a quiescent plasma, taking not only plasma-induced but also surface-induced processes into account [29]. Here we have to clearly distinguish between the assumptions made to construct a constituting equation for the surface charge, which by necessity has to connect the quantum mechanics occurring at the surface with the classical physics determining the plasma fluxes, and the assumptions made to obtain estimates for the surface parameters appearing in this equation. The microscopic calculation of the electron surface parameters s_e and τ_e presented in the next sections is of course independent of the assumptions about the ion dynamics and kinetics as well as of the phenomenological nature of the constituting equation for the surface charge.
Rate equations
First, we will discuss the surface model proposed in [29] from the perspective of the rate equations corresponding to the elementary processes shown in Fig. 1. Thereby we also identify the assumptions, in particular with respect to the surface properties, which are usually made in standard calculations of surface charges.
To be specific, consider a spherical dust particle with radius R. The quasi-stationary charge of the grain (we measure charge in units of −e) follows from the electron and ion surface densities σ_{e,i}, which obey the quasi-stationary (dσ_{e,i}/dt = 0) rate equations [28],
0 = s_e j_e^plasma − τ_e^{-1} σ_e − α_R σ_e σ_i ,    (2)
0 = s_i j_i^plasma − τ_i^{-1} σ_i − α_R σ_e σ_i ,    (3)
where j_{e,i}^plasma, s_{e,i}, τ_{e,i}, and α_R denote, respectively, the fluxes of electrons and ions hitting the grain surface from the plasma, the electron and ion sticking coefficients, the electron and ion desorption times, and the electron-ion recombination coefficient.¹ In order to derive the standard criterion invoked to determine the quasi-stationary grain charge, we now assume, in contrast to what we do in our model [29] (see also below), that both electrons and ions reach the surface of the grain. In that case, both Eq. (2) and Eq. (3) should be interpreted as flux balances on the grain surface. At quasi-stationarity, the grain is charged to the floating potential Ū. In energy units, Ū = Z_p e²/R = 2Z_p R_0 a_B/R, with R_0 the Rydberg energy and a_B the Bohr radius. Because the grain temperature k_B T_s ≪ Ū, the ion desorption rate τ_i^{-1} ≈ 0. Equation (3) reduces therefore to α_R σ_e σ_i = s_i j_i^plasma, which transforms Eq. (2) into s_e j_e^plasma = s_i j_i^plasma + τ_e^{-1} σ, provided σ ≈ σ_e, which is usually the case. In the standard approach the grain surface is moreover assumed to be a perfect absorber for both species, that is, s_e = s_i = 1 and τ_e^{-1} = τ_i^{-1} = 0. The quasi-stationary charge Z_p of the grain is then obtained from the condition
j_e^plasma(Z_p) = j_i^plasma(Z_p) ,    (4)
where we explicitly indicated the dependence of the plasma fluxes on the grain charge. Calculations of the grain charge differ primarily in the approximations made for the plasma fluxes j_{e,i}^plasma. For the repelled species, usually collisionless electrons, the flux can be obtained from Poisson's equation and the collisionless Boltzmann equation, using trajectory tracing techniques based on Liouville's theorem and energy and momentum conservation [59,60,61]. The flux for the attracted species, usually collisional ions, is much harder to obtain. Unlike the electron flux, the ion flux depends not only on the field of the macroscopic body but also on scattering processes due to the surrounding plasma, which throughout we assume to be quiescent. For weak ion collisionalities the charge-exchange enhanced ion flux model proposed by Lampe and coworkers [56,57,58] is usually used. Its validity has, however, been questioned by Tskhakaya and coworkers [62,63]. We come back to Lampe and coworkers' approach below when we discuss representative results for our surface model.

¹ The rate equations connecting the plasma fluxes j_{e,i}^plasma and surface densities σ_{e,i} with the surface parameters s_{e,i}, τ_{e,i}, and α_R are phenomenological. They should be derived from Boltzmann equations containing surface scattering integrals which encapsulate the quantum mechanics responsible for sticking, desorption, and recombination.

Fig. 1. Elementary processes at the grain boundary: electrons and ions are collected with the fluxes s_{e,i} j_{e,i}^plasma and balanced with the respective desorption fluxes τ_{e,i}^{-1} σ_{e,i}, where s_{e,i} and τ_{e,i} denote, respectively, sticking coefficients and desorption times [29].
Hence, irrespective of the approximations made for the plasma fluxes, the standard approach of calculating surface charges is based on three assumptions about the surface physics:
• Both ions and electrons reach the surface, even on the microscopic scale.
• s_e = s_i = 1, or at least s_e = s_i.
• τ_e^{-1} = 0, or at least τ_e^{-1} σ_e ≪ s_i j_i^plasma = α_R σ_e σ_i.
We basically challenge all three assumptions. First, electrons and ions do not necessarily reach the surface on the microscopic scale; they are trapped in surface states. Because of differences in the potential energy, mass, and size, the spatial extension of the electron and ion bound states, and thus the average distance of electrons and ions from the boundary, is expected to be different. On the microscopic scale, electrons and ions trapped to the surface should be spatially separated.
Second, s_e = s_i is quite unlikely. Usually, heavy particles, such as ions, couple rather strongly to vibrational excitations of the boundary [36,39]. They can thus dissipate energy very efficiently, which usually leads to a large sticking coefficient. Light particles, like electrons, on the other hand, couple only very weakly to vibrations of the solid. On this basis, we would expect s_e ≪ s_i. To what extent the coupling to other elementary excitations of the boundary (plasmons, electron-hole pairs, ...) can compensate for the inefficient coupling to lattice vibrations is part of our investigations.
Third, if ions and electrons are indeed spatially separated, the two rate equations should in fact be interpreted as flux balances on two different effective surfaces (viz. the two closed circles in Fig. 2). In that case, α_R σ_i σ_e ≪ σ_{e,i}/τ_{e,i}, and the surface charge follows from balancing the electron desorption flux, τ_e^{-1} σ_e, with the electron collection flux, s_e j_e^plasma. The corresponding balance of ion fluxes, to be taken on an effective surface surrounding the grain, would then yield a partial screening charge Z_i. Within this scenario, we would thus obtain
Z_p = 4πr_e² · (sτ)_e · j_e^plasma(Z_p) ,    (5)
Z_i = 4πr_i² · (sτ)_i · j_i^plasma ,    (6)
with r_e ≈ R and r_i > r_e.
The surface physics is now encoded in (sτ)_{e,i}. These products depend on the material and the plasma. They could be used as adjustable parameters. A justification of the assumptions made in deriving Eqs. (5) and (6) can, however, only come from a microscopic calculation of (sτ)_{e,i}.
For electrons, various aspects of this calculation will be discussed in the following sections.
Semi-microscopic approach
Before we discuss the complete microscopic calculation of s_e and τ_e we summarize the semi-microscopic approach taken in Ref. [29]. This prepares the ground for the microscopic considerations to follow and demonstrates that Eqs. (5) and (6) give results which compare favorably with experimental data.
The approach we adopted in Ref. [29] is based on a quantum-mechanical investigation of the bound states of a negatively charged particle in a gas discharge. For that purpose, we considered the classical interaction between an electron (ion) with charge −e (+e) and a spherical particle with radius R, dielectric constant ǫ, and charge Z_p. The interaction potential then contains a short-ranged polarization-induced part arising from the electric boundary conditions at the grain surface - the classical image potential - and a long-ranged Coulomb tail due to the particle's charge [64,65].
The polarization-induced part of the potential will be discussed from a quantum-mechanical point of view in appendix A. Concerning the Coulomb tail we may add that it arises from the interaction between the approaching electron and the electrons already residing on the grain. From many-body theory it is known that this interaction can be rather involved because the attached electrons may respond to the approaching one; we neglect this possibility. The Coulomb part is then simply the potential of a sphere (plane) with charge Z_p, that is, a mean-field interaction.
Measuring distances from the grain surface in units of R and energies in units of Ū , the interaction energy at x = r/R − 1 > x b , where x b is a lower cut-off, below which the grain boundary cannot be described as a perfect surface anymore, reads
V_{e,i}(x) = ±1/(1+x) − ξ/[x(1+x)²(2+x)]
          ≈ 1 − ξ/2x      (electron)
          ≈ −1/(1+x)      (ion)    (7)
with ξ = (ǫ − 1)/2(ǫ + 1)Z_p. The second line in Eq. (7) is an approximation which describes the relevant parts of the potential very well and permits an analytical calculation of the surface states. In Fig. 3 we plot V_{e,i}(x) for a melamine-formaldehyde (MF) particle (ǫ = 8, R = 1 µm, and Z_p = 1500) embedded in a 100 Pa neon discharge with plasma density n_e = n_i = 0.39 × 10⁹ cm⁻³, ion temperature k_B T_i = 0.026 eV, and electron temperature k_B T_e = 6.3 eV [15]. From the electron energy distribution, f_e(E), we see that the discharge contains enough electrons which can overcome the Coulomb barrier of the dust particle. These electrons may get bound in the polarization-induced short-range part of the potential, well described by the approximate expression, provided they can get rid of their kinetic energy. Ions, on the other hand, being cold (see f_i(E) in Fig. 3) and having a finite radius r_i^size/R = x_i^size ∼ 10⁻⁴, cannot explore the potential at short distances. For them, the long-range Coulomb tail is most relevant, which is again well described by the approximate expression. Writing the electron eigenvalue as ε_e = 1 − α_e ξ/4k² with α_e = (ǫ − 1)R/4(ǫ + 1)a_B and the ion eigenvalue as ε_i = −α_i/2k² with α_i = m_i R Z_p/m_e a_B, where m_e and m_i are the electron and ion mass, respectively, the radial Schrödinger equations with the approximate potentials read
d²u_{e,i}/dx² + [ −α_{e,i}²/k² + Ṽ_{e,i}(x) − l(l+1)/(1+x)² ] u_{e,i} = 0    (8)
where Ṽ_e(x) = 2α_e/x and Ṽ_i(x) = 2α_i/(1+x).
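To make the shape of the electron potential concrete, the short sketch below evaluates the full expression of Eq. (7) and its short-range approximation 1 − ξ/2x for the melamine-formaldehyde parameters quoted above (ǫ = 8, Z_p = 1500). It is a numerical illustration only, not part of the original analysis; as expected, the approximation is excellent near the surface and deteriorates at larger x.

```python
import numpy as np

# Illustrative evaluation of Eq. (7) for an MF grain
# (eps = 8, Z_p = 1500; distances in units of R, energies in units of U-bar).
eps, Z_p = 8.0, 1500.0
xi = (eps - 1.0) / (2.0 * (eps + 1.0) * Z_p)

def V_electron(x):
    """Full electron potential of Eq. (7): Coulomb barrier plus polarization term."""
    return 1.0 / (1.0 + x) - xi / (x * (1.0 + x)**2 * (2.0 + x))

def V_electron_approx(x):
    """Short-range approximation used for the surface states: 1 - xi/(2x)."""
    return 1.0 - xi / (2.0 * x)

for x in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"x = {x:8.1e}   V_e = {V_electron(x):+.4f}   approx = {V_electron_approx(x):+.4f}")
```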
For bound states, the wavefunctions have to vanish for x → ∞. The boundary condition at x_b depends on the potential for x ≤ x_b, that is, on the potential within the solid (which is different for electrons and ions). Matching the solutions for x < x_b and x > x_b at x = x_b leads to a secular equation for k. Ignoring the possibility that electrons and ions may also enter the solid, we set Ṽ_{e,i}(x ≤ x_b) = ∞ with x_b = 0 for electrons and x_b = x_i^size for ions. For electrons we thereby restrict ourselves to weakly bound polarization-induced surface states, neglecting strongly bound crystal-induced surface states which, in general, may also occur [67]. As explained in the next section, we expect them to be of minor importance for physisorption of electrons.
Fig. 3. Left panel: Potential energy for an electron (ion) in the field of a MF particle (R = 1 µm, Z = 1500) [15] and representative probability distributions, |u(x)|², shifted to the binding energy and with maxima normalized to one. Dashed lines denote the potentials used in the Schrödinger equations. Note, the finite ion radius r_i^size ∼ Å forces the ion wavefunctions to vanish at x ≈ 10⁻⁴. Right panel: Bulk energy distribution functions for the 100 Pa neon discharge hosting the particle [15]: k_B T_e = 6.3 eV, k_B T_i = 0.026 eV, and n_e = n_i = 0.39 × 10⁹ cm⁻³.

The electron Schrödinger equation with the hard boundary condition at z = 0 is equivalent to the radial Schrödinger equation for the hydrogen atom. Hence k is an integer n. Because (for bound electrons) x ≪ 1 and α_e ≫ 1, the centrifugal term is negligible. We consider therefore only states with l = 0. The eigenvalues are then ε_e^n = 1 − α_e ξ/4n² and the wavefunctions read
u_{n,0}^e(x) ∼ v_{n,0}(z) = z exp(−z/2) (−1)^{n−1} (n−1)! L_{n−1}^{(1)}(z)    (9)

with z = 2α_e x/n and L_{n−1}^{(1)} the generalized Laguerre polynomials.
The probability densities |u_{n,0}^e(x)|² for the first three states are plotted in Fig. 3. As can be seen, electron surface states are only a few Ångstroms away from the grain boundary. At these distances, the spatial variation of V_e(x) is comparable to the de Broglie wavelength of electrons approaching the particle. More specifically, for k_B T_e = 6.3 eV, λ_e^dB/R ≈ |V_e/V_e′| ≈ 10⁻⁴. Hence, the trapping of electrons at the surface of the particle has to be described quantum-mechanically.
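The hydrogen-like wavefunctions of Eq. (9) are easy to evaluate numerically; the sketch below locates the maxima of |u_{n,0}|² for n = 1, 2, 3. The value of α_e follows from the expression quoted above, α_e = (ǫ−1)R/[4(ǫ+1)a_B], which is an assumption of this sketch; with it, the lowest states indeed peak a few Ångstroms from the surface.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

# Image-state wavefunctions of Eq. (9): v_{n,0}(z) = z e^{-z/2} (-1)^(n-1) (n-1)! L^{(1)}_{n-1}(z),
# with z = 2*alpha_e*x/n and alpha_e assumed to be (eps-1) R / (4 (eps+1) a_B).
eps, R, a_B = 8.0, 1.0e-6, 5.29177e-11          # dielectric constant, grain radius (m), Bohr radius (m)
alpha_e = (eps - 1.0) * R / (4.0 * (eps + 1.0) * a_B)

def u_n0(n, x):
    z = 2.0 * alpha_e * x / n
    return z * np.exp(-z / 2.0) * (-1.0)**(n - 1) * factorial(n - 1) * eval_genlaguerre(n - 1, 1, z)

x = np.linspace(1e-6, 5e-3, 4000)               # distance from the surface in units of R
for n in (1, 2, 3):
    prob = np.abs(u_n0(n, x))**2
    x_peak = x[np.argmax(prob)]
    print(f"n = {n}:  |u|^2 peaks at x ~ {x_peak:.1e} R  (~{x_peak * R * 1e10:.0f} Angstrom)")
```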
The solutions of the ion Schrödinger equation are Whittaker functions, u_{k,l}^i(x) = W_{k,l+1/2}(2α_i(1+x)/k), with k determined from u_{k,l}^i(x_i^size) = 0. However, since k ≫ 1 and x ≫ 1, it is very hard to work directly with W_{k,l+1/2}. It is easier to use the method of comparison equations [68] and to construct uniform approximations for u_{k,l}^i, using the radial Schrödinger equation for the hydrogen atom as a comparison equation. The method can be applied for any l. Here we give only the result for l = 0:
u_{k,0}^i(x) ∼ v_{n,0}(z)/√(dz/dx)    (10)
with v_{n,0}(z) defined in Eq. (9) and z = 2α_i z(x)/n. The mappings z(x) and k(n) can be constructed from the phase integrals of the two Schrödinger equations.
In Fig. 3 we show |u_{k,0}^i(x)|² for k(300) and k(30000). Note, even the k(30000) state is basically at the bottom of the potential. This is a consequence of α_i ≫ 1, which leads to a continuum of states below the ion ionization threshold at ε = 0. We also note that |u_{k(n),0}^i(x)|² peaks for n ≫ 1 just below the turning point. Hence, except for the lowest states, which we expect to be of little importance, ions are essentially trapped in classical orbits deep in the sheath of the grain. This will also be the case for l > 0. That ions behave classically is not unexpected because for k_B T_i = 0.026 eV their de Broglie wavelength is very small:
λ_i^dB/R ≈ 10⁻⁵ ≪ |V_i/V_i′| ≈ 1.
Thus, the interaction between ions and the particle is classical.
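As a rough cross-check of these order-of-magnitude statements, the snippet below compares the de Broglie wavelengths of plasma electrons and neon ions with the grain radius, taking λ_dB = h/√(2mE) with E set to the thermal energies quoted above; the exact prefactor of the thermal de Broglie wavelength is immaterial for the argument.

```python
import numpy as np

h   = 6.626e-34          # Planck constant (J s)
eV  = 1.602e-19
m_e = 9.109e-31          # electron mass (kg)
m_i = 20.18 * 1.661e-27  # neon ion mass (kg)
R   = 1.0e-6             # grain radius (m)

def de_broglie(mass, energy_eV):
    """lambda = h / sqrt(2 m E), with E the kinetic energy."""
    return h / np.sqrt(2.0 * mass * energy_eV * eV)

lam_e = de_broglie(m_e, 6.3)     # k_B T_e = 6.3 eV
lam_i = de_broglie(m_i, 0.026)   # k_B T_i = 0.026 eV
print(f"electron: lambda/R ~ {lam_e / R:.1e}")   # of order 1e-4, comparable to |V_e/V_e'| near the surface
print(f"ion:      lambda/R ~ {lam_i / R:.1e}")   # of order 1e-5, far below |V_i/V_i'| ~ 1
```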
Nevertheless, it can be advantageous to describe ions quantum-mechanically and to use the method of comparison equations, which is an asymptotic technique, to perform the calculation in the semiclassical regime. Since the ion kinetics is beyond the scope of this paper, we do not give more mathematical details about the solution of the ion Schrödinger equation. We mention, however, that many years ago Liu [69] pursued a quantum-mechanical description of the collisionless ion dynamics around electric probes. But he found no followers.
A model for the charge of the grain which takes surface states into account can now be constructed as follows.Within the sheath of the particle, the density of free electrons (ions) is much smaller than the density of bound electrons (ions).In that region, the quasi-stationary charge (again in units of −e) is thus approximately given by
Z(x) = 4πR³ ∫_{x_b}^{x} dx′ (1 + x′)² [n_e^b(x′) − n_i^b(x′)]    (11)
with x < λ_i^D = √(k_B T_i/4πe²n_i), the ion Debye length, which we take as an upper cut-off, and n_{e,i}^b the density of bound electrons and ions. For the plasma parameters used in Fig. 3, λ_i^D ≈ 60 µm. The results for the surface states presented above suggest to express the density of bound electrons by an electron surface density:
n_e^b(x) ≈ σ_e δ(x − x_e)/R    (12)
with x_e ≈ x_b ≈ 0 and σ_e the quasi-stationary solution of Eq. (2) without the recombination term. Equation (2) is thus still interpreted as a flux balance at the grain surface. We will argue below that once the grain has collected some negative charge, not necessarily the quasi-stationary one, there is a critical ion orbit at x_i ∼ 1−10 ≫ x_e which prevents ions from hitting the particle surface. Thus, the particle charge obtained from Eq. (11) is simply Z_p ≡ Z(x_e < x < x_i). Inserting Eq. (12) into Eq. (11) and integrating up to x with x_e < x < x_i leads to Eq. (5), the expression for the particle charge deduced from the rate equations (2) and (3) under the assumption that ions do not reach the grain surface on the microscopic scale.
For an electron to get stuck at (to desorb from) a surface it has to lose (gain) energy at (from) the surface [36]. This can only occur through inelastic scattering with the grain surface. To calculate the product (sτ)_e requires therefore a microscopic description of energy relaxation at the grain surface. This will be discussed in the next section. In Ref. [29] we estimated (sτ)_e by

(sτ)_e = (h/k_B T_s) exp[E_e^d/k_B T_s] ,    (13)
where h is Planck's constant, T_s is the surface temperature, and E_e^d is the desorption energy, that is, the binding energy of the surface state from which desorption most likely occurs [36]. The great virtue of this equation is that it relates a combination of kinetic coefficients, which depend on the details of the inelastic (dynamic) interaction, to an energy, which can be deduced from the static interaction alone. Kinetic considerations are thus reduced to a minimum. They are only required to identify the relevant temperature and the state from which desorption most probably occurs. In the next section we will show, for a particular model, how Eq. (13) can be obtained from a microscopic theory. Its range of validity will then also become clear.
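As a quick sanity check of Eq. (13), the snippet below evaluates (sτ)_e for the values used further down in the text (E_e^d ≈ 0.5 eV, T_s = 370 K); it reproduces the order of magnitude 10⁻⁶ s quoted there.

```python
import numpy as np

h   = 6.626e-34    # Planck constant (J s)
k_B = 1.381e-23    # Boltzmann constant (J/K)
eV  = 1.602e-19

def s_tau_e(E_d_eV, T_s):
    """Eq. (13): (s*tau)_e = h/(k_B T_s) * exp(E_d / k_B T_s)."""
    kT = k_B * T_s
    return h / kT * np.exp(E_d_eV * eV / kT)

print(f"(s tau)_e ~ {s_tau_e(0.5, 370.0):.1e} s")   # of order 1e-6 s
```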
Equation ( 5) is a self-consistency equation for Z p .Combined with Eq. ( 13), and approximating the electron flux j plasma e from the plasma by the orbital motion limited flux,
j_e^OML = n_e √(k_B T_e/2πm_e) exp[−Z_p e²/R k_B T_e] ,    (14)
which is reasonable, because, on the plasma scale, electrons are repelled from the grain surface, the grain charge is given by
Z_p = 4πR² (h/k_B T_s) e^{E_e^d/k_B T_s} j_e^OML(Z_p) .    (15)

In addition to the plasma parameters n_e and T_e, the charge depends on the surface parameters T_s and E_e^d. Without a microscopic theory for the inelastic electron-grain interaction, a plausible estimate for E_e^d has to be found from physical considerations alone. Since by necessity the electron comes very close to the grain surface (see Fig. 3) it will strongly couple to elementary excitations of the grain. Depending on the material these may be bulk or surface phonons, bulk or surface plasmons, or internal electron-hole pairs. For any realistic description of the potential for x ≤ x_b the electron wavefunction leaks into the solid; the electron will therefore quickly relax to the lowest surface bound state. The microscopic model for electron energy relaxation at metallic boundaries presented in the next section turns out to even work for an infinitely high potential step Ṽ_{e,i}(x ≤ x_b) = ∞. Since desorption most likely occurs from the lowest bound state, it is reasonable to expect

E_e^d ≈ (1 − ε_e^1) Ū ,    (16)

which, for ǫ = 8, leads to E_e^d ≈ 0.5 eV. The particle temperature cannot be determined in a simple way. It depends on the balance of heating and cooling fluxes to and from the particle and thus on additional surface parameters [70]. We use T_s therefore as an adjustable parameter. To reproduce, for instance, with Eq. (15) the charge of the particle in Fig. 3, T_s = 370 K, implying (sτ)_e ≈ 10⁻⁶ s.
In Fig. 4 we plot the radius dependence of the charge of MF particles in the discharge specified in the caption of Fig. 3. More results are given in [29]. Since the plasma parameters are known, the only adjustable parameter is the surface temperature. Using T_s = 370 K we find excellent agreement between theory and experiment. For comparison we also show the charges obtained from Eq. (4), approximating the ion plasma flux by
j_i^plasma = j_i^OML + j_i^cx ,    (17)
where
j_i^OML = n_i √(k_B T_i/2πm_i) [1 + Z_p e²/R k_B T_i]    (18)
is the orbital motion limited ion flux and [15]
j_i^cx = n_i (0.1 λ_i^D/l_cx) √(k_B T_i/2πm_i) (Z_p e²/R k_B T_i)²    (19)
is the ion flux originating from the release of trapped ions due to charge-exchange scattering as suggested by Lampe and coworkers [56,57,58].The scattering length l cx = (σ cx n g ) −1 with σ cx = 10 −14 cm 2 the scattering cross section and n g = p/k B T g the gas density.Clearly, the radius dependence of the grain charge seems to be closer to the nonlinear dependence obtained from Eq. ( 15) than to the linear dependence resulting from
j_e^OML = j_i^OML + j_i^cx ,    (20)
indicating that the surface model we propose captures at least some of the physics correctly which is responsible for the formation of surface charges.
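To illustrate how the two charge estimates are obtained in practice, the sketch below solves the self-consistency condition of Eq. (15) and, for comparison, the flux balance of Eq. (20) by simple root finding. The plasma parameters are those quoted for the discharge of Fig. 3; the desorption energy, surface temperature, charge-exchange cross section, and gas temperature are assumptions of this illustration, so the printed values should be read as order-of-magnitude numbers, not as the results of Fig. 4.

```python
import numpy as np
from scipy.optimize import brentq

# Constants (SI)
e, eps0 = 1.602e-19, 8.854e-12
h, k_B  = 6.626e-34, 1.381e-23
m_e, m_i = 9.109e-31, 20.18 * 1.661e-27        # electron, neon ion

# Discharge parameters of Fig. 3 plus assumed surface/gas values
n_e = n_i = 0.39e15                            # m^-3
T_e, T_i  = 6.3 * e, 0.026 * e                 # in J
R, T_s, E_d = 1.0e-6, 370.0, 0.5 * e           # grain radius, surface temperature, desorption energy
sigma_cx, p, T_g = 1.0e-18, 100.0, 300.0       # m^2, Pa, K (assumed gas temperature)
n_g  = p / (k_B * T_g)
l_cx = 1.0 / (sigma_cx * n_g)
lam_Di = np.sqrt(eps0 * T_i / (n_i * e**2))    # ion Debye length

def coulomb(Zp):                               # Z_p e^2 / (4 pi eps0 R), in J
    return Zp * e**2 / (4.0 * np.pi * eps0 * R)

def j_e_OML(Zp):                               # Eq. (14)
    return n_e * np.sqrt(T_e / (2.0 * np.pi * m_e)) * np.exp(-coulomb(Zp) / T_e)

def j_i_OML(Zp):                               # Eq. (18)
    return n_i * np.sqrt(T_i / (2.0 * np.pi * m_i)) * (1.0 + coulomb(Zp) / T_i)

def j_i_cx(Zp):                                # Eq. (19)
    return n_i * (0.1 * lam_Di / l_cx) * np.sqrt(T_i / (2.0 * np.pi * m_i)) * (coulomb(Zp) / T_i)**2

s_tau_e = h / (k_B * T_s) * np.exp(E_d / (k_B * T_s))          # Eq. (13)

Zp_surface  = brentq(lambda Z: Z - 4.0 * np.pi * R**2 * s_tau_e * j_e_OML(Z), 1.0, 1.0e5)   # Eq. (15)
Zp_standard = brentq(lambda Z: j_e_OML(Z) - j_i_OML(Z) - j_i_cx(Z), 1.0, 1.0e5)             # Eq. (20)
print(f"surface model, Eq. (15):  Z_p ~ {Zp_surface:.0f}")
print(f"flux balance, Eq. (20):   Z_p ~ {Zp_standard:.0f}")
```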
In order to derive Eq. (15) from Eq. (11) we had to assume that once the particle is negatively charged ions are trapped far away from the grain surface. Treating the trapping of ions in the field of the grain as a physisorption process suggests why this seemingly counterintuitive assumption is plausible. Similar to an electron, an ion gets bound to the grain only when it loses energy. Because of its low energy and the long-range attractive ion-grain interaction, the ion will initially be bound very close to the ion ionization threshold (see Fig. 3). The coupling to the elementary excitations of the grain is thus negligible and only inelastic processes due to the plasma are able to push ions to lower bound states. Since the interaction is classical, inelastic collisions, for instance, charge-exchange scattering between ions and atoms, act like a random force. Ion energy relaxation can thus be envisaged as a de-stabilization of orbits. This is in accordance with what Lampe and coworkers assume [56,57,58]. In contrast to them, however, we [29] expect orbits whose spatial extension is smaller than the scattering length to be stable because the collision probability during one revolution becomes vanishingly small. For a circular orbit, a rough estimate for the critical radius is
r_i = R(1 + x_i) = (2πσ_cx n_g)⁻¹    (21)
which leads to x_i ∼ 5.7 ≫ x_e ∼ 0 when we use the parameters of the neon discharge of Fig. 3 and σ_cx = 10⁻¹⁴ cm². Although the approach of Lampe et al. [56,57,58] shows a pile-up of trapped ions in a shell of a few µm radius enclosing the grain, they would not expect a relaxation bottleneck. This point can only be clarified with a detailed investigation of the ion dynamics and kinetics in the vicinity of the grain, including electron-ion recombination. As mentioned before, despite the classical character of the ion dynamics, a quantum-mechanical treatment, similar to the one we will present in the following sections for electrons, is possible and perhaps even advantageous because it treats closed (bound surface states) and open ion orbits (extended surface states) on the same footing. In addition, energy barriers due to the angular motion are easier to handle in a quantum-mechanical context. In fact, Lampe and coworkers neglect these energy barriers whereas Tskhakaya and coworkers [62,63] believe that this approximation overestimates j_i^cx. In reality, they claim, j_i^cx is much smaller. If this is indeed the case, the condition j_e^OML = j_i^OML + j_i^cx would yield charges which are much closer to the orbital motion limited ones. Simplifying even further, we assumed in [29] that all trapped ions can be subsumed into a single effective orbit as shown in Fig. 2. To obtain an expression for the number of ions accumulating in the vicinity of the grain, that is, for the screening charge Z_i, we replace the ion density n_i^b accumulating in the vicinity of the critical orbit by a surface density σ_i which balances at x_i the ion collection flux s_i j_i^plasma with the ion desorption flux τ_i^{-1} σ_i. Mathematically, this gives rise to a rate equation similar to (3), with the recombination term neglected and interpreted as a rate equation at r = r_i. Although Eq. (13) assumes excitations of the grain to be responsible for sticking and desorption, we expect a similar expression (with E_e^d, T_s replaced by E_i^d, T_g) to control the density of trapped ions. Integrating (11) up to x with
x_i < x < λ_i^D, we then obtain Z(x_i < x < λ_i^D) = Z_p − Z_i with

Z_i = 4πR²(1 + x_i)² (h/k_B T_g) e^{E_i^d(Z_p)/k_B T_g} j_i^B    (22)
the number of trapped ions.Since the critical orbit is near the sheath-plasma boundary, it is fed by the Bohm ion flux
j_i^B = 0.6 n_i √(k_B T_e/m_i) .    (23)
The ion desorption energy is the negative of the binding energy of the critical orbit,
E_i^d(Z_p) = −V_i(x_i) Ū(Z_p) = 4πσ_cx a_B n_g Z_p R_0 ,    (24)
and depends strongly on Z_p and x_i. For the situation shown in Fig. 3, we obtain E_i^d ≈ 0.39 eV and (sτ)_i ≈ 10⁻⁸ s when we use T_g = T_s = 370 K, the particle temperature which reproduces Z_p ≈ 1500. The ion screening charge is then Z_i ≈ 12 ≪ Z_p, which is the order of magnitude expected from molecular dynamics simulations [71]. Thus, even when the particle charge is defined by Z(x_i < x < λ_i^D), it is basically given by Z_p. From the surface model we would expect (sτ)_e ∼ 10⁻⁶ s to produce particle charges Z_p of the correct order of magnitude. Since the particle temperature T_s is unknown, it can be used as an adjustable parameter. The calculated Z_p can thus always be made to coincide with the measured charge. The particle temperature has to be, of course, within physically meaningful bounds. Recently, the particle temperature (but unfortunately not the particle charge) has been measured [72]. There is thus some hope that in the near future Z_p and T_s will be simultaneously measured. Finally, let us point out that, because ions are in our model bound a few microns away from the surface, we obtain (sτ)_i < (sτ)_e, in agreement with the phenomenological fit performed in [28].
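A minimal numerical sketch of the ion-side estimates, Eqs. (21)-(24), is given below. It assumes the discharge parameters used above, σ_cx = 10⁻¹⁴ cm², and an assumed gas temperature entering n_g; since Z_i is exponentially sensitive to E_i^d/k_B T_g, the printed numbers will not coincide exactly with the values quoted in the text (x_i ≈ 5.7, E_i^d ≈ 0.39 eV, Z_i ≈ 12) and should be read as order-of-magnitude estimates only.

```python
import numpy as np

e, eps0 = 1.602e-19, 8.854e-12
h, k_B  = 6.626e-34, 1.381e-23
m_i     = 20.18 * 1.661e-27     # neon

n_i, T_e = 0.39e15, 6.3 * e
R, Z_p   = 1.0e-6, 1500.0
sigma_cx, p, T_g = 1.0e-18, 100.0, 300.0   # m^2, Pa, K (assumed gas temperature)
n_g = p / (k_B * T_g)

# Eq. (21): largest stable circular orbit
r_i = 1.0 / (2.0 * np.pi * sigma_cx * n_g)
x_i = r_i / R - 1.0

# Eq. (24): ion desorption energy = depth of the Coulomb potential at the critical orbit
E_d_i = Z_p * e**2 / (4.0 * np.pi * eps0 * r_i)

# Eqs. (22), (23): trapped-ion (screening) charge fed by the Bohm flux
j_B = 0.6 * n_i * np.sqrt(T_e / m_i)
Z_i = 4.0 * np.pi * R**2 * (1.0 + x_i)**2 * h / (k_B * T_g) * np.exp(E_d_i / (k_B * T_g)) * j_B

print(f"x_i ~ {x_i:.1f},  E_d^i ~ {E_d_i / e:.2f} eV,  Z_i ~ {Z_i:.0f}")   # Z_i << Z_p in any case
```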
Physisorption of electrons
In the previous section we described a microscopic, physisorption-inspired model for the charging of a dust particle in a plasma which avoids the unrealistic treatment of the grain as a perfect absorber. Within this model the charge and partial screening of a dust particle can be calculated without relying on the condition that the total electron plasma flux balances the total ion plasma flux on the grain surface. Instead, two flux balance conditions are individually enforced on the two effective surfaces shown in Fig. 2 (solid circles). The quasi-stationary particle charge Z_p is then given by the number of electrons "quasi-bound" in the polarization potential of the grain and the screening charge Z_i is approximately given by the number of ions "quasi-trapped" in the largest stable closed ion orbit (which defines an effective surface for ions and subsumes, within our model, all trapped ions into a single effective orbit).
The physisorption kinetics at the grain boundary, that is, the sticking in and the desorption from (external) surface states due to inelastic scattering processes, is encoded in the products (sτ)_{e,i}, which we approximated by phenomenological expressions of the form (13). For electrons, we now take a closer look at what happens on the surface microscopically. First, we will discuss the microphysics qualitatively. Then we will perform an exploratory quantum-mechanical calculation of s_e and τ_e using a simple one-dimensional model for the electronic properties of the surface which allows us to do large portions of the calculation analytically. Finally, we will critically assess the results of the calculation, turning its shortcomings into a list of to-dos.
In principle, trapping and de-trapping of ions in the surface-induced Coulomb potential of the grain can also be understood as a physisorption process. However, the quantum-mechanical approach we will use for electrons has then to be pushed to the semi-classical regime appropriate for ions. In addition, not surface- but plasma-based inelastic scattering processes will turn out to control ion energy relaxation. Although conceptually very close, mathematically the calculation of s_i and τ_i is quite different from the calculation of s_e and τ_e. It is therefore beyond the scope of this paper. In the concluding section we may, however, add a few remarks about ions.
Qualitative considerations
The surface of a µm-sized grain is large enough to contain sizeable spatial regions (facets) isomorphous to crystallographic planes. Except for specific features arising from the finite extent of the facets, whose influence diminishes with the facet size, the electronic properties of the facets resemble the electronic properties of (infinitely extended) planar surfaces. In particular, like ordinary surfaces, facets should support surface states to which electrons approaching the grain from the plasma may get bound and then be re-emitted when they dynamically interact with the elementary excitations of the grain.
Each facet may give rise to two types of surface states: (i) crystal-induced surface states due to the abrupt appearance (from the plasma electron's point of view) of a periodic potential inside the grain and (ii) polarization-induced image states, on which the considerations of the previous section were based. Compared to the binding energy of image states, the binding energy of crystal-induced surface states is very large. Instead of a few tenths of an electron volt, it is typically a few electron volts. As a result, the center of gravity of crystal-induced surface states is much closer to the surface than the center of gravity of image states.
Based on the experimental results of [47] we show in Fig. 5, as an example, the schematic electronic structure of three copper surfaces, respectively, for the lateral momentum where the projected energy gap is largest. The electronic structure for a given orientation changes with momentum (not shown) but for all orientations, and that is the point we want to make, surface states exist², in addition to projected bulk states, and may thus participate in a physisorption process. For dielectric surfaces the electronic structure is quite similar although the details and physical origin of the states are different [67]. An ab-initio modeling of surface states is complex and computationally expensive, even for planar surfaces (see for instance [73]). Fortunately, the essential physics can be understood within simple one-dimensional models which assume the potential energy to vary only in the direction normal to the surface (z-direction), as illustrated in Fig. 6. Inside the material (z < 0) the potential has the periodicity of the crystal. It may thus lead to an energy gap on the surface. Outside the material, the potential gives rise to a barrier which merges at large distances with the asymptotics of the image potential V_p(z) ∼ −1/z. Its physical origin are exchange and correlation effects which, on the one hand, contribute to the confinement of electrons inside the material and, on the other hand, cause the attraction of external electrons to the surface. A simple microscopic model for the image potential [41,42] is given in appendix A.
The situation shown in Fig. 6 is the most favorable one for physisorption of electrons. The vacuum (plasma) potential, which is the zero of the energy scale, is in the middle of a large energy gap. Four main classes of states can then be distinguished: (i) Volume states, periodic inside the material and exponentially decaying into the vacuum (plasma). They exist for energies where bulk states are also allowed. Close to band edges they may have an increased weight near the surface, in which case they are surface resonances. (ii) Bound surface states, that is, states decaying exponentially into the material and the vacuum. They appear in regions of negative energies where bulk states are absent: weakly bound image states close to the vacuum potential and strongly bound crystal-induced surface states close to the Fermi energy. Crystal-induced surface states may have tails on the material side strongly oscillating with the crystal periodicity, while the tails of image states may only weakly respond to the crystal potential. (iii) Unbound surface states for positive energies inside the gap. They are free on the vacuum side and bound on the material side. The periodic crystal potential may also not affect these states very much. (iv) States which are free on both sides. Inside the material they oscillate with the lattice periodicity while outside the material their oscillations have to fit the surface potential. In the vicinity of the surface this class of states may also have a peak.
Of particular importance for sticking and desorption are transitions between bound and unbound surface states due to inelastic scattering with elementary excitations of the boundary. The elementary excitations can be phonons, plasmons, and electron-hole pairs. The latter two cases are excitations involving volume states.
The potential plotted in Fig. 6 is for an uncharged surface. An electron approaching a plasma boundary is of course also subject to the Coulomb repulsion due to the electrons already residing on the surface. In the mean-field approximation, however, this repulsion leads only to a barrier whose height is the floating energy Ū. Only an electron with an energy larger than Ū has a chance to come close enough to the surface to feel the attractive part of the potential. For an electron bound in this part of the potential, on the other hand, the Coulomb barrier merely sets the ionization threshold. Thus, as long as the Coulomb repulsion is treated in mean-field approximation, the Coulomb term drops out from the considerations provided we shift the zero of the energy axis to Ū, that is, by simply measuring energies with respect to the floating energy (Coulomb barrier) of the surface. If Ū falls inside an energy gap of the boundary, the situation is similar to the one depicted in Fig. 6.
Simplified planar microscopic model
Ideally, a microscopic calculation of s_e and τ_e for a spherical grain would be based on a three-dimensional first-principles electronic structure of the grain surface. In view of the discussion of the previous subsection, an estimate for the grain's s_e and τ_e may, however, also be obtained by the following strategy, which is most probably simpler because it allows one to incorporate existing (one-dimensional) empirical pseudo-potentials for planar surfaces: (i) Identify the facets on the grain surface and neglect, in a first approximation, the finite lateral extension of the facets, that is, work with plane waves or Bloch functions in the lateral dimensions. (ii) Use empirical one-dimensional potentials for planar surfaces [53] to calculate for each facet separately bound and unbound surface states. (iii) Identify the channels for electron energy relaxation and set up, again for each facet separately, a quantum-kinetic scheme for the calculation of s_e and τ_e. (iv) Use an appropriate macroscopic spatial averaging scheme to obtain an estimate for the grain's s_e and τ_e. Despite its approximate nature this strategy is still demanding. To work it out for a realistic grain is surely beyond the scope of this colloquium. In the exploratory calculation of s_e and τ_e presented below we focused therefore on a single facet, whose electronic structure we moreover did not deduce from an empirical pseudo-potential but from a model potential which simplifies the treatment while at the same time it retains the essential physics.

Fig. 6. Schematic drawing of the potential energy at a surface (plasma boundary) such as copper (100) or copper (110) and representative wavefunctions for volume states (i), bound surface states (ii), unbound surface states (iii), and free states (iv). Shaded areas denote projected bulk states, E_F is the Fermi energy, and f_e(E) is the plasma electron's Boltzmann distribution function. The dashed lines indicate the approximate potentials defining the simplified planar model of subsection 3.2, on which the calculations of s_e and τ_e, described, respectively, in subsections 3.3 and 3.4, are based.
Quite generally, the probability with which an electron approaching from the plasma halfspace z > 0 the plasma boundary at z = 0 ends up in a bound surface state (sticking), or with which an electron bound to the surface ends up in an unbound state (desorption), can be obtained from a Hamiltonian of the form

H = H_e + H_s + H_es ,    (25)

where the first term describes the electron motion in the static surface potential, the second term denotes the free motion of the elementary excitations of the boundary controlling electron energy relaxation at the boundary and thus sticking and desorption, and the third term is the coupling between the two. It is advantageous to express the Hamiltonian (25) in terms of creation and annihilation operators for the (external) electron as well as the (internal) elementary excitations. For that purpose we use the basis in which H_e is diagonal, that is, the eigenstates of the static surface potential V(z).³ Writing r = (R, z) for the electron position, the Schrödinger equation defining these states reads

[−(ħ²/2m_e)Δ + V(z)] Ψ_Qq(R, z) = E_Qq Ψ_Qq(R, z) .    (26)
The lateral motion is free and can be separated from the vertical one.Hence,
Ψ_Qq(R, z) = (1/√A) exp[iQ·R] ψ_q(z) ,    (27)
with A the area of the surface, which is eventually made infinitely large, Q = (Q_x, Q_y) a two-dimensional wavevector characterizing the lateral motion of the electron, and ψ_q(z) the wavefunction for the vertical motion which satisfies the one-dimensional Schrödinger equation (viz. Eq. (8))
d²ψ_q(z)/dz² + (2m_e/ħ²)[E_q − V(z)] ψ_q(z) = 0    (28)

with E_q = E_Qq − ħ²Q²/2m_e.
The quantum number q is an integer n for bound and a wavenumber k for unbound surface states.In this basis,
H_e = Σ_{Qq} E_Qq C†_Qq C_Qq ,    (29)
where C † Qq creates an electron in the surface state Ψ Qq with energy
E_Qq = ħ²Q²/2m_e + E_q .    (30)
The second and third terms in (25) depend on the kind of elementary excitations responsible for energy relaxation and hence on the material. For dielectric materials, such as graphite or silicon, the coupling to vibrational modes is most probably the main driving force for physisorption of electrons. In particular, lattice vibrations should play an important role. Their energy scale is the Debye energy k_B T_D. For most dielectrics k_B T_D is smaller than the energy spacing of the lowest surface states. For image states typical energy separations are given in table 1. When crystal-induced surface states or dangling bonds [67] are also included the situation does not change much; it may even be worse. Multiphonon processes could thus significantly affect physisorption of electrons at dielectric surfaces, making it a very interesting problem to study.
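To see why multiphonon processes would be needed, one can compare the level spacing of the image-state Rydberg series with a typical Debye energy. The snippet below assumes the hydrogen-like spectrum E_n = −R_0/(16n²) that follows from the classical image potential −e²/4z introduced later in Eq. (40); the Debye energy of 30 meV is a representative assumed value, not one taken from table 1.

```python
# Image-state Rydberg series E_n = -R_0 / (16 n^2), hydrogen-like spectrum of -e^2/4z
R_0 = 13.606            # Rydberg energy in eV
E = {n: -R_0 / (16.0 * n**2) for n in range(1, 5)}

k_B_T_D = 0.030         # representative Debye energy in eV (assumed value)

for n in range(1, 4):
    spacing = E[n + 1] - E[n]
    print(f"E_{n} = {E[n]:.3f} eV,  E_{n+1} - E_{n} = {spacing:.3f} eV  (~{spacing / k_B_T_D:.0f} x k_B T_D)")
```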
For metals, on the other hand, electronic excitations, most notably electron-hole pairs, provide an efficient channel for energy relaxation [37,40]. They are not created across a large energy gap, which would render them unimportant, but in the vicinity of the Fermi energy of a partially filled band. In metals electron-hole pairs can be excited even at room temperature. Physisorption of electrons at metallic plasma boundaries, whose temperatures are typically not much higher than room temperature, is thus most likely controlled by the coupling to electron-hole pairs.
The Fermi energy of a metal is inside a band. Electron-hole pairs are thus excitations involving volume states. Neglecting the coupling between these states and (bound and unbound) surface states, which should be small because the states are spatially separated (see Fig. 6), electrons occupying these two classes of states can be approximately treated as two separate species: external and internal electrons, where the latter are responsible for energy relaxation of the former.
Specifically for a metallic plasma boundary, and we will restrict the calculation of s e and τ e presented in the next two subsections to this particular case, H s is thus the Hamiltonian of a non-interacting gas of electronic quasiparticles with Fermi energy E F .Hence,
H_s = Σ_{Kk} E_Kk D†_Kk D_Kk ,    (31)
with D†_Kk creating an internal electron in a quasi-particle state. To account for the band structure, let us use effective masses, possibly different for the lateral and vertical motions, instead of the bare electron mass.
The function φ k (z), describing the vertical motion of an internal electron, obeys a one-dimensional Schrödinger equation:
d²φ_k(z)/dz² + (2m_e/ħ²)[Ẽ_k − Ṽ(z)] φ_k(z) = 0 .    (34)
Strictly speaking, the potential Ṽ(z) = V(z). But the spatial parts of the potential determining, respectively, surface and volume states are different. Working conceptually with two separate potentials gives us the flexibility to independently extend the relevant parts of the potential such that the calculation of surface and volume states can be most easily performed while the essential physics is kept (see dashed lines in Fig. 6 and below for the particular form of the approximate potentials). For a metallic boundary, the interaction part H_es of the Hamiltonian (25) describes the interaction between internal and external electrons. Anticipating a statically screened Coulomb interaction,
H_es = (1/2) Σ_{QqQ′q′} Σ_{KkK′k′} V^{Qq Kk}_{Q′q′ K′k′} C†_Qq C_Q′q′ D†_Kk D_K′k′    (35)
with the matrix element containing the overlap integral

I^{qk}_{q′k′}(Q) = ∫ dz ∫ dz′ ψ*_q(z) φ*_k(z′) e^{−d|z−z′|} φ_{k′}(z′) ψ_{q′}(z) ,    (37)
where d = √(k_s² + Q²) and k_s = (k_s)_surface is the screening wavenumber at the surface. Little is known about this parameter except that it should be less than the bulk screening wavenumber (k_s)_bulk because the electron density in the vicinity of the boundary is certainly smaller than in the bulk. In [37] it was, for instance, argued, based on a comparison of experimentally and theoretically obtained branching ratios for positron trapping at and transmission through various metallic films, that (k_s)_surface ≃ 0.6(k_s)_bulk. Bulk screening wavenumbers for some metals are given in table 2.
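For orientation, bulk screening wavenumbers of simple metals can be estimated from the standard Thomas-Fermi expression k_s² = 4k_F/(πa_B), with k_F = (3π²n)^{1/3} the Fermi wavenumber of the conduction-electron density n. The sketch below does this for aluminum; the density is a textbook value and the result need not coincide exactly with the entries of table 2, which is not reproduced here.

```python
import numpy as np

a_B  = 0.529177e-10                 # Bohr radius (m)
n_Al = 1.81e29                      # conduction-electron density of aluminum (m^-3), textbook value

k_F = (3.0 * np.pi**2 * n_Al)**(1.0 / 3.0)         # Fermi wavenumber
k_s_bulk = np.sqrt(4.0 * k_F / (np.pi * a_B))      # Thomas-Fermi screening wavenumber

print(f"k_F        ~ {k_F * 1e-10:.2f} 1/Angstrom")
print(f"(k_s)_bulk ~ {k_s_bulk * 1e-10:.2f} 1/Angstrom")
print(f"(k_s)_surf ~ {0.6 * k_s_bulk * 1e-10:.2f} 1/Angstrom  (using the 0.6 reduction of [37])")
```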
Fig. 7. Diagrammatic representation of the golden rule (38) for a transition from the surface state (Q′q′) to the surface state (Qq) (solid lines) via scattering on an internal electron (dashed line), which can be interpreted as the coupling to internal electron-hole pairs. The wavy line denotes the screened Coulomb interaction between internal and external electrons and the box symbolizes the dynamic, that is, inelastic electron-metal interaction.

The Hamiltonian (25) with H_e, H_s, and H_es respectively given by (29), (31), and (35) can be used to calculate the transition rate from any initial surface state Ψ_Q′q′ to any final surface state Ψ_Qq. For the sticking process the initial state is an unbound surface state and the final state is a bound surface state, while for the desorption process it is vice versa. In lowest order perturbation theory (see Fig. 7), the rate is given by the golden rule,
W(Qq, Q′q′) = (2π/ħ) Σ_{KK′kk′} |V^{Qq Kk}_{Q′q′ K′k′}|² n_F(E_K′k′)[1 − n_F(E_Kk)] δ(E_Q′q′ + E_K′k′ − E_Qq − E_Kk) ,    (38)
where n F (E) = 1/(exp[(E − E F )/k B T s ] + 1) is the Fermi distribution function for the metal electrons with Fermi energy E F and temperature T s .
To calculate the matrix element (37) we need the solutions of the Schrödinger equations (28) and (34). Physisorption of electrons involves transitions between bound and unbound surface states. The matrix elements for these transitions are large when the spatial overlap between the initial and final states is large. With unbound surface states inside the gap, image states, that is, bound surface states close to the zero of the energy axis (see Fig. 6), have the largest overlap. Crystal-induced surface states, having most weight in a region where the weight of the unbound surface states is very small, give rise to a smaller overlap and are thus less important. We neglect crystal-induced surface states and replace V(z) in (28) by
V(z) → ∞ for z ≤ 0 and V_p(z) for z > 0 ,    (39)
where
V_p(z) = −e²/4z    (40)
is the classical image potential. As explained in appendix A, V_p can be understood in terms of virtual surface plasmon excitations [41,42,43]. We thus calculated the surface states as if the energy gap on the surface were infinitely large. The solutions of (28) are then Whittaker functions which vanish for z ≤ 0 (see appendix B) and the required matrix elements can be obtained analytically. As far as the internal electron-hole pairs are concerned, we obtained the potential inside the material from the average of the crystal potential, neglecting the oscillations of the potential, that is, treating the metal boundary as a jellium halfspace,
Ṽ(z) → 0 for z < 0 and ∞ for z ≥ 0 .    (41)
The wavefunctions φ k (z) vanish then for z ≥ 0 and are standing waves for z < 0. Using box-normalization,
φ_k(z) = √(2/L) sin(kz)    (42)
leading to Ẽ_k = ħ²k²/2m_e with k = πn/L and n ≥ 1 an integer. In the final expressions for s_e and τ_e we take L → ∞, making k continuous.
The physical content of the simplified planar model is summarized in Fig. 8. It will be used in the next two subsections to calculate, respectively, s_e and τ_e for a metallic plasma boundary. Due to the approximate potentials (39) and (41), external and internal single-electron wavefunctions vanish in complementary halfspaces. As a result, the matrix element (37) factorizes,
I^{qk}_{q′k′}(Q) = I^{(1)}_{qq′}(Q) I^{(2)}_{kk′}(Q)    (43)
with
I^{(1)}_{qq′}(Q) = ∫₀^∞ dz exp[−z√(k_s² + Q²)] ψ*_q(z) ψ_{q′}(z) ,    (44)
I^{(2)}_{kk′}(Q) = ∫₀^∞ dz exp[−z√(k_s² + Q²)] φ*_k(−z) φ_{k′}(−z) ,    (45)
to be calculated explicitly in appendix B.
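With the box-normalized waves of Eq. (42), the internal-electron factor (45) is an elementary exponential-times-sine integral. The sketch below checks the textbook closed form L·I^{(2)}_{kk′} = d/(d²+(k−k′)²) − d/(d²+(k+k′)²), with d = √(k_s²+Q²), against numerical quadrature; this closed form is stated here as an illustration of the structure of the matrix element, not as the expression derived in appendix B.

```python
import numpy as np
from scipy.integrate import quad

def I2_times_L_analytic(k, kp, d):
    """L * I^(2)_{kk'} for phi_k = sqrt(2/L) sin(kz): closed form of the z-integral."""
    return d / (d**2 + (k - kp)**2) - d / (d**2 + (k + kp)**2)

def I2_times_L_numeric(k, kp, d):
    integrand = lambda z: 2.0 * np.exp(-d * z) * np.sin(k * z) * np.sin(kp * z)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

k, kp, d = 1.3, 0.7, 0.5     # arbitrary test values (same units for all three)
print(f"analytic: {I2_times_L_analytic(k, kp, d):.6f}")
print(f"numeric : {I2_times_L_numeric(k, kp, d):.6f}")
```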
A rigorous calculation of s_e and τ_e, taking for instance into account that sticking and desorption occur on different timescales [35], should be based on quantum-kinetic master equations. These equations could be derived from (25) with techniques from non-equilibrium physics [36]. In lowest order perturbation theory, the transition rates appearing in the master equation would be given by (38). In the following, we will not use this advanced approach. Instead we will calculate s_e and τ_e perturbatively by appropriately summing and weighting the golden-rule rates (38). To obtain the sticking coefficient s_e we consider the positive half space (z > 0) as a kind of quantum-mechanical boundary layer (see Fig. 8). A measure of the tendency S_{Qn,Q′q′} with which an electron approaching the boundary in an unbound state Ψ_Q′q′ ends up in a bound state Ψ_Qn is then the time it takes the electron to traverse the boundary layer forwards and backwards divided by the time it takes the electron to make a transition from Ψ_Q′q′ to Ψ_Qn [38].
Since the width of the quantum-mechanical boundary layer is L,
S_{Qn,Q′q′} = (2L / ⟨Q′q′| p·n/m_e |Q′q′⟩_in) × (1 / W⁻¹(Qn, Q′q′)) ,    (46)
where the denominator in the first factor is the velocity matrix element calculated with the incoming part of the unbound wavefunction, n is the surface normal pointing towards the plasma, and p = −iħ∇ is the quantum-mechanical momentum operator. Using the asymptotic form of the unbound wavefunctions given in Eq. (85) of appendix B, we find
⟨Q′q′| p·n/m_e |Q′q′⟩_in = q′/(8 m_e a_B) .    (47)
Hence,
S_{Qn,Q′q′} = (16 L m_e a_B / q′) W(Qn, Q′q′) .    (48)
The tendency with which the electron approaching the boundary in the state Ψ Q ′ q ′ gets stuck in any one of the bound states -the energy resolved sticking coefficient -is then simply given by
S_{Q′q′} = Σ_{Qn} S_{Qn,Q′q′} = (16 L m_e a_B / q′) Σ_{nQ} W(Qn, Q′q′) .    (49)
The sticking coefficient s e entering the rate equation ( 2) is an energy-averaged sticking coefficient resulting from an appropriately performed sum over S Q ′ q ′ .As mentioned before, a rigorous derivation of an expression for s e should be based on the master equation for the occupancies of the surface states [35,36].A simpler way to obtain s e is however to regard the wall as a particle detector.The global sticking coefficient can then be defined as
Σ_{Q′q′} S_{Q′q′} q′ n_{Q′q′} = s_e Σ_{Q′q′} q′ n_{Q′q′} ,    (50)
where n_{Q′q′} are the occupancies of the unbound surface states Ψ_Q′q′. The occupancies n_{Q′q′} depend on the properties of the plasma. It is tempting to simply identify n_{Q′q′} with the incoming part of the electron distribution function as it arises on the surface from the solution of the Boltzmann-Poisson equations. However, one should keep in mind that the distribution function is a classical object whereas n_{Q′q′} is a quantum-mechanical expectation value. There arises therefore the question how the quantum-mechanical processes encoded in the above equations can be properly fed into the semiclassical description of the plasma in terms of Boltzmann-Poisson equations. The issue is subtle because at the plasma boundary the potential varies so rapidly that the basic assumptions of the validity of the Boltzmann equation no longer hold. Mathematically, the microphysics should be put into a coarse-grained surface scattering kernel connecting the incoming distribution function with the outgoing one. But even for neutral particles, a microscopic derivation of such a scattering kernel has not yet been given. There exist only more or less plausible phenomenological expressions which parameterize the kernel with accommodation coefficients [75].
From the boundary-layer point of view used in the derivation of Eqs. ( 46)-( 50), the plasma, or, more precisely, the sheath of the plasma, is infinitely far away from the plasma boundary.Rigorously speaking, we can thus say nothing about how the microphysics at the plasma boundary merges with the physics in the plasma sheath.
To make nevertheless contact with the plasma we have to guess how the unbound surface states Ψ Q ′ q ′ are occupied.For simplicity we assume Maxwellian occupancy, with an electron temperature T e = (k B β e ) −1 , but other guesses, more appropriate for the plasma sheath, are also conceivable.For Maxwellian electrons, the global sticking coefficient is given by
s_e = \frac{\sum_{Q'q'} S_{Q'q'}\, q' \exp[-\beta_e E_{Q'q'}]}{\sum_{Q'q'} q' \exp[-\beta_e E_{Q'q'}]} .  (51)
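The thermal average in Eq. (51) is, in essence, a flux-weighted Boltzmann average of the energy-resolved sticking coefficient. The following minimal sketch (Python, not part of the original work) shows how such an average could be evaluated once a discretized S(E') is available; the model curve for S(E') and the restriction to perpendicular motion are illustrative assumptions only.

```python
import numpy as np

def global_sticking(E, S, kTe):
    # Flux-weighted Boltzmann average in the spirit of Eq. (51):
    # weight each energy bin by q' * exp(-E/kTe), with q' ~ sqrt(E)
    # (free-electron relation; an assumption of this sketch).
    q = np.sqrt(E)
    w = q * np.exp(-E / kTe)
    return np.sum(S * w) / np.sum(w)

# Hypothetical energy-resolved sticking coefficient (placeholder values,
# not the paper's data): a curve decaying with electron energy.
E = np.linspace(1e-3, 20.0, 4000)          # electron energy in eV
S = 1e-4 * np.exp(-E / 2.0)

print(global_sticking(E, S, kTe=5.0))      # global s_e for a 5 eV Maxwellian
```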
In the limit L → ∞ and A → ∞ the momentum summations in Eqs. (46)-(51) reduce to the calculation of high-dimensional integrals. In appendix C we describe the approximations invoked for the integrals. Some of the integrals can then be performed analytically. But the final expressions for the sticking coefficients remain multidimensional integrals, which have to be done numerically.
Fig. 9 (caption): The energy-resolved sticking coefficient for an electron hitting perpendicularly an aluminum surface at k_B T_s = 0.05 eV. The screening wavenumber for the Coulomb interaction between an incident plasma electron and an internal aluminum electron, (k_s)_surface, is not well known. Results are therefore shown for (k_s)_surface/(k_s)_bulk = 0.1 (weak screening, strong coupling) and for (k_s)_surface/(k_s)_bulk = 0.6 (moderate screening, weak coupling); (k_s)_bulk is the screening wavenumber of aluminum (see table 2). Since (k_s)_surface = 0.6 (k_s)_bulk is most probably the relevant screening parameter [37,40], the sticking coefficient is rather small.
Measuring energies in units of R 0 and distances in units of a B , Eq. ( 51) for the global sticking coefficient reduces to
s_e = \frac{4}{\pi^2}\, \beta_e^{3/2} \beta_s^{1/2}\, I_{\rm stick} ,  (52)
where
I_{\rm stick} = \int_0^{\infty}\! dR \int_{-\infty}^{\infty}\! d\omega\; \frac{1+n_B(\omega)}{1+(R/k_s)^2}\; h(R,\omega)\, g(R,\omega)  (53)
with n_B(E) = 1/(exp[β_s E] − 1) the Bose distribution function and h(R,ω) and g(R,ω) two functions defined, respectively, in appendix C by Eqs. (104) and (105). Below we also present results for the energy-resolved sticking coefficient for perpendicular incidence (Q' = 0). It is given by
S^{\perp}_{E'} = \frac{4}{\pi^{2}}\, \pi^{1/2} \beta_s^{1/2} \int_0^{\infty} dR\; g^{\perp}(R,E') ,  (54)
with E' = q'^2 and g^⊥(R,E') a function defined in appendix C, Eq. (107). The functions h(R,ω), g(R,ω), and g^⊥(R,E') contain summations over the Rydberg series of bound surface states. If not stated otherwise, we truncated these sums after N = 15 terms. These functions are moreover defined in terms of integrals which can be done only numerically. We use Gaussian integration with 40-80 integration points. More specifically, h(R,ω) and g^⊥(R,E') are one-dimensional integrals while g(R,ω) is a two-dimensional one. Hence, s_e and S^⊥_{E'} are given by a five-dimensional and a two-dimensional integral, respectively.
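To make the numerical procedure concrete, the sketch below assembles a nested Gauss-Legendre rule of the kind just described (40-80 points per dimension, a bound-state sum truncated at N = 15). The integrand g_model is only a stand-in with the same structural features, not the actual g(R, ω) of appendix C.

```python
import numpy as np

def gauss_nodes(n, a, b):
    # Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b].
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def integrate_2d(f, ax, bx, ay, by, n=60):
    # Nested Gauss-Legendre quadrature for a smooth two-dimensional integrand.
    x, wx = gauss_nodes(n, ax, bx)
    y, wy = gauss_nodes(n, ay, by)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.einsum("i,j,ij->", wx, wy, f(X, Y))

def g_model(R, w, N=15):
    # Placeholder integrand with a truncated sum over N = 15 "bound-state"
    # terms; purely illustrative, not the paper's g(R, omega).
    return sum(np.exp(-(R + 1.0) * n) / n**2 for n in range(1, N + 1)) * np.exp(-w**2)

print(integrate_2d(g_model, 0.0, 20.0, -5.0, 5.0, n=60))
```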
In the formulae for the sticking coefficients we multiplied the binding energies of the surface states |E n | obtained from Eq. ( 28) by an overall factor of 0.7.This value was chosen to bring the binding energy of the lowest surface state |E 1 | = 0.85eV in accordance with the experimentally measured value for copper: |E 1 | Cu ≈ 0.6eV [51].For other metals we used the same correction factor.
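For orientation, the corrected binding energies can be tabulated directly, assuming Eq. (28) yields the usual image-potential Rydberg series |E_n| = 0.85 eV / n^2 (the n-dependence is an assumption of this sketch; only |E_1| = 0.85 eV is stated in the text):

```python
# Image-potential binding energies |E_n| = 0.85 eV / n**2 (assumed series)
# and the empirical 0.7 correction that brings |E_1| close to the measured
# ~0.6 eV for copper.
R_image = 0.85      # eV
correction = 0.7

for n in range(1, 6):
    E_n = R_image / n**2
    print(f"n={n}: uncorrected {E_n:.3f} eV, corrected {correction * E_n:.3f} eV")
```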
Figure 9 shows the results for S^⊥_{E'} when an electron with energy E' hits perpendicularly an aluminum boundary at k_B T_s = 0.05 eV. Representative for weak and moderate screening we plotted data for (k_s)_surface/(k_s)_bulk = 0.1 and (k_s)_surface/(k_s)_bulk = 0.6. The latter is the screening parameter used in [37,40] to study the interaction of positrons with an aluminum surface. If the corresponding value for 1/(k_s)_surface is indeed a reasonable estimate for the length on which the Coulomb interaction between an external and an internal electron is screened, the sticking coefficient for electrons should be extremely small, of the order of 10^{-4}. Only for weak screening, and thus strong coupling, does S^⊥_{E'} approach values of the order of 10^{-1}, which are perhaps closer to the value one would expect at first sight.
To clarify the contribution the various bound states have to the sticking coefficient, we plot in Fig. 10 the dependence of S^⊥_{E'} on the number N of bound states included in the calculation. As can be seen, the lowest bound state (N = 1) contributes only roughly 40% to the total S^⊥_{E'}. The sticking coefficient then increases with increasing N but converges for N ≈ 10-20. Because of this fast convergence we present all results below only for N = 15. The reason for the convergence can be traced back to the decrease of the electronic matrix element I^{(1)}_{kn} in Eq. (44), which we approximate by I^{(1)}_{k≪1,n}(Q = 0) (see appendix C), with increasing n, where n = 1, 2, ... labels the bound surface states.
Fig. 11 (caption): The global sticking coefficient s_e for a thermal beam of electrons with k_B T_e = 5 eV hitting various metal surfaces at k_B T_s = 0.05 eV as a function of (k_s)_surface/(k_s)_bulk, where (k_s)_bulk is the screening wavenumber in the bulk of the respective metal (see table 2). Following [37,40], we would expect (k_s)_surface = 0.6 (k_s)_bulk to be a reasonable estimate for the screening parameter. Hence, s_e ≈ 10^{-5} - 10^{-4}.
Global sticking coefficients s_e as a function of the screening wavenumber (k_s)_surface are shown in Fig. 11 for different metals. For (k_s)_surface/(k_s)_bulk > 0.4, the sticking coefficients are again extremely small. As expected, they increase with decreasing (k_s)_surface/(k_s)_bulk, reaching values close to unity for weak screening. In this strong coupling regime, our perturbative calculation of s_e is no longer valid. We believe however that (k_s)_surface/(k_s)_bulk < 0.4 is unphysical. The kink around (k_s)_surface/(k_s)_bulk ≈ 0.25 must be due to an accidental resonance in g(R,ω). It is of no physical significance.
Why is the sticking coefficient for electrons so small? We have no satisfying explanation. Our calculation produces a small sticking coefficient because the matrix element (37) turns out to be very small. We certainly underestimate it because the wavefunctions of the approximate potentials (39) and (41) vanish in complementary halfspaces, in contrast to the exact wavefunctions which have tails. Nevertheless it is hard to imagine the tails of the wavefunctions increasing the matrix elements by three orders of magnitude.
The approximations we had to make to end up with manageable equations for s_e, in particular, the assumptions about the momentum dependence of the electronic matrix elements (see appendix C and, for a discussion, the next section), should also not lead to a sticking coefficient which is more than one order of magnitude off. In this respect let us emphasize that, in contrast to calculations which obtain sticking coefficients of the order of 0.1, we use the eigenenergies and eigenstates of the 1/z potential and not the ones of an artificial box potential.
Usually it is assumed that s_e is also at least of the order of 0.1 [65]. This expectation seems to be primarily based on the semiclassical back-of-the-envelope estimate of Umebayashi and Nakano [76]. It is thus appropriate to discuss their approach in some detail.
From the energy ∆E_s an electron can exchange in a single classical collision with the constituents of the surface, they estimated, using the analogy to the Mössbauer effect, the probability α for inelastic one-phonon emission. For that purpose, they had to estimate the number N_c of constituents of the surface that an electron with a de Broglie wavelength corresponding to its kinetic energy E_0, λ^{dB}_e = 2π a_B \sqrt{R_0/E_0}, simultaneously impacts. A rough estimate is N_c = (λ^{dB}_e/a)^2, where a is the lattice constant of the material. Under the assumption that the electron hops along the surface they then calculated the probability with which the electron does not escape after l hops, where l is the number of inelastic collisions which are necessary for the electron to transfer its whole positive kinetic energy to the lattice, that is, to end up in a state of negative energy. Identifying this probability with the (global) sticking coefficient, they obtained
s_e = \prod_{i=0}^{l-1} \frac{1}{1 + \beta_i/\alpha} ,  (55)
where β_i = (E_0 - i∆E)/E_b is the escape probability after i inelastic collisions [77], ∆E = 2∆E_s/(3 N_c α), ∆E_s = 4 m_e (E_0 + E_b)/M, E_b is the depth of the surface potential, and M is the mass of the constituents of the solid.
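A small sketch of Umebayashi and Nakano's estimate, Eq. (55), using the graphite parameters quoted in the Fig. 12 caption below. The one-phonon emission probability α is kept as a free input because its Mössbauer-type estimate is not reproduced here, and the chosen α = 0.1 is a guess; to avoid underflow the routine returns log10 of s_e.

```python
import numpy as np

A_B, R_0 = 0.529, 13.6        # Bohr radius (Angstrom), Rydberg energy (eV)
ME_PER_MP = 1836.15           # proton-to-electron mass ratio

def log10_sticking_UN(E0, Eb, M_mp, a_ang, alpha):
    # de Broglie wavelength of the impinging electron (Angstrom)
    lam = 2.0 * np.pi * A_B * np.sqrt(R_0 / E0)
    N_c = (lam / a_ang) ** 2                      # constituents hit at once
    dE_s = 4.0 * (E0 + Eb) / (M_mp * ME_PER_MP)   # single-collision transfer (eV)
    dE = 2.0 * dE_s / (3.0 * N_c * alpha)         # effective loss per hop (eV)
    l = int(np.ceil(E0 / dE))                     # hops until E < 0
    beta = (E0 - np.arange(l) * dE) / Eb          # escape probabilities beta_i
    return -np.sum(np.log10(1.0 + beta / alpha))  # log10 of the product (55)

# e-:graphite parameters (a = 2.5 Angstrom, M = 12 proton masses); alpha guessed.
print(log10_sticking_UN(E0=0.5, Eb=1.0, M_mp=12.0, a_ang=2.5, alpha=0.1))
```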
Sticking coefficients for graphite obtained from Eq. (55) are shown in Fig. 12. Within Umebayashi and Nakano's semiclassical approach we identified E_b with the binding energy of the electron. According to Fig. 12 the sticking coefficient very quickly approaches extremely small values with increasing energy E_0. The smaller the binding energy E_b, the faster the decrease. The values for s_e originally given by Umebayashi and Nakano were for kinetic energies smaller than 0.0026 eV and binding energies larger than 1 eV. Only in this parameter regime is the sticking coefficient close to one. In the parameter range which is of interest to us (kinetic and binding energies of at least a few tenths of an electron volt) Umebayashi and Nakano's estimate also gives an extremely small sticking coefficient.
We should of course not directly compare the results obtained from Eq. (52) with the ones obtained from Eq. (55) because Eq. (52) assumes energy relaxation due to internal electron-hole pairs whereas Eq. (55) assumes energy relaxation due to phonons. However, a quantum-mechanical calculation of the phonon-induced electron sticking coefficient at vanishing lattice temperature leads to results quite different from what Umebayashi and Nakano find. Although they incorporate some quantum mechanics, their approach is basically classical. It is based on the notion of a classical particle hopping around on the surface and exchanging energy with the solid in binary encounters. As in any classical theory for the sticking coefficient, it approaches unity for the low energies they consider [36].
Fig. 12 (caption): Electron sticking coefficient obtained from Umebayashi and Nakano's phenomenological model [76], see Eq. (55). The solid lines are for the e-:graphite system originally considered by them (k_B T_D = 420 K, M_C = 12 m_p, where m_p is the proton mass, and a = 2.5 Å) and the dashed line is for an e-:Cu system (E_b = 0.6 eV, k_B T_D = 343 K, M_Cu = 64 m_p, and a = 3.6 Å). The sticking coefficient diminishes rapidly with increasing electron energy and approaches one at zero electron energy, in contrast to what one would expect from a quantum-mechanical calculation [39].
Desorption time
We now calculate the electron desorption time τ_e. For that purpose, we have to specify the occupancies of the bound electron surface states. In general, this is a critical issue. However, provided the desorption time τ_e is much larger than the time it takes to establish thermal equilibrium with the surface, it is reasonable to assume a thermal (Boltzmann) occupancy of the bound surface states, where T_s = 1/(k_B β_s) is the surface temperature. Desorption is accomplished as soon as the electron is in any one of the unbound surface states. Hence, the inverse of the desorption time, that is, the desorption rate, is given by [36]

\frac{1}{\tau_e} = \frac{\sum_{Q'n'} \sum_{Qq} \exp[-\beta_s E_{Q'n'}]\, W(Qq,Q'n')}{\sum_{Qn} \exp[-\beta_s E_{Qn}]} ,  (57)
where W(Qq, Q ′ n ′ ) is the transition rate from the bound surface state (Q ′ , n ′ ) to the unbound surface state (Q, q) as defined by the golden rule (38).
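In discretized form, Eq. (57) is just a Boltzmann-weighted average of the bound-to-unbound rates. The sketch below illustrates this bookkeeping with two fictitious bound states and guessed total rates; all numbers are placeholders, not results of the golden-rule evaluation.

```python
import numpy as np

def desorption_rate(E_bound, W_out, kTs):
    # Thermally averaged desorption rate in the spirit of Eq. (57):
    # E_bound are bound-state energies (eV, negative), W_out the total
    # golden-rule rate out of each bound state into the continuum (1/s).
    E = np.asarray(E_bound, dtype=float)
    W = np.asarray(W_out, dtype=float)
    w = np.exp(-E / kTs)                      # Boltzmann weights
    return np.sum(w * W) / np.sum(w)

# Two fictitious image states and guessed rates (placeholders only):
E_n = [-0.6, -0.15]     # eV
W_n = [1e2, 1e6]        # 1/s
print(1.0 / desorption_rate(E_n, W_n, kTs=0.05))   # desorption time in seconds
```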
Fig. 13 (caption fragment): Since (k_s)_surface = 0.6 (k_s)_bulk is most probably the relevant screening wavenumber [37,40], τ_e ≈ 10^{-2} s. For the used material parameters, see table 2.
Measuring again energies in units of R_0 and distances in units of a_B and using the same approximations as in the calculation of the sticking coefficient (see appendix C), the desorption rate can be cast into
\tau_e^{-1} = \frac{R_0}{2\pi^3 \hbar Z}\, I_{\rm desorb} ,  (58)
where
I_{\rm desorb} = \int_0^{\infty}\! dR \int_{-\infty}^{\infty}\! d\omega\; \frac{1+n_B(\omega)}{1+(R/k_s)^2}\; f(R,\omega)\, g(R,\omega) ,  (59)
Z = \sum_n \exp[-\beta_s E_n], and n_B(E)
is again the Bose distribution function, and f(R,ω) is a one-dimensional integral defined in appendix C, Eq. (111). Thus, to obtain τ_e^{-1} from Eq. (58) we have to do a five-dimensional integral. As for the calculation of s_e we again use Gaussian quadratures for that purpose.
In Figure 13 we present, as a function of the screening parameter, numerical results for τ_e for an electron bound in the polarization-induced external surface states of various metal surfaces at k_B T_s = 0.05 eV. To be close to reality, we again corrected the binding energies |E_n| by a factor 0.7. As can be seen, except for small screening parameters and thus strong coupling, τ_e ≈ 10^{-2} s.
Compared to typical desorption times for neutral molecules, which are of the order of 10^{-6} s or less [36], the electron desorption time we find is rather long. This is a consequence of the exponential dependence of the desorption rate on the binding energy (viz. Eq. (56)) and the fact that the binding energy of the lowest surface state satisfies |E_1| ≫ k_B T_s. Thus, the electron desorbs de facto from the lowest surface state, which has a binding energy of ∼ 0.6 eV. The binding energies for neutral molecules, on the other hand, are typically one order of magnitude smaller and thus of the order of k_B T_s, resulting in much larger desorption rates and thus shorter desorption times.
Fig. 14 (caption): The product s_e τ_e as a function of k_B T_e for a thermal beam of electrons hitting a copper surface at various temperatures k_B T_s. The surface screening wavenumber is set to (k_s)_surface = 0.6 (k_s)_bulk, where (k_s)_bulk is the screening wavenumber for copper (see table 2).
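The gap between the two timescales is essentially a Boltzmann factor, as the following one-line estimate shows (the 0.05 eV binding energy for a neutral adsorbate is only a representative value of the order of k_B T_s, as stated above):

```python
import math

kTs = 0.05                         # surface temperature in eV
E_electron, E_neutral = 0.6, 0.05  # binding energies in eV

ratio = math.exp((E_electron - E_neutral) / kTs)
print(f"tau_electron / tau_neutral ~ {ratio:.1e}")
# ~6e4, roughly the gap between ~1e-6 s (neutrals) and ~1e-2 s (electrons).
```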
In the model for the quasi-stationary charge of a dust particle presented in the previous section the product (sτ ) e was of central importance.Combining ( 52) and ( 58), the microscopic approach gives
(s\tau)_e = \frac{16\, h}{(k_B T_e)^{3/2} (k_B T_s)^{-1/2}}\; \frac{I_{\rm stick}}{I_{\rm desorb}} \sum_n \exp[\beta_s |E_n|] ,  (60)
where I_stick and I_desorb are defined in Eqs. (53) and (59), respectively. Figure 14 shows numerical results for (sτ)_e for a copper surface as a function of the electron and surface temperature. The screening wavenumber is set to (k_s)_surface = 0.6 (k_s)_bulk and the binding energies are again corrected by the factor 0.7, which makes |E_1| coincide with the experimental value for copper. Notice the weak dependence of the product (sτ)_e on the electron temperature and the rather strong dependence on the surface temperature. The latter is of course a consequence of the exponential function in Eq. (60). Although the sticking coefficient and desorption times have values which are perhaps in contradiction to naive expectations, s_e being extremely small and τ_e being rather large, the product (sτ)_e has the order of magnitude expected from our surface model (see section 2). In particular, (sτ)_e ≃ 10^{-6} s for k_B T_s = 0.045 eV would produce grain charges of the correct order of magnitude. Thus, using Eq. (60) instead of Eq. (13) and k_B T_s as an adjustable parameter, which is still necessary because the grain temperature is unknown, we could produce, for physically realistic surface temperatures, surface charges for metallic grains which are in accordance with experiment [19].
Although the microscopic Eq. (60) has a similar structure as the phenomenological expression (13), there are significant differences. First, the microscopic formula contains more than one bound state and depends not only on T_s but also on T_e. In addition, there is a numerical factor 16 = 2 × 8, where the factor 2 comes from the fact that an electron traversing the quantum-mechanical boundary layer can make a transition to a bound state on its way towards the surface and on its way back to the plasma, and the factor 8 originates from the asymptotic form of the wavefunction for the incoming electron. The phenomenological approach simply assumes here a plane wave whereas the microscopic approach uses the asymptotic form of the unbound wavefunction. Moreover, the functions I_stick and I_desorb, which depend on the microscopic details of the inelastic scattering processes and thus on the electron and surface temperature as well as on material parameters such as the screening wavenumber, are in general not identical. Hence, I_stick/I_desorb ≠ 1.
For the hypothetical case of a single bound state, however, whose binding energy |E_1| is much larger than k_B T_s and k_B T_e, Eq. (60) reduces to a form which, for k_B T_e = k_B T_s, becomes identical to the phenomenological expression (13), except for the numerical factor referred to in the previous paragraph. The simplification arises because for low temperatures the integrals defining I_stick and I_desorb can be calculated asymptotically within Laplace's approximation (see appendix C). The sticking coefficient and desorption time are then given by
s^L_e = \frac{4\, |I^{(1)}_1|^2\, \bar g}{\pi\, \beta_s^{1/2} \beta_e^{1/2}} ,  (61)

\tau^L_e = \frac{8\pi^2 \hbar\, \beta_s^2}{R_0\, |I^{(1)}_1|^2\, \bar g}\, \exp[\beta_s |E_1|] ,  (62)
with ḡ defined in appendix C, Eq. (118).Thus, the product,
(s\tau)^L_e = \frac{16\, h}{(k_B T_s)^{3/2} (k_B T_e)^{-1/2}}\, \exp[\beta_s |E_1|] ,  (63)
is independent of the microscopic details of the inelastic scattering processes encoded in the product |I^{(1)}_1|^2 \bar g. Identifying |E_1| with the electron desorption energy E^d_e and setting k_B T_e = k_B T_s, we finally obtain from Eq. (63), except for the numerical factor 16 = 8 × 2, the phenomenological expression (13).
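As a quick numerical cross-check of Eq. (63) in the form reconstructed above, the following snippet evaluates (sτ)^L_e for copper-like parameters (|E_1| = 0.7 × 0.85 eV, Planck's constant in eV s); it lands in the 10^{-7} s range quoted in the next paragraph.

```python
import math

h = 4.1357e-15   # Planck constant in eV s

def stau_laplace(kTe, kTs, E1):
    # Eq. (63): (s tau)^L_e = 16 h (kTe)^(1/2) (kTs)^(-3/2) exp(|E1|/kTs)
    return 16.0 * h * math.sqrt(kTe) / kTs**1.5 * math.exp(E1 / kTs)

print(stau_laplace(kTe=0.045, kTs=0.045, E1=0.7 * 0.85))   # ~8e-7 s
```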
Using Eqs. (61)-(63) we find for an electron at a copper boundary with k_B T_e = k_B T_s = 0.045 eV, s^L_e = 6.23 × 10^{-6}, τ^L_e = 0.131 s, and (sτ)^L_e = 8.17 × 10^{-7} s. Taking only one bound state into account, the corresponding values obtained from Eqs. (52) and (58) are s_e = 4.42 × 10^{-6} and τ_e = 0.135 s, which leads to (sτ)_e = 6 × 10^{-7} s, indicating that at low temperatures Laplace's approximation works indeed reasonably well. Since τ_e does not depend on k_B T_e and k_B T_s is usually much smaller than |E_1|, approximation (62) for τ_e can actually always be applied, provided the assumption is correct that the electron is initially in thermal equilibrium with the surface and hence basically in its lowest bound state. The approximation (61) for s_e, on the other hand, deteriorates quickly with increasing electron temperature, as does the approximation (63) for (sτ)_e.
It is reassuring to be able to derive, under certain conditions and except for a numerical factor whose origin is however clear, from the microscopic expressions for s_e and τ_e the phenomenological relation (13) we used in [29] as an estimate for (sτ)_e. That (60) can be reduced to (13) is a consequence of the perturbative calculation of s_e and τ_e using the golden rule transition rate (38), which obeys detailed balance. In this respect, our calculation is on par with Lennard-Jones and Devonshire's original microscopic derivation of the product sτ for a neutral adsorbant [30]. In contrast to them, we keep however the lateral motion of the adsorbing particle, and the elementary excitations of the solid responsible for energy relaxation are electron-hole pairs and not phonons.
Critique
In the previous subsections we demonstrated for a particular case, a metallic boundary with electron energy relaxation due to creation and annihilation of internal electronhole pairs, how a quantum-mechanical calculation can be set up to obtain s e and τ e from a microscopic model for the electron-wall interaction.To obtain manageable equations we had to make various approximations, some were purely technical, but others concerned the physics.We now restate and criticize the approximations in the hope that it will be read as a list of to-do's.
We start with the purely technical approximations. In the calculation of the transition rate we neglected the dependence of the matrix element (37) on the lateral momentum transfer and furthermore approximated I^{(1)}_{kn}(Q = 0) by its leading term for k ≪ 1. Both approximations can be avoided but the final equations become more complex and the costs for their numerical handling accordingly higher. At the present stage of the investigation this seemed to us not justified, even more so because we do not believe that these approximations are the cause for the unexpectedly small values for s_e and the unexpectedly large values for τ_e. Neglecting the dependence on the lateral momentum even overestimates the matrix elements, hence the transition rate, and thus, eventually, s_e and τ_e^{-1}. The k-dependence of I^{(1)}_{nk} (see Eq. (87) in appendix B), on the other hand, can also not be so large that it increases the transition rate by the three orders of magnitude that would be required to obtain s_e ∼ 0.1 - 1 and τ_e ∼ 10^{-5} - 10^{-6} s, the values one would perhaps naively expect.
More critical for the matrix element (37) are the replacements (39) and (41) because they lead to wavefunctions vanishing in complementary halfspaces and thus to the factorization (43) of the matrix element. In reality the wavefunctions have tails in the complementary halfspaces. A model neglecting the tails therefore underestimates the matrix element. In addition, the replacements lead to the loss of crystal-induced surface states, about which we have more to say below, and hard-wire the artificial treatment of surface and volume electrons as two separate species. A more realistic modeling should therefore avoid these two approximations.
Both the electron sticking coefficient s_e and the electron desorption time τ_e were obtained from the golden rule for transitions between bound and unbound surface states. This is only justified for weak coupling and when one quantum of the elementary excitation suffices for the transition. When the coupling is strong, or when more than one quantum is necessary, a generalized golden rule has to be used in which the interaction matrix element is replaced by the corresponding on-shell T-matrix [31,36]. The calculation becomes more tedious but it can be done. A principal shortcoming, however, of any approach which uses golden-rule-type transition rates directly to calculate s_e and τ_e is that it assumes the occupancies of surface states to be only weakly affected by the transitions themselves. From the physisorption of neutral particles it is known that this is in general not true [34].
The calculation of the desorption time, for instance, was based on the assumption that the desorbing electron is initially in thermal equilibrium with the surface and that during the desorption process the equilibrium occupancy of the surface states does not change. The desorption time is thus much larger than the timescale on which thermal equilibrium at the surface is established, in which case the electron basically always desorbs from the lowest bound surface state. The equilibration on the surface is controlled by transitions between bound surface states. They have to be much faster than transitions between bound and unbound surface states. In the golden rule approach this information is put in by hand. Thus, although the τ_e obtained is consistent with the equilibrium assumption, it does not justify it. For that purpose, the calculation of τ_e has to be based on quantum-kinetic master equations which include not only transitions between bound and unbound surface states but also transitions between two bound surface states [35,36].
Master equations are also required when the elementary excitations of the solid have not enough energy to couple the lowest bound surface states to the continuum.In that case, the cascade model developed by Gortel and coworkers [32] has to be used.Its main idea is that an electron initially bound in a deep state can successively climb up to the continuum using weaker bound states as intermediaries.By necessity, it thus also contains transitions between bound surface states.
For metals, internal electron-hole pairs provide the most efficient electron energy relaxation channel, with phonons and other elementary excitations being unimportant because their energy is either too high (plasmons) or too low (phonons). Both lead to severe restrictions on the available phase space. For dielectric boundaries, however, it is the energy of internal electron-hole pairs, which is of the order of the intrinsic energy gap, that is too high to have any effect. Electron energy relaxation should then be primarily driven by phonons. Their energy, however, is in the cases of interest, for instance graphite or silicon, too low for promoting an electron from the lowest surface states all the way up to the continuum. Hence, s_e and τ_e have to be calculated from Gortel et al.'s cascade model. When the energy of the phonon is moreover not enough to connect two neighboring bound states, the transition rates entering the master equation have to be obtained from the generalized golden rule containing the T-matrix for electron-phonon coupling.
We expect multiphonon processes to play a role for all dielectric boundaries, even for graphite boundaries, where the Debye energy is rather high but not high enough to couple the two lowest image states (see table 1). This coupling, on the other hand, is the rate-limiting one, that is, the one which determines the electron desorption time. Multiphonon processes remain important when in addition to image states also crystal-induced surface states or dangling bonds are included, because these states, being stronger bound than image states, are energetically deep in the gap and thus far away from the vacuum (plasma) level.
In our model we made the overall assumption that plasma electrons cannot enter the plasma boundary (hard boundary condition at z = 0). At least electrons with an energy larger than the projected energy gap of the solid can, however, enter the plasma boundary, scatter inside the material, and bounce back to the surface, where they may be either re-emitted to the plasma or trapped in surface bound states. Processes inside the material can thus only be neglected when the projected energy gap is much larger than the typical energies of plasma electrons, that is, E_g ≫ k_B T_e, and when the floating potential Ū is approximately in the middle of this large gap.
Here we come to a potentially very interesting point, in particular, as far as metallic plasma boundaries are concerned.According to Fig. 5 the projected energy gap depends on the crystallographic orientation of the surface.Even planar metallic plasma boundaries will however almost never coincide with a single crystallographic plane.At best, they contain large, crystallographically well-defined facets, as we discussed in the context of spherical grains.Hence, the projected energy gap varies along the boundary.Regions can thus be expected where surface states are absent and plasma electrons can easily enter the boundary.In other regions large gaps prevent plasma electrons from entering the material.Instead they would sit in surface states.How all this affects the spatial distribution of surface charges is an open question.
For dielectric boundaries the projected energy gap is of the order of the intrinsic gap of the bulk material. It depends only weakly on the crystallographic plane. But a problem which concerns both metallic and dielectric surfaces is the existence of crystal-induced surface states. We would expect them to be less important for physisorption of electrons. Being strongly bound and having a center of gravity very close to the surface or even inside the plasma boundary, the spatial overlap between unbound surface states and crystal-induced surface states should be rather small. Hence, the matrix element controlling sticking into or desorption from a crystal-induced surface state should be much smaller than the corresponding matrix element involving weakly-bound polarization-induced surface states, which are always exterior to the boundary and, on a microscopic scale, even relatively far away from the surface. However, only a detailed study can show if our intuition is correct.
Another problem concerning both metallic and dielectric plasma boundaries is surface roughness.In our model, the plasma boundary is a well-defined mathematical plane.On the atomistic scale, and we actually do calculations on this scale, the surface is however not perfect.In a refined model for surface states this aspect, possibly in conjunction with surface reconstructions 4 and chemical contamination has to be taken into account.
Throughout we implicitly assumed that bound surface states exist although the exact surface potential supporting them is unknown. Since surface states have been detected many times [44,45,46,47,48,49,50,51,52,53,54,55] this assumption seems to be justified. Naturally, it would be desirable to calculate the surface potential from first principles. However, if not illusory, this is at least very challenging, even when the plasma boundary is planar and crystallographically well defined. The quantum-mechanical exchange and correlation effects determining the tail of the surface potential are beyond the local-density approximation, the work-horse of most ab-initio packages for the calculation of the electronic structure of solids. Instead, non-local density functional theory [73] has to be used, which is much more complicated to implement. A compromise would be to calculate the potential inside the boundary from an ab-initio local-density package and then continuously match this "internal" potential to the "external" potential deduced from a model Hamiltonian of the type presented in appendix A. The model produces a diverging potential only in the simplest approximation. With methods adapted from bulk polaron theory [42,43] potentials could be deduced which are finite at the boundary and thus continuously matchable with the periodic crystal potential obtained from the local density approximation.
As in other branches of surface science [36,67], a general strategy to short-circuit unknown microscopic details about the surface would be to work with simple, possibly analytically solvable models containing parameters that can be adjusted to experimentally measured quantities, for instance, the binding energy of surface states.
For this strategy to work, experimental techniques suitable for directly probing the electronic properties of surfaces, for instance, inverse photoemission spectroscopy [44,45,47], from which the binding energy and the lifetime of unoccupied electron surface states can be determined, have to be adapted to plasma boundaries. In addition, macroscopic quantities, such
as the quasi-stationary surface charge, the surface temperature, and the temperature and density of the electrons in the plasma also have to be known. So far, however, these combined data are not available for any experiment. For sure, surface charges have been measured in dielectric barrier discharges [25,24,23,22] and of course in complex plasmas, where in fact a great variety of techniques has been invoked to determine the charge of floating µm-sized dust particles [13,14,15,16,17,18,19]. But particularly in the experiments measuring grain charges the diagnostics of the hosting plasma is usually missing. In addition, although it is possible to measure the temperature of the grain [72], grain temperature and charge have not yet been measured simultaneously. For the microscopic modeling of surface charges it is however important to know at least these two quantities.
Footnote 4: Here we do not mean the reconstruction of the surface due to impacting plasma particles but the intrinsic reconstruction leading to geometrical differences between real terminations of crystals and ideal crystallographic planes [67].
Concluding remarks
In this colloquium we proposed to treat the interaction of electrons and ions with inert plasma boundaries, that is, boundaries which stay intact during their exposure to the plasma, as a physisorption process involving surface states.The sticking coefficients s e,i and desorption times τ e,i can then be calculated from microscopic models containing (i) a static potential supporting bound and unbound surface states and (ii) a coupling of these states to an environment which triggers transitions between them.Microscopically, the sticking of an electron or ion to the surface corresponds then to a transition from an unbound surface state to a bound one.Desorption of an electron or ion from the wall is then simply the reverse process.
Although this point of view can be applied to ions and electrons, we worked it out - for the particular case of a metallic boundary and within the simplest possible model - only for electrons, because the surface states for electrons are surface states in the ordinary sense, that is, states which are only a few nanometers away from the surface. The environment responsible for transitions between electron surface states, and thus for sticking and desorption of an electron, are therefore the elementary excitations of the solid. For ions, however, as soon as the surface has collected some electrons, the surface potential is the long-range attractive Coulomb potential. Sticking and desorption of ions thus occur far away from the surface. Nevertheless, provided the surrounding plasma is taken as the environment triggering transitions between ion surface states, the dynamics and kinetics of ions in front of the boundary can be described in close analogy to the electron dynamics and kinetics occurring much closer to the surface. Since without the surface no attractive Coulomb potential for ions would exist, the ion dynamics and kinetics is also a kind of surface physics, although it takes place far away from the surface.
Ions are much heavier than electrons and the potential most relevant for them, the Coulomb potential, varies on a scale much larger than the ion's de-Broglie wavelength.Quantum mechanics is thus not really required for studying the ion kinetics in front of a plasma boundary.Instead of pushing the quantum-mechanical techniques we used for electrons to the semiclassical regime, it is thus also possible to analyze ions with Boltzmann equations.In that case it is however crucial to set up two Boltzmann equations, one for unbound ions and one for bound ions.As in the quantum-mechanical calculation, collisions of bound and unbound ions with the atoms/molecules of the background gas determine the number of trapped ions and how they are spatially distributed.
Studying the ion dynamics and kinetics is important because it affects the rate with which ions and electrons may recombine in the vicinity of the grain surface. If the corresponding flux α_R σ_e σ_i is larger than the electron desorption flux τ_e^{-1} σ_e, the charge of the grain is the one which balances on the grain surface the electron collection flux s_e j^{plasma}_e with the ion collection flux s_i j^{plasma}_i and not with the electron desorption flux τ_e^{-1} σ_e as in our surface model for the grain charge. Provided s_e ∼ s_i this would eventually lead to the standard criterion from which the grain charge is calculated. We emphasize in this respect however that the rate equations for the surface densities σ_{e,i} are phenomenological. They should be derived from a quantum-mechanical surface scattering kernel taking bound surface states into account. Only then would we know if the microscopically obtained s_e and τ_e and the macroscopic plasma fluxes j^{plasma}_{e,i} are as simply connected as in the phenomenological rate equations. In any case, the quantum-mechanical approach for calculating s_e and τ_e stands by itself irrespective of the fate of our surface model for the grain charge.
Admittedly, the microphysics at the plasma boundary we discuss is not the one utilized in plasma technology. Precisely the processes we excluded are most important there: implantation of heavy particles, reconstruction or destruction of the surface due to high energy particles, and chemical modification due to radicals, to name just a few. The target surfaces are of course charged but, from the perspective of plasma technology, the surface charges only control the particle fluxes to the surfaces. Properties other than their mere existence are of no concern.
From a microscopic point of view, the technologically important surface processes just listed are extremely complicated. A description of these processes at the level at which, let us say, solid state physicists describe superconductivity in bulk metals is certainly far from reach. It may even not be required for plasma technology to proceed as a business. But as in other branches of science, it is the pleasure and duty of research driven by curiosity to push the understanding of particular processes, technologically relevant or not, to an ever increasing level of sophistication. We firmly believe that the elementary processes at plasma boundaries deserve such a microscopic understanding. It is our hope to have inspired other groups to join us on our journey to the microphysics at an inert plasma boundary. In particular, however, we would be eager to see experiments with well-defined model surfaces, which come as close as possible to the idealized boundaries theorists have to consider in their calculations, and which at the same time are accessible to the surface diagnostics used elsewhere in surface science.
A Microscopic model for the image potential
In this appendix we discuss a microscopic model which interprets the image potential in terms of virtual excitation of surface modes [41,42,43].The model is applicable to metals and dielectrics.
To be specific, we consider a planar plasma boundary in the xy plane, putting the plasma in the positive halfspace defined by z > 0. A convenient starting point for a microscopic description of the polarization-induced interaction between the electron and the boundary is the Hamiltonian [41,42,43]

H = -\frac{\hbar^2}{2m_e}\Delta + \omega_s \sum_{\mathbf K} a^{\dagger}_{\mathbf K} a_{\mathbf K} + \sum_{\mathbf K} \Gamma(K)\, \exp[-i\mathbf K\cdot\mathbf R - Kz]\,(a^{\dagger}_{\mathbf K} + a_{-\mathbf K}) ,  (64)

where a^{\dagger}_{\mathbf K} (a_{\mathbf K}) creates (annihilates) a surface mode with wavevector K and energy ω_s, the first term is the kinetic energy of the electron, the second term the energy of the surface modes responsible for the polarization interaction, and
\Gamma(K) = \left[ \frac{\pi e^2 \omega_s}{A K}\, \frac{\epsilon - 1}{\epsilon + 1} \right]^{1/2}  (65)
is the coupling function; K = (K_x, K_y) is a wavevector parallel to the surface, R = (x, y) denotes the projection of the electron position onto the surface, whose area is A, and z is the distance of the electron from the surface. For metals (ε = ∞), the relevant surface modes are surface plasmons with typical energies of a few electron volts, for instance, for copper, ω_s ≈ 2 eV [51]. For dielectrics (ε < ∞), the relevant surface modes are surface optical phonons, with, for instance, ω_s ≈ 0.43 eV for a material with ε = 12 and ω_T = 0.17 eV [78].
To approximately separate the static from the dynamic interaction, we apply to the Hamiltonian (64) the unitary transformation [43]
U = \exp\left\{ \sum_{\mathbf K} \left[ \gamma^*_{\mathbf K}(\mathbf R, z)\, a^{\dagger}_{\mathbf K} - \gamma_{\mathbf K}(\mathbf R, z)\, a_{\mathbf K} \right] \right\}  (66)
with
\gamma_{\mathbf K}(\mathbf R, z) = \frac{\Gamma(K)}{\omega_s}\, \exp[i\mathbf K\cdot\mathbf R - Kz] .  (67)
After the transformation the Hamiltonian reads
\tilde H = U H U^{\dagger} = -\frac{\hbar^2}{2m_e}\Delta + V_p(z) + \omega_s \sum_{\mathbf K} a^{\dagger}_{\mathbf K} a_{\mathbf K} + \frac{i\hbar}{m_e}\, \mathbf A(\mathbf r)\cdot\nabla + \frac{1}{2m_e}\, \mathbf A(\mathbf r)\cdot\mathbf A(\mathbf r)  (68)
where
V_p(z) = -\sum_{\mathbf Q} \omega_s\, |\gamma_{\mathbf Q}(\mathbf R, z)|^2 = -\frac{e^2(\epsilon - 1)}{4(\epsilon + 1)\, z}  (69)
is the classical image potential arising from virtual excitation of surface modes and
\mathbf A(\mathbf r) = -i\hbar \sum_{\mathbf K} \left[ \nabla \gamma^*_{\mathbf K}\, a^{\dagger}_{\mathbf K} - \text{h.c.} \right]  (70)
is a vector potential giving rise to a "minimal-type" dynamic coupling (the last two terms on the rhs of (68)) between the electron and the surface modes.
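It is easy to check Eq. (69) numerically: summing ω_s|γ_K|^2 over the surface modes (in the continuum limit Σ_K → A/(2π)^2 ∫d^2K) reproduces the classical image potential. The sketch below does this for the metallic limit ε → ∞, using e^2 = 14.40 eV·Å; the K-grid parameters are arbitrary numerical choices.

```python
import numpy as np

E2 = 14.40   # e^2 in eV * Angstrom (Gaussian-type units)

def image_potential_check(z, eps, Kmax=50.0, nK=200000):
    # Continuum version of Eq. (69): the angular d^2K integration gives 2*pi*K,
    # which cancels the 1/K of Gamma(K)^2, leaving a simple exponential integral.
    K = np.linspace(1e-8, Kmax, nK)               # in 1/Angstrom
    dK = K[1] - K[0]
    integral = np.sum(np.exp(-2.0 * K * z)) * dK  # = 1/(2z) for large Kmax
    Vp_num = -(np.pi * E2 * (eps - 1) / (eps + 1)) * 2 * np.pi * integral / (4 * np.pi**2)
    Vp_exact = -E2 * (eps - 1) / (4 * (eps + 1) * z)
    return Vp_num, Vp_exact

print(image_potential_check(z=2.0, eps=1e6))      # both ~ -1.8 eV
```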
The first two terms on the rhs of (68) describe an electron in a potential. Diagonalizing these two terms, that is, using the eigenstates of Eq. (26) with V(z) → V_p(z) as a basis, and ignoring the nonlinear term ∼ A^2, we obtain the Hamiltonian (71). Without the dynamic coupling to surface modes, H → H_e, and we would have obtained the model we used for the calculation of s_e and τ_e.
Obviously, the dynamic coupling encoded in the last term on the rhs of (71) renormalizes the classical image states.The eigenstates of the full Hamiltonian -the true polarization-induced surface states -are not identical to the classical image states.The latter should be considered as zeroth order (or bare) eigenstates.Better approximations can be constructed using methods from polaron theory [42,43].At large enough distances, however, where the residual interaction becomes negligibly small, classical image states are reasonably good approximations to the true polarization-induced surface states.
Separating the lateral from the vertical motion according to Eqs. (26) and (28), the matrix element for the dynamic coupling between the electron and the surface mode becomes
G_{qq'}(\mathbf Q, \mathbf K) = \frac{\Gamma(K)}{m\,\omega_s\, A} \left[ \mathbf Q\cdot\mathbf K\; J^{(1)}_{qq'}(K) - K\, J^{(2)}_{qq'}(K) \right]  (73)

with

J^{(1)}_{qq'}(K) = \int dz\, \psi^*_q(z)\, \exp[-Kz]\, \psi_{q'}(z) ,  (74)

J^{(2)}_{qq'}(K) = \int dz\, \psi^*_q(z)\, \exp[-Kz]\, \frac{d}{dz}\psi_{q'}(z) .  (75)
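For the lowest image state the overlap integrals (74) and (75) can be evaluated with a few lines of code. The sketch assumes the hydrogen-like model wavefunction ψ_1(z) ∝ z exp(-z/a) of a 1/z potential with a hard wall at z = 0, with the length scale a set to 1 in arbitrary units; it is meant only to illustrate how J^(1) and J^(2) behave with K.

```python
import numpy as np

def psi1(z, a=1.0):
    # Normalized lowest bound state of a 1/z potential with a hard wall at z=0
    # (model assumption): psi_1(z) = 2 a^(-3/2) z exp(-z/a).
    return 2.0 * a**-1.5 * z * np.exp(-z / a)

def dpsi1(z, a=1.0):
    return 2.0 * a**-1.5 * (1.0 - z / a) * np.exp(-z / a)

def J1_J2(K, a=1.0, zmax=60.0, n=200000):
    # Overlap integrals of Eqs. (74) and (75) for q = q' = 1.
    z = np.linspace(0.0, zmax, n)
    dz = z[1] - z[0]
    damp = np.exp(-K * z)
    J1 = np.sum(psi1(z, a) ** 2 * damp) * dz
    J2 = np.sum(psi1(z, a) * damp * dpsi1(z, a)) * dz
    return J1, J2

for K in (0.1, 0.5, 2.0):
    print(K, J1_J2(K))
```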
In general, G_{qq'} is non-diagonal. It contains intraband (q = q') and interband (q ≠ q') transitions. The latter could in principle affect the physisorption kinetics of electrons (understood - for the moment - as transitions between bound and unbound bare surface states). This happens however only when the energy of the surface mode is comparable to k_B T_s, where T_s is the surface temperature, as well as comparable to the energy spacing of the bare surface states. Transitions between bare surface states are then associated with creating or annihilating real surface modes, in contrast to virtual modes which would only renormalize the energies E_{Qq} and the wavefunctions ψ_{Qq}. Physisorption would then be triggered by other elementary excitations of | 2014-10-01T00:00:00.000Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "72713f90c591bb7d7353f5ace153ec3095f5b733",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0901.4915",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "72713f90c591bb7d7353f5ace153ec3095f5b733",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
270308017 | pes2o/s2orc | v3-fos-license | Commitment-based human resource practices, job satisfaction and proactive knowledge-seeking behavior: The moderating role of organizational identification
Purpose – Based on social exchange theory and social identification theory, I investigated how employee organizational identification affects the effectiveness of commitment-based human resource (HR) practices. I focused on employee attitudes (job satisfaction) and behaviors (proactive knowledge seeking) as HR practices' outcomes. Design/methodology/approach – Using a structural equation modeling analytical approach, I tested the hypotheses with data from a web-based cross-sectional survey of 208 specialists and engineers of manufacturing subsidiaries in Poland. Findings – Results showed that the positive relationship between commitment-based HR practices and job satisfaction is weakened for employees strongly identified with the organization. Simultaneously, the connection between seeking knowledge and job satisfaction is stronger and more important for people who identify moderately to strongly. Research limitations/implications – The study limitations regard mainly its cross-sectional design and single cultural and industrial context. Practical implications – From the managerial perspective, the study suggests that to enhance proactive employee behavior, companies need to increase employee organizational identification and ensure that employees have a positive perception of the implemented HR practices. Originality/value – The study contributes to the ongoing discussion on whether individual contingencies affect the effectiveness of commitment-based HR practices in the form of individual attitudinal and behavioral outcomes. The findings revealed that the contingent effect of organizational identification depends on the type of individual outcomes, suggesting that the strength of organizational identification affects how employees decide to reciprocate the organization's attention and investment.
Introduction
Human resource (HR) practices are one of the main ways managers affect employees' well-being, attitudes, behavior, and further individual and unit-level performance (Jiang, Takeuchi, & Lepak, 2013). The commitment-based HR (CB-HR) system treats employees as valuable and unique human capital with an emphasis on developing long-term employer-employee relationships (Lepak & Snell, 2002) and results in positive firm innovation, performance (Allen, Ericksen, & Collins, 2013), and HR outcomes (Nieves & Osorio, 2017). On the individual level, the main theoretical explanation of how HR practices affect employee attitudes is social exchange theory (SET; Blau, 1964), which argues that if the organization supports employees' needs and expectations, then they are willing to voluntarily reciprocate the "organization's commitment to them" (Whitener, 2001, p. 516) with their attitudes and behaviors. While numerous studies have examined the effect of HR practices on attitudes (job satisfaction, organizational commitment; e.g. Edgar & Geare, 2014), only a few studies have looked at proactive behaviors (Maden, 2015; Elorza, Harris, Aritzeta, & Balluerka, 2016), and not at knowledge-seeking behavior. Knowledge-seeking is a discretionary work behavior that creates learning-related opportunities and increases work effectiveness by enabling access to valuable complementary insights and experiences from co-workers. Knowledge-seeking initializes and directs knowledge flow among employees (Gubbins & Dooley, 2021) and supports knowledge-sharing effectiveness by externalizing knowledge needs in a request. In the recent decade, the importance of knowledge-seeking behavior has grown because "the increasingly interdependent and dynamic nature of work requires daily collaboration with others" (Burmeister, Alterman, Fasbender, & Wang, 2022, p. 1303) and sourcing knowledge from co-workers helps to get work done with higher quality, react to disturbances faster, and solve problems quicker (Lim, Tai, Bamberger, & Morrison, 2020). Despite evidence for the positive effect of CB-HR systems on employees, some scholars argue that there are industry- or strategy-based boundary conditions to HR practices' effectiveness (Collins & Kehoe, 2017). Moreover, researchers signal that employees also affect the effectiveness of HR practices (Kooij & Boon, 2018), pointing to the conditional role of commitment (Yousaf, Sanders, & Yustantio, 2018) or identification (Mostafa, Bottomley, Gould-Williams, Abouarghoub, & Lythreatis, 2019). Those results suggest a need to further examine the employee-related contingencies of HR effectiveness concerning positive work attitudes and proactive behaviors. In the present research, organizational identification is the employee-related contingent factor under study. It represents how employees define themselves regarding perceived affiliation with an organization and how they sense who they are (Ashforth & Mael, 1989).
Therefore, having in mind the importance of job satisfaction as a component of individual well-being, which organizations aim to sustain, and of proactive knowledge-seeking behavior as a behavior initiating intraorganizational knowledge flow, the present study seeks to answer the following question: How does the organizational identification of employees affect the relationships between CB-HR practices and employees' job satisfaction and knowledge-seeking behavior? To address this question, I conducted a cross-sectional survey among 208 specialists working in four manufacturing-based subsidiaries located in Poland. This study concentrates on subsidiary workers as they are overlooked (Lindsay, Sheehan, & Cieri, 2017), even though MNE subsidiaries employ 22 million European Union workers (in Poland, 2.1 million, 45% in the manufacturing industry) (GUS, 2022).
From the theoretical perspective, I aimed to integrate insights from two theories, social exchange theory (SET) and social identification theory (SIT; Tajfel & Turner, 1986), in predicting positive work attitudes (job satisfaction) and proactive behavior (knowledgeseeking).Those theories conceptualize and explain the effects of the psychological relationships between employees and employing organizations differently.While SET focuses on how the employees' evaluation of the relationship quality the organization develops with them (e.g. with HR practices) influences job-related attitudes and behaviors, SIT draws attention to the role of an extent of individual attachment to the organization in forming and expressing the individuals' attitudes and behaviors.Integrating social exchange CEMJ and social identification perspectives, previously studied extensively in isolation, allows for a better understanding of how different psychological relationships between individuals and their organizations predict employee behavior (Van Knippenberg, Van Dick, & Tavares, 2007).The previous results on the integration of SET and SIT have yielded inconsistent results, suggesting that a strong organizational identification may either strengthen or weaken an employee's tendency to reciprocate the received organizational treatment.Thus, I aimed to identify the type of interaction effect, whether additive or substitutive, between those psychological employee-organization relationships, depending on the type of employee outcome.Figure 1 illustrates the overall conceptual framework.
This study contributes to the literature in several ways.Firstly, it adds to the research on individual outcomes of the employee's perceptions of CB-HR practices by examining knowledge-seeking behavior and job satisfaction.Secondly, it advances understanding of the individual-level contingencies of HR practices' effectiveness, specifically by investigating when CB-HR practices are more or less effective concerning job satisfaction and knowledgeseeking behavior.Thirdly, the present study contributes to the stream of research on the effects of integrating the social exchange and social identification perspectives on individual attitudes and behaviors.
The article is structured as follows: The theory and hypotheses section briefly reviews CB-HR practices from the perspective of SET and introduces their relationships with job satisfaction and knowledge-seeking behavior.It also raises the moderating role of organizational identification in those relationships.The method section presents the data collection approach, analytical procedure, and data analyses, followed by the results presentation.Finally, the last section concludes the research by discussing theoretical and practical implications, limitations, and potential paths for further research.
Theory and hypotheses
Commitment-based HR practices and social exchange theory
When examining the effects of HR practices, researchers indicate that they require analysis as bundles or systems of practices (Jiang, Lepak, Hu, & Baer, 2012). The HR practices applied in a company represent the aspects of people and employee relationship management to which managers pay particular attention, thus creating the HR system (e.g. Kehoe & Wright, 2013; Flinchbaugh, Li, Luth, & Chadwick, 2016; Al-Amin, Akter, Akter, Uddin, & Mamun, 2021). In the CB-HR system, HR practices aim to create a long-term relationship with employees by "forging psychological links between organizational and employee goals" (Arthur, 1994, p. 672) because, in this view, human capital is a valuable component of the firm's knowledge base (Lepak & Snell, 2002). By applying CB-HR practices, managers show employees that the organization cares for their long-term employment, development, and well-being (e.g. Meijerink, Bos-Nehles, & de Leede, 2020). Specifically, managers apply CB-HR practices to hire and maintain organizationally committed and competent employees who can be trusted in how they perform their tasks, based on the assumption that those committed employees would be willing to support the accomplishment of the organizational goals. The commitment-based recruitment and selection practices emphasize the need for person-organization fit, including the alignment with the organizational values and the firm's growth over time (Collins & Kehoe, 2017). The commitment-based development and appraisal practices focus on developing the skills and competencies of employees within an organization over time (Lepak & Snell, 2002). Thus, the CB-HR system comprises extensive general skills training, broadly defined jobs, higher salaries, and comprehensive benefits (Arthur, 1994). With respect to subsidiaries, MNCs apply diversified approaches regarding the degree of decentralization of HR practices at the subsidiary level, ranging between two extremes: standardizing HR practices throughout the MNC or localizing them according to the host organization's needs. Recent studies show that subsidiaries conform their HR practices to key institutionalized norms in the host country because the HR function is closely related to the local environment (Stavrou, Parry, Gooderham, Morley, & Lazarova, 2023). Moreover, they also standardize the best HR practices throughout the MNC (Pudelko & Harzing, 2007), which allows them to differentiate locally from other companies. Therefore, the HRM system applied in a subsidiary can be a mix of practices specific to the host context and practices standardized across the MNC. However, the extent to which a subsidiary can decide about its HR policies and practices is associated with better subsidiary performance via employee behavior (Lazarova, Peretz, & Fried, 2017).
From the employee perspective, SET (Blau, 1964) and the norm of reciprocity (Gouldner, 1960) are the dominant explanations of how and why the CB-HR system affects employee attitudes and behaviors. SET posits that a social exchange relationship forms when one party provides benefits to the other, without specifying the expected form, value, or timing of reciprocation. The resources involved in such an exchange, whether economic, social, or emotional, are not pre-determined (Shore, Coyle-Shapiro, Chen, & Tetrick, 2009). For the recipient, the perceived value of a favor creates a sense of obligation (or indebtedness) to reciprocate by doing something for the provider's benefit (Gouldner, 1960). Therefore, employees' positive perception of HR practices can motivate them to reciprocate the organization with their behavior or attitudes. By implementing a bundle of CB-HR practices, organizations establish and support positive social exchange relationships with employees, which "encourage employees to reciprocate by using their abilities and motivation in the pursuit of organizational goals" (Allen et al., 2013, p. 154). CB-HR practices enhance employees' understanding that the organization does something good for them: they perceive greater organizational support (Wahab, Tatoglu, Glaister, & Demirbag, 2021) and better job security (Latorre, Guest, Ramos, & Gracia, 2016), or create a relational rather than transactional psychological contract with the organization (Uen, Chien, & Yen, 2009). Therefore, the employee's positive perception of CB-HR practices is related to their in-role and organizational citizenship behaviors (Uen et al., 2009), work engagement (Meijerink et al., 2020), effort (Wahab et al., 2021), organizational commitment (Farndale, Hope-Hailey, & Kelliher, 2011), and job satisfaction (Latorre et al., 2016).
However, sometimes, HR systems have different (not always positive) effects on employee outcomes (e.g.Dysvik & Kuvaas, 2008).From SET, scholars argue that the employees' sense of obligation to reciprocate the organization and how they decide to do this depends on the perceived value of the organizational investments in them.Therefore, employee-related factors may affect this perception (Mostafa et al., 2019).Consequently, HR practices might not appear equally beneficial to every individual and they will not motivate individuals equally to put in the effort to exhibit certain behaviors or have a positive attitude in exchange.
HR practices and job satisfaction
Job satisfaction is a positive attitude about one's job or job situation (Saks & Gruman, 2014) that reflects "a pleasurable or positive emotional state resulting from the appraisal of one's job or job experiences" (Locke, 1976, p. 1304). It has a positive effect on employee well-being (Pekkan & Bicer, 2022), prosocial behavior in the workplace (e.g. De Clercq, Haq, & Azeem, 2019), and job performance (e.g. Judge, Thoresen, Bono, & Patton, 2001). Besides the nature of the work itself and individual psychological factors, the antecedents of job satisfaction also include organizational factors like social support, interaction with co-workers, and HR practices (cf. Bowling & Hammond, 2008; Hauff, Alewell, & Hansen, 2014).
However, on the individual level, limited studies examine the link between perceived CB-HR practices and job satisfaction (e.g.Latorre et al., 2016).We may explain this relationship with insight from SET.By introducing HR practices aiming to select and hire employees that fit the organization over the long run and also grow and develop employees within and build an internal community (Collins & Kehoe, 2017), managers inform employees that they care not only for the performance but also for their career plans and are willing to maintain the individual-organizational goal alignment.Thus, employees are more likely to perceive the working environment positively, because an organization will help them grow (Allen et al., 2013), give support (Rhoades & Eisenberger, 2002), and satisfy their affiliation need.Thus, employees are willing to reciprocate with a positive affective reaction towards the job (Mostafa & Gould-Williams, 2014).Therefore, I hypothesize: H1.CB-HR practices relate positively to employees' job satisfaction.
HR practices and knowledge-seeking behavior
Knowledge-seeking behavior is conceptualized as "proactively requesting task-related information, know-how, or feedback from another member" (Haas & Cummings, 2015, p. 37) of an organization to get valuable complementary insight.To source knowledge, employees actively interact with selected knowledgeable individuals (Gubbins & Dooley, 2021) who can help them achieve the desired work-related goals (Lim et al., 2020).Through the knowledge inquiry, individuals take the initiative not only to get a better understanding of their work or problem situation and decrease ambiguity (Grant & Ashford, 2008), but also to perform the work better or improve it by basing on the solutions grounded in an organization (Rudawska & Gadomska-Lila, 2023).Knowledge-seeking can also prepare individuals for future activities by expanding their knowledge and skills (Crans, Bude, Beausaert, & Segers, 2021).This corresponds with proactive behavior, defined as "self-starting, future-oriented behavior that aims to bring about change in one's self or the situation" (Bindl & Parker, 2010).
In line with the arguments of SET, building long-term organization-employee relationships with CB-HR practices develops in employees a sense of obligation to work harder for the organization's benefit in order to repay the favorable treatment. This obligation makes them feel more accountable for their own and the organization's performance and makes them willing to act proactively (Caesens, Marique, Hanin, & Stinglhamber, 2016). As Grant and Ashford (2008) proposed, accountability for a goal, task, or group causes an individual to perceive higher potential benefits of performing risky behavior (such as proactive behavior) and to calculate fewer potential costs related to it. Regarding knowledge-seeking behavior, employees who feel accountable are willing to proactively inquire about and gain needed knowledge from co-workers to perform their tasks better, solve problems, or learn. Moreover, with CB-HR practices, employers inform employees that they are essential organizational capital and that their contribution is highly valued. This creates an environment where proactive behavior is welcome (Crant, 2000). Thus, I hypothesize:
H2. CB-HR practices relate positively to knowledge-seeking behavior.
Contingent role of organizational identification
Organizational identification captures the linkage between an employee and an organization based on SIT insight (Tajfel & Turner, 1986). As a specific form of social identification, organizational identification is a perceptual, cognitive construct. It is an individual's perception of oneness with the organization and of being "psychologically intertwined with the fate" of the organization (Ashforth & Mael, 1989, p. 21). Consequently, employees perceive themselves in terms of the organization's state and the "characteristics they share" with other organizational members (Van Knippenberg & Van Schie, 2000). In this line, the employees' attitudes and behaviors can be governed by the organization, because the more an individual identifies with it, "the more likely he or she is to take the organization's perspective and act in the organization's best interest" (Van Knippenberg & Van Schie, 2000, p. 138). By feeling oneness with the organization, strongly identified individuals take care of its welfare as their own and perceive organizational success as their own (Blader, Patil, & Packer, 2017).
The way employees define themselves in terms of the employing organization may create boundary conditions for how organizational practices affect their attitudes and behaviors. Previous studies propose two different interaction mechanisms between SET and SIT when predicting employees' outcomes. The first suggests that the combined effect of the motivational forces deriving from SET and SIT is "not additive" (Mostafa et al., 2019). On the one hand, the social exchange perspective buffers the adverse effects of low identification by placing a strong sense of obligation to reciprocate, enhancing active behavior on behalf of the organization. On the other hand, strong identifiers' sense of oneness with the organization motivates them to behave actively regardless of the level of the social exchange relationship with the organization (Van Knippenberg et al., 2007). Thus, for strongly identified individuals, social exchange does not play a motivational role in fostering their behavior, because organizational identification implies employee-organization psychological unity. In contrast, in social exchange, an employee and an organization are psychologically separate entities.
However, Tavares, van Knippenberg, and van Dick (2016) propose another mechanism of social exchange and social identity interaction, arguing that even strongly identified individuals recognize the organization as a separate entity. They suggest that the organizational identification of employees "influence the social exchange content and resources (...) people choose to reciprocate" (Tavares et al., 2016, p. 36). Thus, employees' behavior and attitudes can be considered "social currencies" they use to pay back the organization. In this line, for strongly identified individuals who perceive themselves through an organizational lens, job dissatisfaction, low loyalty, or turnover intention will not be an attractive social exchange response to low-quality HR practices, because of high self-definition costs. Therefore, for strongly identified individuals, social exchange processes with job satisfaction (an affective attitude) as an exchange "currency" will be less important. In contrast, social exchange processes with individual attitudes as the reciprocation "currency" will play a more significant role for low-identified employees. On the other hand, initiating proactive knowledge-seeking behavior represents doing something with the aim of change, improvement, and better results (while refraining from it carries no negative self-perceptual consequences). Therefore, when employees strongly identify with the organization and adopt its interests, they are more motivated to engage in proactive behavior (Blader et al., 2017). In turn, this enhances their willingness to reciprocate supportive HR practices with such behavior. In the present study, I assume that job satisfaction (a positive, affective attitude) and knowledge-seeking behavior (a proactive behavior) are different types of "social currencies" individuals reciprocate to their employing organization.
H3. Organizational identification negatively moderates the relationship between CB-HR practices and job satisfaction in such a way that the link between CB-HR practices and job satisfaction is weakened when the organizational identification is strong.
H4. Organizational identification positively moderates the relationship between CB-HR practices and knowledge-seeking behavior in such a way that the link between CB-HR practices and knowledge-seeking is strengthened when the organizational identification is strong.
Method
Sample and data collection
I tested the hypotheses with a sample of 208 knowledge workers from four medium- and big-sized manufacturing subsidiaries of multinational enterprises (MNEs) (automotive accessories, metal products, road safety) located in Central Europe (Poland). The subsidiaries, each employing over 120 workers, mainly focus on production and distribution. They have been operating for at least a decade; thus, I assumed that they have developed systems of HR practices (Huselid, 1995). I collected the data using a cross-sectional survey with self-reported measures, which allow for the measurement of variables that are hard to evaluate by outside observers (i.e. job satisfaction, organizational identification, or the perception of HR practices). Initially, I contacted 50 randomly selected subsidiaries located in the North West of Poland, and eventually four participated in the study (last quarter of 2020 and first quarter of 2021; the Covid-19 pandemic did not affect the participating companies substantially). Based on data from local HR departments, I chose 402 full-time employees who had been working on-site for at least the past six months and who used a computer and a formal email for work. HR departments sent an official invitation to participate in the study, and I then sent every individual an invitation with a questionnaire URL link. The invitation contained a description of the study aims and data usage, assurances about anonymity and confidentiality, voluntariness, the possibility of withdrawal, and consent. The final sample included 208 employees (52% response rate), of whom 29% were women and 69% held at least a bachelor's or engineering degree. The average organizational tenure was 8.1 years (54% had tenure higher than eight years), and the average tenure in the position was five years.
Measures
Unless stated otherwise, items were measured on a 7-point Likert scale (1 - totally disagree, 7 - totally agree). The questionnaire was in Polish; therefore, I adopted the translation-back-translation method for items originally in English. To ensure the validity and reliability of the measures, I performed a pilot test on a group of management students working for various organizations.
CB-HR practices were measured as employee perceptions using the descriptive observation-based approach (Wang, Kim, Rafferty, & Sanders, 2020), in which individuals evaluated each item on a scale of 1-7 (1 - it was not applied; 7 - it was applied fully), answering the general question of to what extent the following HR practices were applied to them or to other employees holding a similar position. I adopted the items from Collins and Smith (2006) and Chadwick, Super, and Kwon (2015), reflecting four groups of HR practices. Interviews with HR practitioners and middle-level managers led to the exclusion of several practices (e.g. an offering of company shares, job rotation, or team-building training). Finally, the recruitment and selection aspect of HR included three items describing the company's commitment to internal hiring and to selecting individuals who can grow with the company (α = 0.74). The sample item was: "In the selection process, the company focuses on
the potential of the candidate to learn and grow with the organization." The incentives and compensation HR component was measured with four items describing organization-based incentives and the competitive level of salaries (α = 0.85), with a sample item: "Employee bonuses or incentive plans are based primarily on the performance of the company." The appraisal and development HR component was measured with five items regarding employee social integration and adaptation programs, long-term growth, and development of employees (α = 0.84). The sample item was: "Performance appraisals are used primarily to set goals for personal development." I measured the communication HR component with three items regarding communicating the company's plans and outcomes and obtaining feedback from employees (α = 0.85). The sample item was: "The company listens to employees' opinions through different kinds of formal or informal programs (e.g. surveys)." The confirmatory factor analysis (CFA) supported the four-dimensional structure of the CB-HR measure: the four-factor model (χ²(83) = 139; CFI = 0.964; RMSEA = 0.057; SRMR = 0.048) was significantly better than the one-factor model. Following the subscale aggregation approach (Chadwick, Super, & Kwon, 2015), I calculated the mean scores of each HR component and then used them as indicators of the CB-HR latent variable (α = 0.80).
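For readers who want to reproduce the scale-level computations, the sketch below shows how a subscale's Cronbach's α and its aggregated mean score (the indicator fed into the CB-HR latent variable) can be computed. The data generated here are synthetic placeholders, not the study's responses, and the three-item subscale is only an illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: (n_respondents, n_items) matrix of Likert scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 3-item subscale for 208 respondents on a 1-7 scale (illustration only)
rng = np.random.default_rng(0)
latent = rng.normal(4.0, 1.0, size=208)
subscale = np.clip(np.round(latent[:, None] + rng.normal(0.0, 0.8, size=(208, 3))), 1, 7)

alpha = cronbach_alpha(subscale)
indicator = subscale.mean(axis=1)  # subscale mean used as an indicator of the latent CB-HR variable
print(f"Cronbach's alpha = {alpha:.2f}")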
Job satisfaction was measured with a 3-item scale (Nielsen & Colbert, 2022), with the sample item "All in all, I am satisfied with my job." Cronbach's α was 0.86.
Next, knowledge-seeking behavior, conceptualized as proactively requesting different types of knowledge from co-workers, was measured on a 7-point frequency-based scale (1 - never; 7 - always, it is my daily routine) with four items based on Mohammed and Kamalanabhan (2019) and De Vries, Van Den Hooff, and De Ridder (2006). The items refer to asking for needed work knowledge, asking to be taught a skill (such as a method of analysis), requesting remarks on a work-related topic, and inquiring about work-related issues (α = 0.80). The sample item was "I asked my co-workers for certain knowledge when I needed it." Organizational identification was measured with five items adapted from Mael and Ashforth (1992), with the name of each local company inserted (α = 0.83). The sample item was "When someone praises [name of a subsidiary], it feels like a personal compliment." Following prior research, organizational tenure (in years), tenure in the specific position (in years), educational level (5 educational levels), and the need for creativity in the position ("To what extent is creativity needed in the work you perform?" on a 5-point scale from 1 - not at all to 5 - to a very great extent) were the control variables in the models.
Common method bias
Because of the potential concern of common method bias (CMB) related to the cross-sectional survey and self-reported measures, I applied several a priori remedies (e.g. physical distance between the dependent and independent variables and between items of the same measure, and different anchor labels of the scales; Podsakoff, MacKenzie, & Podsakoff, 2012). Moreover, I conducted a post hoc common method variance assessment using Harman's single-factor test. The results of the exploratory factor analysis showed that one factor explained 32% of the variance; hence, CMB should not be a significant problem (Fuller, Simmering, Atinc, Atinc, & Babin, 2016). Moreover, according to simulation studies, common method variance does not affect interaction results (Siemsen, Roth, & Oliveira, 2010). Thus, in the case of the main hypotheses, the concern of CMB was somewhat weakened.
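As a minimal sketch of this post hoc check, Harman's single-factor test can be operationalized by extracting the first unrotated factor from all survey items and inspecting its variance share; the snippet below uses the first principal component as a common proxy for that unrotated factor and random placeholder data instead of the study's item responses.

import numpy as np
from sklearn.decomposition import PCA

def harman_first_factor_share(item_matrix: np.ndarray) -> float:
    # Variance share of the first unrotated component across all items;
    # values well below 50% are usually read as "no single dominant method factor".
    X = (item_matrix - item_matrix.mean(axis=0)) / item_matrix.std(axis=0, ddof=1)
    return PCA().fit(X).explained_variance_ratio_[0]

# Placeholder item matrix: 208 respondents x 20 items (not the study's data)
items = np.random.default_rng(1).normal(size=(208, 20))
print(f"First-factor variance share = {harman_first_factor_share(items):.0%}")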
Analytical procedure
First, I calculated the intraclass correlation coefficients (ICCs) for the dependent variables to evaluate whether their variance was significantly affected by the clustered data structure (employees nested in different companies). The ICC1s were very low (job satisfaction ICC1 = 0.008; knowledge-seeking ICC1 = 0.004); therefore, I conducted single-level analyses (Heck & Thomas, 2020, pp. 36-37). I tested the hypotheses with structural equation modeling (SEM) because of the use of latent variables with multiple indicators (Kline, 2016). I applied a three-step analytic approach, starting with the measurement model with confirmatory factor analysis (CFA), followed by the direct path model (SEM model 1), and finally the moderation model (SEM model 2). I used the latent moderated structural equations (LMS) method for the moderation analyses, because it enables the moderation of latent variables without the need to compute interaction product terms and has greater power to detect latent interaction effects (Sardeshmukh & Vandenberg, 2017). Prior to the interaction, I standardized the moderator (organizational identification). Then, following Aiken and West's (1991) procedure, I performed simple slope tests for the situations in which the scores of the moderator were at the mean level and one standard deviation above and below the mean. I ran all the analyses with MPlus 8.8 software.
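The first step of that procedure, computing ICC(1) for a dependent variable, follows the usual one-way random-effects ANOVA estimator; the sketch below illustrates it with simulated job-satisfaction scores clustered in four hypothetical companies (the data and group sizes are assumptions, not the study's).

import numpy as np

def icc1(values: np.ndarray, groups: np.ndarray) -> float:
    # ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), with k the average group size
    labels = np.unique(groups)
    n, g = values.size, labels.size
    k = n / g
    grand = values.mean()
    msb = sum(np.sum(groups == lab) * (values[groups == lab].mean() - grand) ** 2
              for lab in labels) / (g - 1)
    msw = sum(np.sum((values[groups == lab] - values[groups == lab].mean()) ** 2)
              for lab in labels) / (n - g)
    return (msb - msw) / (msb + (k - 1) * msw)

# Simulated example: 208 scores in 4 companies with no real company effect (ICC1 near zero)
rng = np.random.default_rng(2)
company = rng.integers(0, 4, size=208)
jobsat = rng.normal(5.0, 1.0, size=208)
print(f"ICC1 = {icc1(jobsat, company):.3f}")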
Concerning the sample size, I performed a power analysis for the RMSEA test of not-close fit for the full SEM model (Jak, Jorgensen, Verdam, Oort, & Elffers, 2021). The study's sample of 208 participants exceeded the minimum recommended sample size of 194, estimated with a significance level of 0.05 and a power level of 0.80.
Measurement model
I examined a series of CFAs to verify the hypothesized four-factor measurement model and to check that all measures were distinct. The analyses showed that the four-factor model with control variables fitted the data well (χ²(145) = 266, p < 0.01; CFI = 0.924; RMSEA = 0.063; SRMR = 0.058) and significantly better than the alternative models (Table 1). The CFA revealed sufficient reliability and convergent validity of the measures, as composite reliability (CR) ranged from 0.80 to 0.87 (significantly higher than the threshold of 0.60), and average variance extracted (AVE) was equal to or greater than 0.5. Moreover, the square root of each construct's AVE was higher than the correlation coefficients, thus indicating discriminant validity of the measures (Table 2).
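The CR and AVE figures reported above follow the standard formulas based on standardized factor loadings; the sketch below shows the computation for a hypothetical four-indicator construct (the loading values are illustrative, not the study's estimates), including the square root of AVE used in the Fornell-Larcker comparison of Table 2.

import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    errors = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of squared standardized loadings
    return float(np.mean(loadings ** 2))

lam = np.array([0.72, 0.78, 0.69, 0.75])   # hypothetical standardized loadings
cr, ave = composite_reliability(lam), average_variance_extracted(lam)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}")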
Testing the hypotheses
With SEM model 1, I tested hypotheses 1 and 2, which addressed the direct links between CB-HR practices and the individual outcome variables, job satisfaction and knowledge-seeking behavior (Table 3). The model exhibited a good fit to the data (χ²(147) = 266.5, p < 0.01; CFI = 0.922; RMSEA = 0.063; SRMR = 0.058). Results showed that CB-HR practices have a positive link with job satisfaction (β = 0.374; p < 0.01) and a positive but weaker link with proactive knowledge-seeking behavior (β = 0.191; p < 0.1), which supported hypotheses 1 and 2. Moreover, organizational identification was significantly and positively related to both job satisfaction (β = 0.386; p < 0.01) and knowledge-seeking behavior (β = 0.205; p < 0.01). In the case of the control variables, only organizational tenure related negatively to job satisfaction, while tenure in the position was negatively associated with knowledge-seeking behavior.
In SEM model 2, I analyzed the moderation effects of organizational identification. The log-likelihood ratio test showed that SEM model 2 fit the data better than model 1. The analysis revealed that the interaction of CB-HR practices and organizational identification was significant and negative for job satisfaction (β = -0.150; p < 0.01) but significant and positive for knowledge-seeking behavior (β = 0.134; p < 0.05), providing support for hypotheses 3 and 4.
To better understand the moderating effect of organizational identification, I computed and plotted the interaction slopes (Figures 2 and 3). The strength of the relationship between CB-HR practices and job satisfaction was weaker for employees strongly identified with the organization (b = 0.205; p < 0.05) than for weakly identified employees (b = 0.483; p < 0.001). On the contrary, the strength of the relationship between HR practices and knowledge-seeking behavior was greater for strong identifiers (b = 0.337; p < 0.01), while for employees with a low level of organizational identification the relationship was insignificant (b = 0.075; p = 0.495).
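The plots in Figures 2 and 3 can be approximated from the reported simple slopes alone; the sketch below draws the four lines using those b values. Because the intercepts are not reported in this excerpt, the lines are drawn through a common arbitrary origin, so only the difference in steepness, not the absolute level of the outcomes, is meaningful.

import numpy as np
import matplotlib.pyplot as plt

# Reported simple slopes at low (-1 SD) and high (+1 SD) organizational identification (OI)
slopes = {
    "Job satisfaction": {"low OI": 0.483, "high OI": 0.205},
    "Knowledge-seeking": {"low OI": 0.075, "high OI": 0.337},
}

x = np.linspace(-1.0, 1.0, 50)   # centered CB-HR practices score
fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharex=True)
for ax, (outcome, lines) in zip(axes, slopes.items()):
    for label, b in lines.items():
        ax.plot(x, b * x, label=f"{label} (b = {b})")
    ax.set_title(outcome)
    ax.set_xlabel("CB-HR practices (centered)")
    ax.legend()
plt.tight_layout()
plt.show()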
Discussion
Through an individual perspective, I aimed to understand how an integrated effect of employees' perception of CB-HR practices and their identification with the organization influences individuals' positive work attitudes and proactive behavior. The results supported the hypothesized relationships. Specifically, I found that employees' perception of CB-HR practices related positively to employees' job satisfaction and proactive knowledge-seeking behavior. Furthermore, this study found that the strength of those relationships depends on the extent of the employee's identification with the organization. In the case of job satisfaction, the organizational identification of employees reduces the role of CB-HR practices, while in the case of proactive knowledge-seeking, it complements it.
Theoretical implications
This work makes several theoretical contributions. Firstly, the findings add to the research on the individual outcomes of employees' perceptions of the CB-HR system (cf. Wang et al., 2020) by examining two types of effects: attitude and behavior. The study found that a positive perception of CB-HR practices strongly affects employees' job satisfaction. The results corroborate previous studies on the positive relationships between employees' perceptions of diverse HR systems (high involvement, high commitment, developmental) and relational attitudes (engagement and affective commitment; Farndale et al., 2011; Boon & Kalshoven, 2014), as well as general job satisfaction (Latorre et al., 2016). In summary, when employees have positive experiences with HR practices focused on building long-term relationships, this strongly influences their positive attitudes toward work. Regarding the behavioral outcomes, the results suggest that a positive employee perception of CB-HR practices increases knowledge-seeking behavior. These results are similar to studies on the relationships between HR and proactive behavior (general proactive behavior, feedback-seeking behavior; Conway & Monks, 2009; Maden, 2015; Marescaux & De Winne, 2023).
With the above-discussed results, the study also adds to research on the behavioral response in the social exchange relationship between organizations and employees, showing that employees reciprocate organizational benefits not only with attitudes, prosocial or extra-role behavior (Cropanzano, Anthony, Daniels, & Hall, 2017) but also with proactive behavior (Singh & Rangnekar, 2020). The employees' willingness to reciprocate the organization motivates them to take risks and source needed knowledge from their peers. Moreover, this study supports the argument that employees will positively reciprocate the benefits received from organizations based on how they perceive HR practices, not how managers implement them. The subsequent contribution pertains to the employee-related factors that influence the effectiveness of HRM practices. Adding to previous studies, which found that employees' abilities (Boon & Kalshoven, 2014) or their attitudes (Alfes, Shantz, Truss, & Soane, 2013) determine the impact of HR practices on employee behavior or attitudes, the present research shows that employees' organizational identification is also a significant boundary condition of HR effectiveness. With that, this study extends the results of Mostafa et al. (2019). Specifically, depending on how intrinsically motivated employees are to act in the organization's best interest, HR practices are more effective for proactive behavior but less effective for job satisfaction.
The final area of contribution refers to the integrated effect of SET and SIT on employee outcomes. This study supports Tavares et al.'s (2016) argument that individuals, based on their level of identification with the organization, choose how to respond to the benefits they perceive from HR practices. I found that the influence of organizational identification on the relationship between HR practices and employee outcomes varies depending on the type of outcome. Thus, we may conclude that employees' organizational identification regulates the content of the social exchange because it changes how individuals conceptualize their roles, expectations, needs, and expected behaviors (Tavares et al., 2016).
Similarly to Mostafa et al. (2019) and Tavares et al. (2016), I found a substitutional effect of social identification and social exchange on employees' attitudes. Specifically, the results showed that strongly identified employees respond with only a minor increase in their job satisfaction to a higher level of CB-HR practices. It means that the combined effect on positive work attitudes of the feeling of attachment to the organization and the sense of being supported by the organization through HR practices is not additive. Specifically, strong organizational identification substitutes for a low level of implementation of CB-HR practices, so the job satisfaction of strongly identified employees is mainly based on fulfilling their need for affection and belonging to the organization. However, a highly positive perception of CB-HR practices will strongly affect the job satisfaction of employees who identify weakly with the organization. Future studies could analyze whether the substitutional influence of identification and social exchange extends to other positive (e.g. work engagement) or negative (e.g. burnout, perceived stress) work attitudes.
In the case of employees' behavior as an outcome of the integration of SET and SIT, the present results align with the findings of Abbasi, Shabbir, Abbas, and Tahir (2021), Tavares et al. (2016), and Hekman, Steensma, Bigley, and Hereford (2009). Those studies, together with the present one, suggest that for proactive knowledge-seeking behavior (and also knowledge-sharing, extra-role behavior, or organizational policy adherence), SET and SIT play complementary roles. This suggests that the motivation driven by social exchange and the desire to reciprocate the organization's support is enhanced by the sense of belonging and the motivation for organizational well-being that come from identification (Blader et al., 2017). As a conditional factor, organizational identification sets off the reciprocation motivation to behave proactively at work. Specifically, the present study showed that individuals who strongly identify with the organization are willing to reciprocate to the organization with a greater frequency of inquiring about work-related knowledge from organizational experts in order to perform their tasks better and learn for the future. On the other hand, low identifiers are not willing to reciprocate with that proactive behavior.
Practical implications
HR managers are interested in positively influencing employees' attitudes and behaviors by implementing policies and practices, fostering relationships, and establishing a unique organizational identity. The study's findings indicate that increasing managerial interventions may not always lead to improved attitudes among all employees. Therefore,
managers should purposefully determine the actions used in forming and maintaining employee-organization psychological relationships depending on the expected behavioral or attitudinal effects.
First, the study's results suggest that managers can influence employees' job satisfaction by facilitating their organizational identification and by employing CB-HR practices, but not simultaneously. Therefore, managers who treat the job satisfaction attitude as the primary expected effect should select the most cost-effective option between implementing HR practices that support high commitment and encourage social exchange, and actions such as developing the prestige of the organization, which supports employees' organizational identification (Weisman, Wu, Yoshikawa, & Lee, 2023). Previous studies add that similar decisions could be made regarding employees' organizational commitment, organizational citizenship behavior, or intention to quit the organization (Mostafa et al., 2019). However, in the case of those MNC subsidiaries that do not have autonomy in designing HR practices and policies, this study suggests that the job satisfaction of employees can be improved by implementing local interventions that increase employees' identification with the subsidiary (e.g. supporting inspirational leadership).
When managers aim to increase employees' proactiveness (knowledge-seeking or extra-role behaviors; Tavares et al., 2016), the most effective approach would be to simultaneously develop commitment-based employee-organization relationships through HR practices and strengthen employees' organizational identification by taking care of organizational values, distinctiveness, and leadership.
Finally, managers should monitor how employees receive the HR practices they implement, because this significantly determines their effectiveness at the employee level. In the context of subsidiaries, when HR practices are standardized across MNCs, local managers play a crucial role in ensuring these practices are understood and implemented effectively within the local context. They achieve this through clear communication of HR practices to staff, providing training, and regularly gathering feedback from employees on HR activities. Thus, the effectiveness of HR practices depends not only on the practices themselves but even more on how employees perceive them and understand their role.
Limitations and future research
Although the presented results are consistent with the theory, the study had several limitations. The first was the study's cross-sectional design, which might cause CMB (as discussed in the previous sections). Future studies could address this problem. Moreover, it could be valuable for the robustness of the results, especially regarding proactive behavior, to collect data from two sources, e.g. an employee and the immediate supervisor or co-workers. Other concerns deriving from cross-sectional data are possible alternative links between the variables. For example, HR practices could affect organizational identification over time, and job satisfaction could affect identification. It would be valuable to design a longitudinal study with repeated measures.
Another limitation was collecting data from a limited number of subsidiaries (four organizations) in the single national cultural context of Central Europe (CE). Scholars performed previous studies on SET and SIT interactions in West European (WE) countries (Germany, Greece, Sweden, and Portugal). Although this study showed some consistency with previous results, there were also some differences. Therefore, it would be helpful to analyze the role of national cultural values in the European context. This seems valuable from a practical perspective because companies from WE decide to launch their subsidiaries in CE. Moreover, a study on a larger number of subsidiaries might shed some light on the possible moderating effect of the extent of localization or standardization of HR practices in MNCs.
Finally, concerning the type of organizations under study, the subsidiaries of MNEs, I analyzed organizational identification only from the local subsidiary perspective. However, individuals might identify with different social entities. Therefore, taking cognizance of team or group identification, or of higher-order identification with the MNE, could add to the results.
Figure 3. Moderation effect of organizational identification on the relationship between commitment-based HR practices and knowledge-seeking behavior
Table 1. Alternative measurement models. Note(s): Δχ² is the difference between the focal model and model 0; CB-HR = commitment-based human resource practices. Source(s): Own elaboration
Table 2. Note(s): Correlations greater than ±0.13 are significant at the 0.05 level. Values on the diagonal in italics are the square roots of the AVE values. Source(s): Own elaboration
"year": 2024,
"sha1": "acd22e8440dd5406a3b449b2127565bb76f95ddf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1108/cemj-05-2023-0217",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "53d55cf29ed45600f37c5a52c050eef4ce00733c",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": []
} |
Optimal Thermoelectric Power Factor of Narrow-Gap Semiconducting Carbon Nanotubes with Randomly Substituted Impurities
We have theoretically investigated thermoelectric (TE) effects of narrow-gap single-walled carbon nanotubes (SWCNTs) with randomly substituted nitrogen (N) impurities, i.e., N-substituted (20,0) SWCNTs with a band gap of 0.497 eV. For such a narrow-gap system, the thermal excitation from the valence band to the conduction band contributes to its TE properties even at room temperature. In this study, the N-impurity bands are treated with both the conduction and valence bands taken into account self-consistently. We found the optimal N concentration per unit cell, $c_{\rm opt}$, which gives the maximum power factor ($PF$) at various temperatures, e.g., $PF=$0.30$\rm{W/K^2m}$ with $c_{\rm opt}=3.1\times 10^{-5}$ at 300 K. In addition, the electronic thermal conductivity has been estimated and turns out to be much smaller than the phonon thermal conductivity, leading to a figure of merit of $ZT\sim 0.1$ for N-substituted (20,0) SWCNTs with $c_{\rm opt}=3.1\times 10^{-5}$ at 300 K.
Introduction
In 1993, Hicks and Dresselhaus proposed that significant enhancement of the thermoelectric (TE) performance of materials could be realized by employing one-dimensional (1D) semiconductors. 1) Single-walled carbon nanotubes (SWCNTs) are of particular interest as high-performance, flexible and lightweight 1D TE materials. [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] Both n- and p-type semiconducting SWCNTs are required to develop SWCNT-based TE devices. A great deal of effort has been put into the carrier doping of SWCNTs using various chemical [5][6][7][8][9][10][11][12] and field-effect doping methods. [13][14][15] In the case of field-effect doping, the present authors (T.Y. and H.F.) have theoretically clarified that an SWCNT exhibits the bipolar TE effect (i.e., the sign inversion of the Seebeck coefficient from positive (p-type) to negative (n-type) upon changing the gate voltage) within the constant-τ approximation and the self-consistent Born approximation. 16) On the other hand, in the case of chemical doping, such as nitrogen (N) and boron (B) doping, the impurity-doped SWCNTs are regarded as strongly disordered systems whose TE properties cannot, in principle, be theoretically described by the conventional Boltzmann transport theory (BTT). The present authors (T.Y. and H.F.) have recently succeeded in describing the TE properties of N-substituted SWCNTs using the linear response theory (the Kubo-Lüttinger formula 21,22)) combined with the thermal Green's function technique. 17) In Ref. 17, the authors reported that a decrease in the N concentration of a (10,0) SWCNT increases both the electrical conductivity and the Seebeck coefficient at room temperature (T = 300 K), and eventually the room-T thermoelectric power factor of the SWCNTs increases monotonically as the N concentration decreases down to an extremely low concentration of 10⁻⁵ atoms per unit cell.
In the case of a (10,0) SWCNT with a small diameter of d_t = 0.78 nm, the influence of thermal excitation from the valence band to the conduction band on the room-T TE effects is negligible because of the large band gap E_g = 0.948 eV. On the other hand, when the diameter is larger, the electron-hole excitation probability (∼ e^{−E_g/k_B T}) becomes much larger than that for a (10,0) SWCNT. For example, the electron-hole excitation probability at T = 300 K for a (20,0) SWCNT with a diameter of d_t = 1.57 nm and a band gap of E_g = 0.497 eV, which are a typical diameter and band gap in experiments, 23) is much larger (3.80 × 10⁷ times larger) than that for a (10,0) SWCNT. The influence of electron-hole excitation on the TE properties of SWCNTs determines the performance of SWCNT-based TE devices.
To clarify the objective of the present study, we briefly summarize the two aforementioned studies. 16,17) In Ref. 16, overall trends of the bipolar TE effects were studied by incorporating both the conduction and valence bands, but the impurity band was not incorporated. In Ref. 17, we focused on the impurity-band effects on the TE properties of (10,0) SWCNTs with N concentrations from c = 10⁻² to c = 10⁻⁵. There, we neglected the presence of the valence band because the electron-hole excitation probability is negligible even at a high temperature of 400 K (see Appendix A). In this situation, the thermoelectric power factor increases with decreasing N impurity concentration (see Fig. 8 in Ref. 17). On the other hand, for the (20,0) SWCNT, the contribution of the valence band to the TE effects cannot be neglected even at ∼300 K because of the small band gap of E_g = 0.497 eV, and it has not yet been clarified. Thus, in this study, we incorporate both the valence and the conduction bands of N-substituted (20,0) SWCNTs and treat the N-induced impurity band in the band gap precisely using the self-consistent t-matrix approximation. As a result, we find that the power factor exhibits a maximum value at a certain concentration of N atoms for a fixed temperature. In addition, we also estimate the temperature dependence of the electronic thermal conductivity λ_e of N-substituted SWCNTs, to be compared with that of phonons, λ_ph, and then estimate the figure of merit ZT.
Linear Response Theory for Thermoelectric Effects
In the presence of both an electric field E and a temperature gradient dT/dz along the z-direction in a material (e.g., the tube axis of an SWCNT), the electrical current density J is generally given, within the linear response with respect to E and dT/dz, by 24)
J = L11 E − (L12/T)(dT/dz). (1)
Here, L11 and L12 are the electrical conductivity and the thermoelectrical conductivity, respectively. Using L11 and L12, the Seebeck coefficient S is expressed as
S = (1/T)(L12/L11), (2)
and the power factor PF, which is one of the figures of merit for TE materials, is described by
PF = L11 S². (3)
L11 and L12 are expressed in terms of the spectral conductivity α(E) as
L11 = ∫ dE α(E) (−∂f/∂E), (4)
L12 = −(1/e) ∫ dE α(E) (E − µ) (−∂f/∂E). (5)
Here, e is the elementary charge, µ is the chemical potential and f(E − µ) = 1/(exp((E − µ)/k_B T) + 1) is the Fermi-Dirac distribution function. S in Eq. (2) and PF in Eq. (3) can thus be determined from Eqs. (4) and (5) once α(E) is known. To the best of our knowledge, the expression of L11 and L12 in Eqs. (4) and (5) was first proposed by Sommerfeld and Bethe in 1933, 25) subsequently by Mott and Jones, 26) and then by Wilson. 27) Recently, the authors (T.Y. and H.F.) applied the Sommerfeld-Bethe (SB) relation expressed as Eqs. (4) and (5) to disordered N-substituted SWCNTs using a simple tight-binding model combined with a self-consistent t-matrix approximation. 17) Also, Akai and co-workers adopted the SB relation to treat disordered metal alloys using density functional theory combined with a coherent potential approximation (CPA). 28,29) More recently, Ogata and Fukuyama clarified the range of validity of the SB relation, even for correlated systems including electron-phonon coupling and electron correlations. 30)
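As a numerical illustration of Eqs. (2)-(5), the sketch below computes L11, L12, S and PF by integrating a given spectral conductivity against the thermal window −∂f/∂E. The α(E) used here is a toy gapped function in arbitrary units (with the charge set to 1), not the self-consistently calculated α(E) of the following sections, so only the structure of the calculation, not the numbers, carries over.

import numpy as np

kB = 8.617333262e-5          # Boltzmann constant in eV/K

def fermi_window(E, mu, T):
    # -df/dE of the Fermi-Dirac function, in 1/eV
    x = np.clip((E - mu) / (kB * T), -700.0, 700.0)
    return 1.0 / (4.0 * kB * T * np.cosh(x / 2.0) ** 2)

def thermoelectric_coefficients(E, alpha, mu, T, e=1.0):
    # Eqs. (4)-(5) with S = L12/(T*L11) and PF = L11*S**2 (Eqs. (2)-(3))
    w = fermi_window(E, mu, T)
    L11 = np.trapz(alpha * w, E)
    L12 = -np.trapz(alpha * (E - mu) * w, E) / e
    S = L12 / (T * L11)
    return L11, L12, S, L11 * S ** 2

# Toy spectral conductivity with a gap of 2*Delta (arbitrary units), NOT the paper's alpha(E)
Delta = 0.25
E = np.linspace(-1.0, 1.0, 8001)
alpha = np.where(np.abs(E) > Delta, np.sqrt(np.clip(np.abs(E) - Delta, 0.0, None)), 0.0)
for T in (100.0, 300.0):
    L11, L12, S, PF = thermoelectric_coefficients(E, alpha, mu=0.2, T=T)
    print(f"T = {T:5.0f} K   L11 = {L11:.3e}   S = {S:+.3e}   PF = {PF:.3e}")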
Effective-Mass Hamiltonian of SWCNTs
In this subsection, we briefly review the electronic structure of semiconducting SWCNTs with zigzag-type edges (z-SWCNTs). In our previous paper, Ref. 16, we gave a one-dimensional Dirac Hamiltonian for the effective Hamiltonian of semiconducting (n, 0) SWCNTs near the conduction (+) and valence (−) band edges, with the energy dispersion
E_{±,q}(k) = ±[(ħ v_q k)² + ∆_q²]^{1/2}, (6)
where k is the wavenumber along the tube-axial direction and q specifies the two pairs of lowest-conduction and highest-valence bands. Here, ∆_q is a half of the band gap (i.e., E_g ≡ 2∆) and v_q is a group velocity; both are determined by the tube index n and the band index q (see Ref. 16), with v_q ∝ −a_z γ_0 cos(πq/n), where γ_0 = 2.7 eV is the hopping integral between nearest-neighbor carbon atoms and a_z = 0.426 nm is the unit-cell length of an (n, 0) SWCNT. 31,32) The energy origin (E = 0 eV) in Eq. (6) is set at the middle of the band gap, E_g. In the small-k region that obeys k² ≪ (∆/ħv)², the energy dispersion in Eq. (6) is reduced to
E_{±,q}(k) ≈ ±[∆_q + ħ²k²/(2m*_q)], (11)
with the effective mass m*_q = ∆_q/v_q² for both the conduction and valence bands. Thus, the effective Hamiltonian is also given by
H_0 = Σ_k E_{+}(k) c†_k c_k + Σ_k E_{−}(k) d†_k d_k, (12)
with Eq. (11), where c†_k and d†_k (c_k and d_k) are the creation (annihilation) operators for the conduction- and valence-band electrons, respectively. The spin and orbital degrees of freedom q are omitted from Eq. (12).
At this point, we take into account a random potential term, added to H_0 in Eq. (12), of the form
V_0 Σ_j c†_j c_j, (13)
to examine the effects of N doping on SWCNTs. Here, V_0 is the attractive potential (V_0 < 0) of an N atom in an SWCNT; for example, V_0 = −0.91 eV for (20,0) SWCNTs (see Sec. 2.3 for details). In Eq. (13), c†_j (c_j) is the creation (annihilation) operator of an electron at the jth impurity position, and Σ_j represents the sum over randomly distributed impurity positions for a fixed average concentration c = N_imp/N_unit, where N_imp is the total number of impurity positions and N_unit is the number of unit cells in a pristine SWCNT of length L.
We also confirm that the small-k condition |ħvk| ≪ ∆ is satisfied within the temperature region 0 < T < 500 K discussed in this paper.
Self Energy due to Impurity Potential
The modification of the thermoelectric effects by randomly distributed impurities will be studied based on the thermal Green's function formalism through self-energy corrections of the Green's functions. In this study, the influence of the random N potential on the conduction- and valence-band electrons is incorporated into the retarded self-energy using the self-consistent t-matrix approximation shown in Fig. 1. 17,33-35)
Fig. 1. Diagram of the self-consistent t-matrix approximation for the retarded self-energy. The crosses, dashed lines and solid double lines with arrows respectively denote the impurity sites, the impurity potential and the one-particle retarded Green's function to be determined self-consistently.
This approximation corresponds to the dilute limit of the CPA for binary alloys. 28,29,36,37) Within the self-consistent t-matrix approximation, the self-energies Σ_{c/v}(E) for the conduction/valence-band electrons in an N-substituted SWCNT are independent of k because of the short range of the impurity potential in Eq. (13), and they are determined by the requirement of self-consistency [Eqs. (14) and (15)], where ± corresponds to c/v, respectively. The k-summation in Eq. (15) can be performed analytically by substituting Eq. (11) into Eq. (15), and we obtain Eq. (16), where Im[±(x − σ_{c/v}(x)) − δ] > 0 and the dimensionless quantities x, σ_{c/v}, δ and v_0 denote the energy, self-energy, gap parameter and impurity potential scaled by the characteristic energy of an SWCNT defined in Eq. (17). From Eqs. (14) and (16), the self-consistent equation for σ_{c/v}(x) is given by Eq. (18), which can also be rewritten as Eq. (19), or as the cubic equation for σ_{c/v},
σ_{c/v}³ + a_2 σ_{c/v}² + a_1 σ_{c/v} + a_0 = 0, (20)
with a_2 = −(x ∓ δ + 2cv_0 ± v_0²/4), a_1 = cv_0{2(x ∓ δ) + cv_0}, and a_0 = −(x ∓ δ)(cv_0)², where the upper/lower sign is for the conduction/valence-band electrons. Equation (20) indicates that, for each energy, three solutions of σ_{c/v} exist; the band edges can be determined using the condition dx/dσ_{c/v} = 0. In addition, we can see in Fig. 2(a) that the impurity band appears just below the conduction band. In the limit c → 0, the binding energy of the bound state can be calculated from a pole of the t-matrix T_c(E) = V_0/(1 − X_c(E)) for the conduction-band electron [Eq. (21)]. Thus, once E_b is given, the attractive potential V_0 can be determined by Eq. (21). For the (20,0) SWCNT, E_b is known to be E_b = 0.068 eV, 38) and eventually the attractive potential is V_0 = −0.91 eV. In other words, the single impurity level is located at E = 0.18 eV (x = 0.060).
Note that, in CPA methods, including the present self-consistent t-matrix approximation, the spectral conductivity α(E) becomes finite once the DOS becomes finite (see Sec. 2.6), since the CPA ignores the effects of Anderson localization due to the interference of scattered waves, which can lead to a finite DOS even in an energy region where the conductivity is zero. It is known that every state is localized in one and two dimensions in the presence of finite scattering. 39) However, once the system size or temperature becomes finite, the effects of Anderson localization are greatly reduced. This situation is assumed in the present study, and hence the band edges in the CPA are used to represent the effective mobility edges.
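Since the coefficients of the cubic Eq. (20) are explicit, the conduction-band self-energy can be obtained numerically by root finding; the sketch below does this with numpy for purely illustrative dimensionless parameters (c, v_0 and δ are not calibrated to the (20,0) SWCNT here), selecting the retarded root with a negative imaginary part when complex solutions exist. In a full calculation the physical root should additionally be tracked continuously in energy, which is omitted in this sketch.

import numpy as np

def sigma_c(x, c, v0, delta=1.0):
    # Conduction-band (upper-sign) cubic of Eq. (20): sigma^3 + a2*sigma^2 + a1*sigma + a0 = 0.
    # Returns the retarded root (Im sigma < 0) when complex roots exist (finite DOS),
    # otherwise the real root with the smallest magnitude (gap region).
    a2 = -(x - delta + 2.0 * c * v0 + v0 ** 2 / 4.0)
    a1 = c * v0 * (2.0 * (x - delta) + c * v0)
    a0 = -(x - delta) * (c * v0) ** 2
    roots = np.roots([1.0, a2, a1, a0])
    complex_roots = roots[np.abs(roots.imag) > 1e-10]
    if complex_roots.size:
        return complex_roots[complex_roots.imag < 0][0]
    return roots[np.argmin(np.abs(roots))].real

# Illustrative dimensionless parameters only; x scans around the unperturbed band edge x = delta
c, v0 = 1e-3, -3.0
for x in np.linspace(0.90, 1.20, 7):
    s = sigma_c(x, c, v0)
    print(f"x = {x:.2f}   sigma_c = {s.real:+.5f} {s.imag:+.5f}j" if np.iscomplex(s)
          else f"x = {x:.2f}   sigma_c = {s:+.5f} (real, gap region)")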
Density of States ρ
Once the self-energies Σ_c(E) and Σ_v(E) are obtained via the above procedure, the density of states (DOS) can be determined as follows. Within the present approximations, the DOS per unit cell for each spin and each orbital consists of two parts,
ρ(E) = ρ_c(E) + ρ_v(E), (22)
where ρ_c(E) is the DOS including the contributions from the conduction and impurity bands, and ρ_v(E) is the DOS of the valence band; they are given by Eqs. (23) and (24), where Im[±(E − Σ_{c/v}(E)) − ∆] > 0 and the signs + and − correspond to ρ_c and ρ_v, respectively. Equation (24) indicates that the DOS is finite in the region of x with complex solutions of σ_{c/v}(x), i.e., in the shaded regions in Figs. 2(a) and 2(b). One caution is that the DOS can be finite even for real solutions of σ_{c/v}. The calculated DOS exhibits a sharp peak near E = ±∆, which implies that the electrons in the conduction and valence electronic states are not significantly disordered by the N impurities. We can see that as c increases, the DOS near E = ±∆ decreases strongly in comparison with that in the conduction and valence bands.
Chemical Potential µ
At T = 0, the chemical potential (Fermi energy) lies in the impurity band. We now explain how to determine the T-dependence of the chemical potential µ(T). Once the DOS in Eq. (24) is obtained, the T-dependence of µ(T) can be determined from the conservation of the total electron density [Eq. (25)], where the left- and right-hand sides indicate the total number of carriers per unit cell of the system at finite T and at zero T, respectively. The factor 4 originates from the spin and orbital degeneracy of z-SWCNTs, and E_v = −|E_v| is the valence-band top, which can be determined by the condition dx/dσ_v = 0. Figure 4 presents the T-dependence of µ(T) for N-substituted (20,0) SWCNTs with c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). The dashed curves are µ(T) calculated without taking the valence band into account, as previously discussed in Ref. 17. We now focus on the case of c = 10⁻⁵ (black solid curve) as an example. The black solid curve shows characteristic changes around T ∼ 80 K and T ∼ 250 K, as indicated by the arrows. In the ionization region of T ≲ 80 K (see Appendix B), µ(T) lies in the impurity band and decreases slowly with an increase in T. As T increases further, the system shows a crossover from the ionization region to the exhaustion region (see Appendix B). In this crossover region of 80 K ≲ T ≲ 250 K, µ(T) decreases rapidly. Above T ∼ 250 K, the black solid curve deviates upward from the black dashed curve and approaches the center of the band gap (E = 0) in the high-T limit, because electrons begin to be thermally excited from the valence band to the conduction band. This temperature region, T ≳ 250 K, is the so-called intrinsic region (see Appendix B). Similar features are evident in the red and blue solid curves. It should be noted that the two characteristic temperatures indicated by the arrows in Fig. 4 shift toward higher T as c increases.
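Numerically, Eq. (25) amounts to a one-dimensional root search for µ at each temperature; the sketch below illustrates this with a toy DOS consisting of two 1D-like bands and a narrow donor-derived impurity peak. The DOS shape, normalizations and the assumed total electron number are placeholders (they are not the self-consistent DOS of the present calculation), so only the qualitative downward drift of µ(T) toward mid-gap should be compared with Fig. 4.

import numpy as np
from scipy.optimize import brentq

kB = 8.617333262e-5  # eV/K

def fermi(E, mu, T):
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / (kB * T), -700.0, 700.0)))

def chemical_potential(E, dos, n_total, T):
    # Solve 4 * int dE rho(E) f(E - mu) = n_total for mu (cf. Eq. (25));
    # the factor 4 is the spin/orbital degeneracy of a zigzag SWCNT.
    def excess(mu):
        return 4.0 * np.trapz(dos * fermi(E, mu, T), E) - n_total
    return brentq(excess, E[0], E[-1])

# Toy DOS per spin and orbital: 1D van Hove edges at +/-Delta and a donor peak at 0.18 eV
Delta, c = 0.2485, 1e-4
E = np.linspace(-1.5, 1.5, 12001)
band = np.where(np.abs(E) > Delta, 0.05 / np.sqrt(np.clip(np.abs(E) - Delta, 1e-4, None)), 0.0)
imp = (c / 2.0) * np.exp(-0.5 * ((E - 0.18) / 0.005) ** 2) / (np.sqrt(2 * np.pi) * 0.005)
dos = band + imp
n_total = 4.0 * np.trapz(dos * (E < 0.0), E) + c      # filled valence band plus c donor electrons
for T in (50.0, 150.0, 300.0, 500.0):
    print(f"T = {T:5.0f} K   mu = {chemical_potential(E, dos, n_total, T):+.3f} eV")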
Spectral Conductivity α(E)
Similar to the expression of the DOS in Eq. (22), the spectral conductivity α(E) can also be divided into two parts within the present approximation,
α(E) = α_c(E) + α_v(E), (26)
where α_c(E) is the spectral conductivity of the conduction- and impurity-band electrons, and α_v(E) is the spectral conductivity of the valence-band electrons. α_c(E) and α_v(E) are given by Eqs. (27) and (28), 16,17,40) where the factor 4 comes from the spin degeneracy and the orbital degeneracy of z-SWCNTs, 31,32) v_k is the group velocity of an electron with wavenumber k, V is the volume of the system, and G_{c/v}(k, E) is the retarded Green's function. Furthermore, within the effective-mass approximation for z-SWCNTs in Eq. (11), the k-summation in Eq. (27) can be performed analytically, and α_{c/v}(x) are given by Eq. (29), where the signs + and − correspond to α_c and α_v, respectively, and A is the cross-sectional area of an SWCNT (A ≡ πd_tδ_w is conventionally used as the effective cross-sectional area of an SWCNT, where d_t = 1.57 nm is the diameter of a (20,0) SWCNT and δ_w = 0.34 nm is the van der Waals diameter of carbon). Figure 5(a) shows α(E) for N-substituted (20,0) SWCNTs with various concentrations of N impurities (c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve)). α(E) is finite wherever the DOS is finite. With a decrease in c, α(E) in the energy regions E ≥ E_c and E ≤ E_v is proportional to 1/c, as shown in Fig. 5(b). This can be understood from the BTT expression α_{c/v}(E) ∝ τ_{c/v} with the relaxation time τ_{c/v}; since τ_{c/v} is proportional to 1/c within the t-matrix approximation, we obtain α_{c/v}(E) ∝ 1/c (see Appendix C). In contrast to the conduction/valence-band energy region, α_c(E) in the impurity-band energy region, which cannot be described by the BTT, is proportional to c, as shown in Fig. 5(c). This is because the average distance between N impurities becomes shorter as c increases.
Electrical Conductivity L11
We now discuss the T-dependence of L11 for N-substituted (20,0) SWCNTs, which can be calculated by substituting Eq. (29) into Eq. (4). Figure 6 shows the T-dependence of L11 for c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). The dashed curves are L11 calculated without taking the valence band into account. 17) Here, we focus on L11 for c = 10⁻⁵ (black solid curve) as an example. In Fig. 6, the black solid curve exhibits two rapid increases, at T ∼ 40 K (see also the inset of Fig. 6) and at T ∼ 250 K. The increase at T ∼ 40 K originates from the change in the transport regime of this system from impurity-band conduction to conduction-band conduction. On the other hand, at T ∼ 250 K, where electrons begin to be excited from the valence band to the conduction band, the black solid curve begins to deviate upward from the dashed curve. This is because the valence-band holes contribute to L11 in addition to the conduction-band electrons at T ≳ 250 K. In the intermediate temperature region of 170 K ≲ T ≲ 250 K, which corresponds to the exhaustion region, the conduction-band electron density is almost constant with T, as shown in Appendix B, and therefore the T-dependence of L11 is weak. The T-dependence of L11 in the exhaustion region is discussed in Appendix D.
In the last part of this section, we consider the other solid curves (the red and blue solid curves in Fig. 6) to clarify the c-dependence of L11. In the extremely low-T region shown in the inset of Fig. 6, L11 increases with c, in contrast to the high-T L11. This is due to the opposite tendencies of the c-dependence of α_c(E) in the conduction-band and impurity-band energy regions [see Figs. 5(b) and 5(c)].
Thermoelectrical Conductivity L12
In this section, we discuss the T-dependence of L12 for N-substituted (20,0) SWCNTs, which can be calculated by substituting Eq. (29) into Eq. (5). Figure 7 shows the T-dependence of L12 for N-substituted (20,0) SWCNTs with c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). The dashed curves are L12 calculated without taking the valence band into account. Here, we focus on the case of c = 10⁻⁵ (black solid curve) as an example. In Fig. 7, the black solid curve shows a rapid increase at T ∼ 30 K (see also the inset of Fig. 7) and deviates downward from the black dashed curve at T ∼ 250 K. The rapid increase at T ∼ 30 K is due to the contribution to L12 from the conduction-band electrons becoming more dominant than that from the impurity-band electrons. It should be noted that the crossover temperature (T ∼ 30 K) of L12 is lower than that of L11 (T ∼ 40 K) shown by the black solid curve in the inset of Fig. 6. This difference implies that L12 is more sensitive to the thermal excitation of carriers than L11, and the difference determines the low-T behavior of the Seebeck coefficient, as explained in Sec. 3.3. On the other hand, the deviation of the black solid curve from the dashed curve at T ∼ 250 K is due to the cancellation between the contributions to L12 from the conduction-band electrons and the valence-band holes. In addition, we discuss the T-dependence of L12 in the intermediate T region of 170 K ≲ T ≲ 250 K in Appendix D.
Before closing this section, we consider the c-dependence of L12. In the extremely low-T region where the impurity-band conduction dominates, as seen in the inset of Fig. 7, |L12| increases with c, in contrast to the high-T |L12|; this originates from the opposite tendencies of the c-dependence of α_c(E) in the conduction-band and impurity-band energy regions [see Figs. 5(b) and 5(c)].
Seebeck Coefficient S
The Seebeck coefficient S can be calculated using the relation S = (1/T)(L12/L11) in Eq. (2). Figure 8 shows the T-dependence of S for N-substituted (20,0) SWCNTs with c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). The dashed curves are S calculated without taking the valence band into account. 17) Here, we explain S with a focus on the case of c = 10⁻⁵ (black solid curve) as an example. In the low-T region, |S| increases sharply near 30 K due to the rapid increase in |L12| near 30 K, as shown in the inset of Fig. 7. At extremely low T, much lower than 30 K, |S| is proportional to T in accordance with the Mott formula, 41) despite the impurity-band conduction that cannot be described by the BTT. 17) As T increases, |S| deviates rapidly upward from the Mott formula, has a large peak at T ∼ 50 K, and then decreases with a further increase in T. The large peak originates from the thermal excitation from the impurity band to the conduction band with a small m*, which is a different mechanism from the large S of 1D semiconductors with a pudding-mold-type band, i.e., a large m*. 42,43) The T-dependence of |S| can be explained in terms of the T-dependence of L12/L11. As shown in the inset of Fig. 6, L11 begins to increase sharply at T ∼ 40 K. The |S| peak at T ∼ 50 K indicates that L12/L11 is proportional to T there, i.e., dS/dT = 0. Beyond T ∼ 50 K, the T-dependence of L12/L11 becomes weaker than T-linear and |S| decreases with T. In the region of 170 K ≲ T ≲ 250 K, which corresponds to the exhaustion region, |S| is insensitive to T because the conduction-band electron density is almost constant. Above T ∼ 250 K, entering the intrinsic region, |S| decreases rapidly and approaches zero in the high-T limit, where the chemical potential µ is located at the center of the band gap, as shown in Fig. 4. This is because the contribution to S from the conduction-band electrons is perfectly cancelled by that from the valence-band holes in the limit T → ∞. Similar features are evident in the red and blue solid curves.
Power Factor PF
The power factor (PF) can be calculated using the relation PF = L11 S² in Eq. (3). Figure 9 shows the T-dependence of the PF for N-substituted (20,0) SWCNTs with c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). Here, we explain the PF with a focus on the case of c = 10⁻⁵ (black solid curve) as an example. Figure 9 shows that the PF increases rapidly from T ∼ 30 K, at which S rises sharply, as shown in Fig. 8. In the exhaustion region of 170 K ≲ T ≲ 250 K, the PF depends only weakly on T because the T-dependences of L11 and S are weak in this region. When T exceeds approximately 250 K, entering the intrinsic region, the PF drops rapidly and goes to zero due to the sharp decrease in S. A similar T-dependence of the PF can be observed for c = 10⁻⁴ (red solid curve) and 10⁻³ (blue solid curve), as shown in Fig. 9. In addition, the characteristic temperatures at which the solid curves deviate from the dashed curves shift toward higher T as c increases. Due to this shift, the optimal concentration c_opt that gives the maximum PF depends on T.
To show c_opt at a fixed temperature, we present the c-dependence of the PF within 10⁻⁶ ≤ c ≤ 10⁻² at various temperatures in Fig. 10(a). At 200 K, the PF increases monotonically with a decrease in c within the present range of c. This is because the thermal excitation from the valence band to the conduction band is negligible, and the monotonic increase of the PF is given by PF ∝ (ln c)² (c ≪ 1), as discussed for N-substituted (10,0) SWCNTs in our previous report. 17) In contrast, at T = 250 K, 300 K, 350 K and 400 K, the PFs exhibit maximum values at c_opt = 4.7 × 10⁻⁶, 3.1 × 10⁻⁵, 1.2 × 10⁻⁴ and 3.4 × 10⁻⁴, respectively (see Table I). Thus, we can see that c_opt increases with increasing T. In order to clarify what determines the value of c_opt, we show the PF as a function of c/n_hole in Fig. 10(b). Here, n_hole is the number of valence-band holes, defined by
n_hole = 4 ∫_{−∞}^{E_v} dE ρ_v(E) [1 − f(E − µ)], (30)
where E_v (< 0) is the valence-band top and the factor 4 comes from the spin degeneracy and the orbital degeneracy. As shown in Fig. 10(b), each PF curve exhibits a peak at c_opt/n_hole ∼ 20, which means that the PF becomes maximum when the N concentration reaches about 20 times the number of thermally excited holes. Note that this condition (c_opt/n_hole ∼ 20) is not satisfied for N-substituted (10,0) SWCNTs with c = 10⁻²-10⁻⁵ at T ≤ 400 K. As a result, the PF does not show a maximum for c = 10⁻²-10⁻⁵ and T ≤ 400 K, as shown in Appendix A.
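The rule of thumb c_opt ≈ 20 n_hole can be checked directly once ρ_v(E) and µ(T) are known, since Eq. (30) is a single integral over the valence band; the snippet below evaluates it for a toy valence-band DOS with illustrative values of µ (neither the DOS nor the µ values are those of the self-consistent calculation).

import numpy as np

kB = 8.617333262e-5  # eV/K

def n_hole(E, rho_v, mu, T, E_v):
    # Eq. (30): thermally excited valence-band holes per unit cell;
    # rho_v is the valence-band DOS per spin and orbital, hence the factor 4.
    f = 1.0 / (1.0 + np.exp(np.clip((E - mu) / (kB * T), -700.0, 700.0)))
    mask = E <= E_v
    return 4.0 * np.trapz(rho_v[mask] * (1.0 - f[mask]), E[mask])

# Toy valence-band DOS with its top at E_v = -Delta (illustrative numbers only)
Delta = 0.2485
E = np.linspace(-1.5, 0.0, 6001)
rho_v = np.where(E < -Delta, 0.05 / np.sqrt(np.clip(-E - Delta, 1e-4, None)), 0.0)
for T, mu in ((250.0, 0.10), (300.0, 0.08), (400.0, 0.05)):
    nh = n_hole(E, rho_v, mu, T, -Delta)
    print(f"T = {T:3.0f} K   n_hole = {nh:.2e}   20*n_hole = {20 * nh:.2e}")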
Electronic Thermal Conductivity λ_e
Thermal conductivity is known to be due to electrons and phonons. The former, the electronic thermal conductivity, is defined as
λ_e = [L^(e)_22 − L12 L21/L11]/T, (31)
where L^(e)_22 is given by Eq. (32) in the present case of N-substituted SWCNTs (see Appendix E). Figure 11 shows the T-dependence of L^(e)_22 for N-substituted (20,0) SWCNTs with c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). As seen in Fig. 11, L^(e)_22 increases monotonically with increasing T for all c. At a fixed T, L^(e)_22 increases as c decreases, except at extremely low T where the impurity-band conduction is dominant. This is because α(E) in the conduction band, which contributes at finite T through thermal excitations, is proportional to 1/c, as shown in Fig. 5(b). On the other hand, L^(e)_22/c is proportional to T² and is independent of c in the limit of low T, as seen from the Sommerfeld expansion L^(e)_22 ≈ (π²/3)(k_B T/e)² α(E_F), with α(E_F) ∝ c at the Fermi energy E_F lying in the impurity band (see Fig. 5(c)). Figure 12 displays the T-dependence of λ_e for c = 10⁻³ (blue solid curve), 10⁻⁴ (red solid curve) and 10⁻⁵ (black solid curve). Similar to L^(e)_22, λ_e increases as T increases and as c decreases, except at extremely low T, where λ_e/c is proportional to T and is independent of c, as shown in the inset of Fig. 12. This can be understood from the
Sommerfeld expansion as
together with α(E_F) ∝ c as shown in Fig. 5(c). As seen from Fig. 9, the contribution of the second term L_{12}L_{21}/(T L_{11}) = PF × T in Eq. (31) is negligible in comparison with the first term L^{(e)}_{22}/T, except in the exhaustion region. Figure 13 illustrates the low-T behavior of the electronic contribution to the Lorenz ratio, L_e(T) ≡ λ_e(T)/(T L_{11}(T)), scaled by the universal Lorenz number L_0 ≡ π^2 k_B^2/(3e^2), for c = 10^{-3} (blue solid curve), 10^{-4} (red solid curve) and 10^{-5} (black solid curve). All curves in Fig. 13 approach unity in the low-T limit. This means that the Wiedemann-Franz law holds even for the impurity-band conduction. As T increases, L_e(T) deviates downward from L_0 in proportion to T^2. From Fig. 12, we note that λ_e is much smaller than the phonon thermal conductivity, λ_ph. It is known that the room-temperature λ_ph of SWCNTs without N impurities is of the order of 1,000 W/(K m), [44-51] which is comparable to that of N-substituted SWCNTs with dilute N concentration. 52,53) Hence, in the figure of merit ZT = (PF/λ)T, the PF is determined by electrons while λ is determined by phonons, so that ZT ≈ (PF/λ_ph)T; the analysis of the optimal condition for the PF therefore applies also to ZT, resulting in ZT ∼ 0.1 for N-substituted (20,0) SWCNTs with c_opt = 3.1 × 10^{-5} at 300 K.
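Since λ_e ≪ λ_ph, the figure of merit reduces to ZT ≈ (PF/λ_ph)T, as stated above. The sketch below simply evaluates this estimate; the PF value used is an assumed placeholder chosen so that ZT comes out near the quoted ∼0.1 at 300 K with λ_ph = 1,000 W/(K m), and is not a number given explicitly in the text.

```python
# Sketch: figure of merit when the thermal conductivity is phonon-dominated,
#   ZT ~ (PF / lambda_ph) * T.
# lambda_ph = 1000 W/(K m) is the order of magnitude quoted for SWCNTs;
# PF_assumed is a placeholder (not given explicitly in the text) chosen so
# that ZT comes out near the quoted value ~0.1 at room temperature.

T = 300.0            # K
lambda_ph = 1000.0   # W/(K m), phonon thermal conductivity (order of magnitude)
PF_assumed = 0.33    # W/(K^2 m), assumed power factor near c_opt (placeholder)

ZT = PF_assumed * T / lambda_ph
print(f"ZT ~ {ZT:.2f}")   # ~0.1, consistent with the estimate in the text
```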
Summary
The thermoelectric effects of N-substituted SWCNTs were investigated using the Kubo-Lüttinger theory combined with the Green's function technique. We have clarified the temperature dependence of the electrical conductivity L_{11} and the thermoelectric conductivity L_{12}, as well as the Seebeck coefficient S and power factor PF, over a wide temperature range from the ionization region to the intrinsic region through the exhaustion region. S and PF decrease rapidly toward zero around a crossover temperature from the exhaustion region to the intrinsic region, and the crossover temperature shifts toward higher temperature with an increase in the impurity concentration. Due to this doping dependence of the shift of the crossover temperature, the optimal impurity concentration c_opt that gives the maximum PF changes with temperature. As shown in Table I, we have determined c_opt at various temperatures for N-substituted (20,0) SWCNTs. In addition, using the Sommerfeld-Bethe expression for L^{(e)}_{22}, we elucidated the temperature dependence of λ_e ≡ (L^{(e)}_{22} − L_{12}L_{21}/L_{11})/T and showed that the Wiedemann-Franz law for λ_e/L_{11} is valid in the limit of low T even for the impurity-band conduction. The optimal condition for the PF applies also to the figure of merit ZT because the electronic thermal conductivity λ_e is much smaller than the phonon thermal conductivity λ_ph. We estimate ZT ∼ 0.1 for N-substituted (20,0) SWCNTs with c_opt = 3.1 × 10^{-5} and λ_ph = 1,000 W/(K m) at room temperature.
Finally, we note that the results obtained in the present study can also be applied to boron-substituted SWCNTs by replacing the attractive impurity potential with a repulsive one.
Appendix A: Contribution of the Valence Band to the PF of N-Substituted (10,0) SWCNTs

In this Appendix, we discuss the contribution of thermal excitation from the valence band to the conduction band to the PF of an N-substituted (10,0) SWCNT. Figure A·1 shows the c-dependence of the PF of the N-substituted (10,0) SWCNT at T = 200 K, 300 K and 400 K. The solid curves indicate PFs for systems including N-impurity bands together with both conduction and valence bands treated self-consistently, as discussed in Sec. 2.3, while the dashed curves are the PFs shown in Fig. 8(c) of the previous paper, 17) where the valence band is not incorporated into the electronic states of N-substituted SWCNTs. As seen in Fig. A·1, the solid curves fit the dashed curves for c ≳ 10^{-5} even at T = 400 K. This means that the valence band does not contribute to the PF of the N-substituted (10,0) SWCNT. In addition, all the solid curves increase with decreasing c within c ≥ 10^{-6} and do not exhibit a maximum, which is different from the N-substituted (20,0) SWCNT discussed in this paper.
Appendix B: Temperature Dependence of Electron Number in Conduction Band
Figure B·1 shows the T-dependence of the conduction-band electron number n_c per unit cell for each spin and each orbital of the N-substituted (20,0) SWCNT with c = 10^{-5}. Here, n_c is determined by integrating the occupied conduction-band states above the conduction-band bottom E_c, which can be determined by the condition dx/dσ_c = 0. The T-dependence of n_c has three regions: the ionization region at low T, the exhaustion region at intermediate T and the intrinsic region at high T. In the ionization region, most N atoms (donors) still capture valence electrons, i.e., they are not thermally ionized, and the T-dependence of n_c is given by n_c ∼ exp(−E_b/k_BT) (see Fig. B·1). In the exhaustion region, most N atoms are thermally ionized, while the valence-band electrons are still frozen out. In this case, n_c is almost equal to the density of N atoms and is independent of T (see Fig. B·1). In the intrinsic region, electrons are thermally excited from the valence bands to the conduction bands, and the T-dependence of n_c is given by n_c ∼ exp(−∆/k_BT) (see Fig. B·1). Analytical expressions for L_{11} and L_{12}, obtained with the chemical potential μ = k_BT ln[(c/2)√(πt/(k_BT))], have been confirmed to be in good agreement with the numerical results based on the Kubo-Lüttinger theory shown in Figs. 6 and 7.
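A rough way to visualize the three regions of n_c(T) described above is sketched below. The donor binding energy E_b, gap scale Δ, donor concentration c and prefactors are illustrative placeholders (not values from the paper), and the crude sum of an activated donor channel and an activated intrinsic channel stands in for the self-consistent determination used in the text.

```python
import numpy as np

# Schematic sketch of the three regions of the conduction-electron number n_c(T):
#   ionization region:  n_c ~ exp(-E_b / kT)    (donors not yet ionized)
#   exhaustion region:  n_c ~ c                 (all donors ionized, valence band frozen out)
#   intrinsic region:   n_c ~ exp(-Delta / kT)  (valence electrons excited across the gap)
# E_b, Delta, c and the prefactors are illustrative placeholders, NOT the paper's values.

k_B = 8.617e-5        # eV/K
E_b = 0.01            # eV, donor binding energy (placeholder)
Delta = 0.5           # eV, gap scale for intrinsic excitation (placeholder)
c = 1e-5              # donor (N) concentration per unit cell (placeholder)

def n_c(T):
    donor = c * np.exp(-E_b / (k_B * T))         # activated out of the impurity level, saturates at c
    intrinsic = 1.0 * np.exp(-Delta / (k_B * T)) # activated across the gap
    return donor + intrinsic

for T in (50, 150, 250, 400, 600):
    print(f"T = {T:3d} K  n_c ~ {n_c(T):.3e}")
# Low T: exponential rise (ionization); intermediate T: plateau near c (exhaustion);
# high T: second exponential rise once intrinsic excitation dominates.
```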
Appendix E: Electronic Thermal Conductivity
Under the electric field E and the temperature gradient dT/dz, the thermal current density J_Q is expressed, within linear response in E and dT/dz, in terms of the transport coefficients L_{21} and L^{(e)}_{22}. Here, L_{21} is called the electrothermal conductivity and is connected to L_{12} by Onsager's reciprocal relation L_{21} = L_{12}. 54) According to Kubo's linear response theory, L^{(e)}_{22} can be obtained from the J_Q-J_Q correlation function χ_{22}(iω_λ), where β ≡ 1/(k_BT) is the inverse temperature, T_τ is the imaginary-time-ordering operator, ⟨···⟩ denotes the thermal average in equilibrium, and V is the volume of the system, with J_Q being the thermal current density. In the present case, where electrons are scattered by elastic impurities, J_Q is written in terms of v^{(±)}_k = ±ℏk/m*, u(q) = V_0 Σ_j e^{−iqR_j}/N, and the imaginary-time-evolved operators Φ_k(τ) built from e^{τH} c^†_k e^{−τH}, e^{τH} d^†_k e^{−τH} and their conjugates (Eq. (E·7)).
Substituting Eq. (E·6) into Eq. (E·5) and performing a procedure similar to that of Jonson and Mahan (Ref. 40), we can straightforwardly obtain the Sommerfeld-Bethe (SB) type expression for L^{(e)}_{22} in Eq. (32). | 2021-01-29T02:16:24.769Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "1424e85b5e4f29893347f1b61947d6271554e8a8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7566/jpsj.90.044702",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "e2c15504abc16bb883bd9e9be6894b809695748e",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
55406254 | pes2o/s2orc | v3-fos-license | Evidence for chiral logarithms in the baryon spectrum
Using precise lattice QCD computations of the baryon spectrum, we present the first direct evidence for the presence of contributions to the baryon masses which are non-analytic in the light quark masses; contributions which are often denoted "chiral logarithms". We isolate the poor convergence of SU(3) baryon chiral perturbation theory to the flavor-singlet mass combination. The flavor-octet baryon mass splittings, which are corrected by chiral logarithms at next-to-leading order in SU(3) chiral perturbation theory, yield baryon-pion axial coupling constants D, F, C and H consistent with QCD values; the first evidence of chiral logarithms in the baryon spectrum. The Gell-Mann--Okubo relation, a flavor-27 baryon mass splitting, which is dominated by chiral corrections from light quark masses, provides further evidence for the presence of non-analytic light quark mass dependence in the baryon spectrum; we simultaneously find the GMO relation to be inconsistent with the first few terms in a Taylor expansion in m_s - m_l, which must be valid for small values of this SU(3) breaking parameter. Additional, more definitive tests of SU(3) chiral perturbation theory will become possible with future, more precise, lattice calculations.
Introduction
Lattice QCD calculations are now performed with light quark masses at or near their physical values [1], opening a new era for detailed comparisons with chiral perturbation theory (χPT). While this program has been very successful for mesons [2], the application to baryon properties has been fraught with significant challenges, mainly from issues of convergence of the perturbative expansion. Recent analysis suggests the convergence of the two-flavor expansion for the nucleon mass is limited to m_π ≲ 300 MeV [3,4]. The SU(3) chiral expansion has similar but more severe problems. In heavy baryon χPT [5] (HBχPT), the small expansion parameter is given by ε ∼ m_K/Λ_χ, whereas for the pion-octet χPT, the small expansion parameter is ε_φ ∼ ε². Several offshoots of HBχPT have been developed in an effort to improve the convergence of the theory [6]. We review a new application of an old idea: combining the large N_c expansion with the SU(3) chiral expansion [7]. This approach has a few formal advantages over the other methods. In the large N_c limit, there is an extra symmetry, the contracted spin-flavor symmetry, allowing for an unambiguous field-theoretic method to include the low-lying decuplet baryon resonances in the theory; in the large N_c limit, the spin-1/2 and -3/2 baryons become degenerate and infinitely heavy.
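To give a sense of the sizes involved, the snippet below evaluates the two expansion parameters mentioned above using m_K ≈ 495 MeV and a chiral-symmetry-breaking scale Λ_χ of roughly 1 GeV; these particular numerical values are standard reference choices, not numbers quoted in this text.

```python
# Rough size of the SU(3) expansion parameters mentioned above.
# m_K ~ 495 MeV and Lambda_chi ~ 1 GeV are standard reference values,
# not numbers taken from this text.

m_K = 0.495          # GeV, kaon mass
Lambda_chi = 1.0     # GeV, chiral symmetry breaking scale (order of magnitude)

eps = m_K / Lambda_chi      # baryon (HBchiPT) expansion parameter
eps_phi = eps ** 2          # mesonic expansion parameter, eps_phi ~ eps^2

print(f"eps     ~ {eps:.2f}")      # ~0.5: slowly converging baryon expansion
print(f"eps_phi ~ {eps_phi:.2f}")  # ~0.25: better-behaved meson expansion
```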
Having a controlled expansion is necessary but not sufficient to claim success. The principal predictions of χPT are the contributions to hadronic observables which are non-analytic in the light quark masses, arising from pion-octet loops; these often contribute ln(m²_{K,π,η}) terms to hadronic observables and are commonly referred to as chiral logs. These contributions cannot arise from a finite number of local counterterms but only from the long-range contributions of the light pion-octet degrees of freedom, the pion cloud. Isolating this predicted light quark mass dependence in lattice QCD results has been a major challenge for many years. The definitive identification of these contributions is hailed as a signal that the up and down (and strange) quarks are sufficiently light that the lattice results can be described accurately by χPT. This task has proved to be very challenging, as often these non-analytic light quark mass contributions are subleading, or masked by other systematics.
We report on the first substantial and direct evidence of the presence of non-analytic light quark mass dependence in the baryon spectrum, work which was performed in Ref. [8].
Evidence for non-analytic light quark mass dependence
In Ref. [9], linear combinations of the ground-state baryon spectrum were constructed to isolate various operators in the combined SU(3) and large N_c expansions. These mass relations were compared with lattice calculations, and it was demonstrated that the predicted mass hierarchy persists over a large range of quark masses [10]. Here, we focus on three of these mass relations in addition to the Gell-Mann-Okubo relation, and provide evidence for the presence of non-analytic light quark mass dependence in the baryon spectrum. The heavy baryon Lagrangian was formulated in the 1/N_c expansion in Ref. [11], providing relations amongst the various LECs. In particular, the leading quark-mass-dependent operators satisfy the relations in Eq. (2.1) at subleading order in 1/N_c, while the axial couplings satisfy the relations in Eq. (2.2) at leading order in 1/N_c, significantly reducing the number of LECs to be determined in the analysis. The numerical data are taken from Ref. [3], which is a mixed-action lattice calculation with domain-wall valence fermions on the dynamical MILC configurations. While the relevant mixed-action EFT is known [12], the lattice results exist at only a single lattice spacing. We therefore restrict our analysis to that of the continuum HBχPT.
Mass relation R_1
We begin with the flavor-singlet mass relation R_1 of Ref. [9]. To NLO in the chiral expansion and using the large N_c operator relations (2.1) and (2.2), R_1 takes the form given in Ref. [8], encoding the leading non-analytic light quark mass dependence in the baryon spectrum. Both LO (a_1 = 0) and NLO fits were performed to the lattice data, for a variety of ranges of the light quark masses. The NLO analysis yielded the LECs reported in Ref. [8]; Figure 1 displays representative fits. The lower error band is obtained by setting m_s^latt → m_{s,phy}^latt, determined from an NLO χPT [13] analysis of the pion and kaon spectrum. The small value obtained for the axial coupling a_1 signals a lack of contributions from the non-analytic light quark mass effects, consistent with the SU(2) chiral extrapolation analysis of the nucleon mass [3,4], but inconsistent with their phenomenological determination [14] or direct computation from lattice QCD [15]. One is left to conclude that SU(3) HBχPT does not provide a controlled perturbative expansion for R_1 over the range of quark masses explored in this work.
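Schematically, the LO-versus-NLO comparison described here amounts to fitting the lattice points with and without a term that is non-analytic in the quark mass. The sketch below shows only the generic fit machinery: the functional forms are a generic HBχPT-like ansatz (analytic m_φ² term plus a non-analytic m_φ³ term), and the data arrays are synthetic placeholders included solely to make the snippet runnable; the actual analysis uses the expressions and lattice results of Refs. [3, 8].

```python
import numpy as np
from scipy.optimize import curve_fit

# Schematic LO vs NLO fit of a baryon mass combination.
# Generic HBchiPT-like ansatz (NOT the paper's exact expressions):
#   LO :  R(m_phi) = b0 + b1 * m_phi^2              (analytic in the quark mass)
#   NLO:  R(m_phi) = b0 + b1 * m_phi^2 + a1 * m_phi^3   (adds the non-analytic term)
# The "data" arrays below are synthetic placeholders, NOT the lattice results of Ref. [3].

m_phi = np.array([0.25, 0.35, 0.45, 0.55, 0.65])   # GeV, placeholder meson masses
R_data = np.array([1.15, 1.22, 1.31, 1.42, 1.55])  # GeV, placeholder mass combination
R_err = np.full_like(R_data, 0.01)                 # placeholder uncertainties

def R_LO(m, b0, b1):
    return b0 + b1 * m**2

def R_NLO(m, b0, b1, a1):
    return b0 + b1 * m**2 + a1 * m**3   # m^3: leading non-analytic quark-mass dependence

p_LO, _ = curve_fit(R_LO, m_phi, R_data, sigma=R_err, absolute_sigma=True)
p_NLO, _ = curve_fit(R_NLO, m_phi, R_data, sigma=R_err, absolute_sigma=True)

print("LO  fit (b0, b1)     :", p_LO)
print("NLO fit (b0, b1, a1) :", p_NLO)   # a1 ~ 0 would signal no non-analytic contribution
```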
Mass relations R_3 and R_4
We next examine the flavor-octet mass relations R_3 and R_4 of Ref. [9]. These mass relations vanish in both the SU(3) chiral and vector limits, making them more sensitive to the non-analytic light quark mass dependence appearing at NLO in the chiral expansion. Their NLO expressions follow from the chiral expansion together with the large N_c operator relations (2.1) and (2.2). The LO expressions (a_1 = 0) fail to describe the numerical results; it is clear that higher-order contributions are necessary for the extrapolations of these mass relations. At NLO, the analysis of R_3 and R_4 becomes correlated. The full covariance matrix is constructed as described in Ref. [10]. The significance of this is prominent; the large value of the axial coupling is strong evidence for the presence of the non-analytic light quark mass dependence in these mass relations. Further, this is the first time an analysis of the baryon spectrum has returned values of the axial couplings consistent with phenomenology.¹ However, caution is in order. Examining the resulting contributions to R_3 and R_4 from LO and NLO separately, one observes a delicate cancellation between the different contributions; see Fig. 2. Further studies are needed with more numerical data, sufficient to also constrain the subleading large N_c axial coefficient a_2 as well as the NNLO contributions.
Gell-Mann-Okubo Relation
The last mass relation we study is the flavor-27 Gell-Mann-Okubo relation ∆_GMO. It is interesting to note that while the SU(3) chiral expansion for the baryon spectrum is not convergent, it was found that the volume dependence of the octet baryon masses is consistent with SU(3) HBχPT. Analysis of the volume dependence yielded a large value of g_{πN∆} (C) with g_A fixed to its physical value [16]. The first non-vanishing contribution to ∆_GMO comes from the NLO chiral loops, which are non-analytic in the light quark masses. For this reason, the GMO relation is of particular interest to study with lattice QCD. We extend the previous analysis [17,3] in a few important ways. Close to the SU(3) vector limit, the GMO relation can be described by a Taylor expansion in m_s − m_l. The leading term proportional to (m_s − m_l) must vanish as it transforms as a flavor-8. The first non-vanishing contribution is equivalent to a next-to-next-to-leading order (NNLO) contribution in HBχPT, and the (m_s − m_l)³ contribution is equivalent to an NNNNLO HBχPT contribution. We demonstrate that these first few terms in the Taylor expansion about the SU(3)-vector limit are inconsistent with the lattice data as m_l^latt → 0. We extend the previous analysis to include the NNLO HBχPT contributions, with the axial couplings constrained by the analysis of R_3 and R_4. It is found that only NNLO HBχPT, which is dominated by the non-analytic light quark mass contributions, can naturally accommodate the strong light quark mass dependence observed in the numerical results. At NLO in the chiral expansion and using the large N_c operator relations (2.1) and (2.2), ∆_GMO takes the form given in Eq. (2.13). The full NNLO formula, determined from Ref. [18], can be found in Ref. [8]. In Fig. 3, four plots are displayed. The first plot (upper left) is the result of an NLO analysis of the GMO formula, allowing the axial coupling to be determined from the data, resulting in a small but non-zero value for a_1. The second plot (upper right) displays the predicted value of the GMO relation from NLO, taking the determination of a_1 from the analysis of R_3 and R_4. The third plot (bottom left) shows the result of a Taylor expansion about the SU(3) vector limit, fitting the first two non-vanishing terms. Finally, the NNLO analysis is displayed, using the value of a_1 determined from R_3 and R_4 (bottom right). Only the NNLO analysis is consistent with the values of the numerical data over the full range of light quark masses, in particular the steep rise observed as m_l^latt → 0, as well as the value of the axial coupling a_1 determined from phenomenology. This is further evidence for non-analytic light quark mass dependence in the baryon spectrum.
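For orientation, the experimental size of the GMO violation can be computed directly from the measured octet masses. The sketch below uses the common normalization ∆_GMO = (3M_Λ + M_Σ − 2M_N − 2M_Ξ)/4, which may differ from the normalization used in this text, and standard isospin-averaged masses rather than any values quoted here.

```python
# Experimental size of the Gell-Mann--Okubo violation, using the common normalization
#   Delta_GMO = (3 M_Lambda + M_Sigma - 2 M_N - 2 M_Xi) / 4 .
# The normalization and the isospin-averaged masses below are standard reference
# values, not numbers quoted in this text; the paper's convention may differ.

M_N, M_Lambda, M_Sigma, M_Xi = 938.9, 1115.7, 1193.2, 1318.3  # MeV, isospin averages

delta_GMO = (3 * M_Lambda + M_Sigma - 2 * (M_N + M_Xi)) / 4
print(f"Delta_GMO ~ {delta_GMO:.1f} MeV")  # a few MeV: a small, loop-dominated splitting
```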
Conclusions
We have presented the first substantial evidence for the presence of non-analytic light quark mass dependence in the baryon spectrum, with further analysis details in Ref. [8]. This was achieved by comparing the predictions of HBχPT, combined with the large N_c expansion, to relatively high-statistics lattice computations of the octet and decuplet baryon spectrum. An analysis of the mass relations R_3 and R_4 provided, for the first time, values of the axial couplings which are consistent with the phenomenological determination, signaling significant contributions from non-analytic light quark mass dependence in R_3 and R_4: utilizing the leading large N_c expansion, D = 0.70(5), F = 0.47(3), C = −1.4(1), H = −2.1(2).
It was further demonstrated that the Gell-Mann-Okubo relation is inconsistent with the first two non-vanishing terms in a Taylor expansion about the SU(3) vector limit, and that the steep rise in the numerical data, observed as m_l^latt → 0, can only be described by the NNLO heavy baryon χPT formula, which is dominated by chiral loop contributions. Taken together, these observations provide the first significant evidence for the presence of non-analytic light quark mass dependence in the baryon spectrum.
However, there are several known systematics which were not addressed in the present article and require future, more precise lattice results: the numerical data used [3] exist at only a single lattice spacing, while a continuum χPT analysis was performed; there may be contamination from finite volume effects [16]; the convergence issues need further examination; more precise numerical results are needed to explore the mass relations R_5-R_8, which are more sensitive to non-analytic light quark mass dependence; results with smaller values of the light quark mass are desirable; and the strange quark mass used in this work is known to be ∼25% too large [19].
"year": 2011,
"sha1": "38180b6ab95631fb34f8366d527c24809a10e266",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/139/114/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "38180b6ab95631fb34f8366d527c24809a10e266",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
35743701 | pes2o/s2orc | v3-fos-license | Mechanism Analysis on Hydraulic Fracture Initiation under the Influence of Induced Stress in Shale Gas
To study the fracture network propagation mechanism in shale gas reservoirs and determine the influence of induced stress from the growth of multiple fractures, this paper describes the crack initiation pattern in a shale reservoir based on a core laboratory experiment and the volume fracturing concept. In accordance with linear elastic fracture mechanics and based on the mathematical model of the induced stress field with a single fracture as reference, the surrounding-rock stress equation for multiple fractures in a horizontal well was deduced. Prediction models of fracture pressure under different initiation patterns were established. An induced stress correction factor was proposed to simplify and correct the prediction models. Results demonstrate that the mechanical parameters of rock directly affect the fracture initiation pattern in shale reservoirs. Tensile failure on the bedding surface primarily occurs, along with shear slippage damage, for brittle rocks, whereas shear was mainly observed in plastic rocks. The morphology and distribution of fractures are closely related to the induced stress field. Simulation results show that induced stress is positively correlated with fracture height and negatively correlated with fracture interval. A dimensionless fracture interval between one and two is the "golden window" for creating a fracture network in fracturing design. Minimum induced stress occurs at 30° and 150° to the minimum horizontal stress direction. The study contributes significantly to the research on crack initiation laws and the optimization of fracture design.
Introduction
Shale gas exploration in China is rapidly developing, and multi-stage fracturing technology has revolutionized this field [1]. However, studies on shale gas reservoirs indicate that subsurface conditions are complex in new blocks. The substantial depth, complex geological background, evident variance of rock constituents, characteristics of the mechanical parameters of rock, and crustal stress in different zones cause difficulties in fracturing design and field operation [2], [3]. Research on how a rock constitutive model affects the mechanism of fracture propagation is lacking. Induced stress caused by open cracks is the key factor that affects fracture growth and the formation of a fracture network. The coupling of in-situ stress and induced stress also causes difficulties. Hence, developing a precise fracturing model is crucial to optimize technical parameters and enhance stimulation effects in shale gas reservoir exploration.
The spatial distribution and propagation mechanism of hydraulic fractures were analyzed in this paper by employing a core test in a laboratory and conducting theoretical analysis. The initiation patterns associated with different rock mechanical characteristics were discussed from the meso-mechanical perspective. Based on elastic constitutive theory, the principle of superimposed stress was applied in developing the circumferential stress analytical model.
State of the art
Hydraulic fracturing technology is based on linear elastic fracture mechanics in classical theory. An artificial fracture is hypothesized to be an open mode and shaped as a symmetrical bi-wing plane fissure. The fracture toughness equation is a criterion for fracture extension [4]. However, micro-seismic monitoring results determined that hydraulic fractures (HF) develop as a complex fracture network in the plane and lengthwise, not in a symmetrical shape [5]. Shearing slippage damage of natural fractures (NF) tends to occur when NF interact with HF. Differential principal stress, approaching angle, and operation pressure were the main factors that cause this damage [6]. Renshaw and Pollard [7], studying the influence of discontinuity spacing on HF propagation, observed that fluids initially penetrate toward the fracture surface and propagate along the maximum horizontal stress direction, and they provided the criterion for HF orthogonally crossing NF. Gu and Weng [8] expanded the R-P criterion to include the non-orthogonal condition. By embedding planar glass discontinuities into a cast hydrostone block as proxies for cemented natural fractures, Olson proposed that HF-NF interaction occurs in three forms: 1) HF bypassing the NF by propagating around it, 2) HF arresting into the natural fracture and then diverting along it, and 3) a combination of bypass and diversion [9], [15]. Renard analyzed the morphology of HF propagation in 3-D space with X-ray computed synchrotron microtomography. The results demonstrated that both hard structures (such as granules) and weak structures (such as pores and particulates) affect the HF extension pathway [10]. Xu [11], [12], [13] and Meyer and Bazan [14] developed a new mathematical model to characterize and predict the growth of induced hydraulic fracture networks in naturally fractured formations. This model consists of two perpendicular sets of vertical planar fractures that mechanically interact, and it considers the presence of injected fluid. Mathematical models and laboratory experiments have been popular methods for analyzing fracture growth, but the two approaches have been studied separately, and studies on the effect of the constitutive properties of rocks are lacking. Thus, this paper combines laboratory testing with numerical simulation to develop a fracturing model that can be widely used.
The remainder of this paper is organized as follows: Section 3 presents how cracks occur in shale reservoirs and establishes the prediction model of fracture pressure under the effect of multiple fractures via the induced stress field. Section 4 analyzes the significance of the technical parameters of the model and discusses the stress distribution around fractures through numerical simulation. Section 5 summarizes the conclusions.
Methodology
The mechanical property of shale is quite different from that of conventional reservoirs because of its special bedding and the development degree of NF. The anisotropic mechanical properties and failure damage characteristics of the Long Ma Xi Shale Formation in Block X were studied through laboratory experiments. The experiments showed that the mechanical parameters and fracture failure characteristics of shale change significantly and exhibit strong anisotropy because of the different geological properties of the various reservoir sections. The results of the mechanical testing analysis of cores from various zones in the X-1 well of Block X were selected to comparatively examine the mechanical properties of the different layers of the shale reservoir.
Uniaxial compression test of rock mechanics
A TAW-2000 computer-controlled servo triaxial testing machine was used in this test (Figure 1); it can perform tests under normal and high temperature and pressure, static and dynamic loading, and uniaxial and triaxial compression. Uniaxial compressive strength, elastic modulus, and Poisson's ratio can be determined by the uniaxial compression test. The initiation mode can also be determined through the fracture morphology. The samples of the No.1 sublayer indicate that the mean value of uniaxial compressive strength is 239.9 MPa, the elastic modulus is 30.2 MPa, and Poisson's ratio is 0.20. These results indicate brittle characteristics with relatively high compressive strength, relatively high elastic modulus, and relatively low Poisson's ratio. The initiation mode of the samples is tensile failure along the dense lamellation plane accompanied by shear slippage damage (Figs. 2 & 3).
Tensile failure criterion of hydraulic fracture occurring within the rock itself
Fracture pressure is closely related to the crustal (in-situ) stress. Tensile failure occurs when the circumferential stress to which the rock is subjected exceeds the rock's tensile strength because of the excessive pressure exerted by the fluid in the well, that is, the criterion of Eq. (1). In the formula, σ_θ is the circumferential stress, MPa, and S_t is the rock's tensile strength, MPa.
Shear failure criterion of hydraulic fracture along the natural fracture
Suppose that NF exist in the primary development zone and that their development and approaching angle remain constant. The weak-surface mode can be used to study the shear failure of HF along NF. For a naturally fractured formation, the weak-plane cohesion is zero. Thus, the fracture criterion [16] for HF along NF is given by Eq. (2). In the formula, σ_1 and σ_3 represent the maximum and minimum principal stress, MPa, μ_w denotes the internal friction coefficient of the plane of weakness, and β_2 is the angle between the normal of the weakness plane and σ_1.
Fracture pressure prediction model under multiple-fracture induced stress
With the constant improvement of stimulation technology in horizontal wells, the scale and number of stages in multi-staged fracturing have increased. The interference effect among multiple fractures changes the stress field of the original wall rock [17], [18]. Thus, examining the mechanism of induced stress, establishing an induced stress field mathematical model, and analyzing the influencing factors are crucial in investigating the fracture initiation regularity of shale reservoirs in a horizontal well. Considering the effect of multi-fracture induced stress on the pressure system and based on the pressure balance, Eq. (4) is obtained. In the formula, P_w is the bottom-hole flowing pressure, MPa, P_E is the fracture propagation pressure, MPa, P_pipe is the pipeline friction pressure, MPa, ΔP_i is the perforation friction pressure, MPa, P_c is the closure pressure, MPa, σ_ξ is the induced stress, MPa, and P_net is the net pressure in the fracture, MPa. Figure 12 shows that the induced stress field model can be simplified into a plane-strain model [19]. According to linear elasticity, when a fracture is open, the induced stress at an arbitrary point is given by Eq. (5) [4], [20]. Based on the generalized Hooke's law, the induced stress along the y axis and the mathematical relationships of the remaining parameters follow (Eqs. (6)-(9)). In the formulas, P is the net pressure over the fracture surface, MPa, and c is the semi-fracture height, m. For n cracks, the induced stress produced by each crack disturbs the original stress field, and the induced stress loading in the minimum horizontal stress direction at the stress superposition area is given by Eq. (10). By substituting Formula (10) into Formula (4), the pressure must increase in a multi-fracture stress field if each fracture is to be stimulated at the same scale. An induced stress correction factor (ISCF) must therefore be introduced. Formula (5) shows that the induced stress is a function of c, r, and θ. In the actual fracturing process, the influence on the principal stress generated by the induced stress of the various fractures should be considered. Thus, the induced stress is a function of fracture height and fracture interval. By defining h/d as a dimensionless fracture space, the mathematical description of the ISCF obtained by regression is given by Eq. (11). In the formula, h is the fracture height, m, and d is the fracture interval, m. In the end, the fracture pressure for the different fracture initiation modes can be obtained by combining Formulas (1), (2), (3), (4), and (11).
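The superposition step of Eq. (10) and the fracture-height/fracture-interval dependence behind the ISCF of Eq. (11) can be illustrated with the sketch below. Because Eqs. (5)-(9) are not reproduced here, the single-fracture induced-stress kernel in the sketch is an illustrative placeholder with the qualitative behavior described in the text (it scales with net pressure and fracture half-height and decays with distance); it is not the paper's exact expression.

```python
import numpy as np

# Sketch of the multi-fracture induced-stress superposition (Eq. (10)) along the
# minimum horizontal stress direction. The single-fracture kernel below is an
# illustrative placeholder (decays with distance, scales with net pressure p and
# half-height c); it is NOT the paper's Eqs. (5)-(9).

def induced_stress_single(p_net, c, r):
    """Placeholder induced stress (MPa) at distance r from one open fracture."""
    return p_net * (1.0 - (r / np.sqrt(r**2 + c**2))**3)

def induced_stress_total(p_net, c, spacing, n_fractures, x_eval):
    """Superpose the contributions of n equally spaced parallel fractures at x_eval."""
    x_frac = np.arange(n_fractures) * spacing          # fracture positions along the well
    return sum(induced_stress_single(p_net, c, abs(x_eval - x)) for x in x_frac)

p_net = 5.0    # MPa, net pressure in the fractures (placeholder)
c = 30.0       # m, fracture half-height (placeholder); h = 2c is taken as the full height

# Scan the dimensionless spacing h/d: induced stress at the midpoint between
# the first two of three fractures, for different fracture intervals d.
for d in (120.0, 60.0, 30.0, 15.0):
    h_over_d = 2 * c / d
    sigma = induced_stress_total(p_net, c, d, 3, x_eval=d / 2)
    print(f"d = {d:6.1f} m  h/d = {h_over_d:4.2f}  induced stress ~ {sigma:5.2f} MPa")
# Smaller spacing (larger h/d) gives larger superposed induced stress, consistent
# with the trends discussed for Figs. 13-16 and the 1 < h/d < 2 "golden window".
```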
Results and discussions
During the fracture growth process in shale gas reservoirs, the stress distribution, the mechanical properties of the strata, the fluid-loss properties of the fracturing fluid, and the pumping mechanism are the main elements governing fracture initiation and geometry. Therefore, determining the stress state of the wall rock is crucial when the initiation and propagation law of a hydraulic fracture is analyzed.
Relationship between induced stress and fracture geometry parameters and laying patterns
Section III discussed that the induced stress can be simplified into a function of fracture height and fracture interval. Combined with field data, the relationship between the induced stress loading in the minimum principal stress direction and the fracture height and fracture spacing was obtained. Figure 13 shows that when the fracture interval is constant, the induced stress increases with fracture height because the stimulated volume in the lengthwise direction increases. Two issues should be considered for shale gas reservoirs: a) the volume of the fracture positively influences the induced stress, so the relationship between operation scale and induced stress should be considered in optimizing the fracturing design; b) the extension of the other fractures will be hindered if the fracture height is uncontrolled at one crack during the fracturing treatment.
When the fracture geometry is constant, the induced stress decreases as the fracture interval increases (Fig. 14). The fracture interval is an effective way to adjust the interference among fractures. Hence, when increasing the complexity of the fracture through fracture interaction is necessary, the fracture interval must be reduced; when decreasing the complexity of the fracture is needed, the fracture interval must be enlarged. When several fracture initiations occur simultaneously, the fractures will interfere with each other, which influences fracture propagation in the next stage. The growth of two to five fractures was simulated, and the changes of the induced stress can be characterized by the ISCF. Figure 15 shows that the fractures interfere with each other during their propagation process. The interference is more evident if more fractures exist. With the interference of the fractures, the variation coefficient of the horizontal principal stress is reduced, which facilitates the formation of a complex fracture network. Thus, the laying pattern is crucial in fracturing design.
Relationship between induced stress and dimensionless fracture space
The mathematical description of the ISCF was presented in Section III, and the key point is to simplify the original mathematical model through the dimensionless fracture space. This paper analyzes the relationship between the dimensionless fracture space and the induced stress, as shown in Fig. 16: a) with the increase of the dimensionless fracture space, the induced stress in the minimum principal stress direction increases; b) the curve is S-shaped. When the dimensionless fracture space is small or large, the increment of the induced stress is relatively steady. In the range 1 < h/d < 2, the induced stress increases rapidly. Thus, this range is the "golden window" for the complex fracture network. Hence, [1, 2] can be used as the reference range for the ratio of half fracture height to fracture interval in fracturing design. A complex fracture can be efficiently formed through the stress interference.
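As a small design aid reflecting the recommendation above, the helper below flags whether a proposed geometry falls inside the 1 ≤ h/d ≤ 2 "golden window"; the function name and interface are illustrative, not taken from the paper.

```python
def in_golden_window(fracture_height_m, fracture_interval_m):
    """Return (h/d, inside) for the 1 <= h/d <= 2 'golden window' recommended in the text."""
    h_over_d = fracture_height_m / fracture_interval_m
    return h_over_d, 1.0 <= h_over_d <= 2.0

# Example: 60 m tall fractures spaced 40 m apart -> h/d = 1.5, inside the window.
print(in_golden_window(60.0, 40.0))
```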
Numerical simulation of circumferential induced stress field
The influence on fracture extension pressure under multi-fracture induced stress was discussed in Section 3.3. The results for the circumferential induced stress of single and multiple fractures, as shown in Figure 15, were obtained according to the deduced equations of induced stress. The blue line represents the induced stress increment when a single fracture initiates, the red line denotes the induced stress increment when multiple fractures initiate, and the x axis is the direction of the maximum principal stress. The simulation results show that a) for an open single fracture, the circumferential induced stress exhibits very good symmetry, whereas for open multiple fractures, the induced stress increment varies at different azimuth angles because of stress interference; b) the maximum induced stress exists in the horizontal principal stress direction, whereas the minimum induced stress occurs at 30° and 150° to the minimum horizontal stress direction. Most horizontal wells are drilled along the minimum horizontal principal stress direction. Conventional fracturing technology could benefit from this because forming a main fracture is easily achieved. However, it could limit shale gas fracturing because high power is required to ensure fracture propagation. Shale gas fracturing requires a high stimulated reservoir volume rather than a long fracture length; thus, the interference among fractures is not fully utilized. A 30° oblique crossing with the minimum horizontal principal stress direction effectively creates a complex fracture.
Conclusion
To address the substantial variation of structure, lithology, and reservoir properties among different blocks, the mechanism of complex fracture propagation was studied theoretically. This paper initially analyzed the reservoir geology and the mechanical characteristics of rocks based on core uniaxial testing and described the different initiation modes in shale reservoirs. The influence of the rock's mechanical parameters on its initiation was examined. The fracture pressure model under multi-fracture induced stress and different initiation approaches was developed based on elastic constitutive theory, combined with linear elastic fracture mechanics and using the principle of superimposed stresses. The following conclusions were drawn: 1) The main rupture mechanics of a shale reservoir was described in three modes: a) tensile failure along shale bedding planes accompanied by shear failure, b) shear failure along shale bedding planes accompanied by tensile failure, and c) shear failure. These three modes determine the fracture initiation criterion and the fracture pressure prediction model.
2) The ISCF was presented and deduced to simplify the equations of the surrounding-rock stress field. The geometry of the fractures and the laying pattern directly affect the degree of complexity of HF. The simulation results indicate that the fracture geometry parameters are positively correlated with the induced stress, whereas the induced stress is negatively correlated with the fracture interval and the number of fractures. In the optimization of fracturing design, a ratio of half fracture height to fracture interval within the [1, 2] range can be used as a reference.
3) The maximum induced stress occurs in the horizontal principal stress direction, whereas the minimum induced stress occurs at 30° and 150° to the minimum horizontal stress direction. A 30° oblique crossing between the drilling direction of the horizontal section and the minimum horizontal principal stress direction is proposed.
This study combined laboratory experiments and theoretical studies and presented new findings on the initiation law of complex fractures. The fracture pressure model was simplified and adapted to actual conditions. All the findings demonstrate their importance for the subsequent development of shale reservoirs. However, actual data on the on-site monitoring of fractures are limited. In future studies, combining monitoring data with this model will provide accurate results on the initiation law of complex fractures.
Fig. 7. Stress-strain curve of No.3 sublayer
The samples of the No.4 sublayer indicate a moderate elastic modulus and relatively low Poisson's ratio. The mean value of uniaxial compressive strength is 219.5 MPa, the elastic modulus is 25 MPa, and Poisson's ratio is 0.23. The results denote moderate compressive strength, moderate elastic modulus, and relatively low Poisson's ratio. The initiation mode of the samples is shear damage along the dense lamellation plane accompanied by an open mode (Figs. 8 & 9).
Fig. 14. Relationship between induced stress and fracture interval in the minimum principal stress direction
Fig. 15.
Fig. 16. Relationship between induced stress and dimensionless fracture space in the minimum principal stress direction
Fig. 17. Numerical simulation of the induced stress field | 2017-11-03T13:02:12.124Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "e00479ede3189a6c27fab7311e302a3276e5230e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.25103/jestr.094.22",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e00479ede3189a6c27fab7311e302a3276e5230e",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |