Assessment of Knowledge and Practices of Mothers on the Home Management of Diarrhea in the Northern Part of Cameroon
Mothers' knowledge and practices on the home management of diarrhea are a key element in the management of diarrhea in a child, allowing early intervention and thereby avoiding diarrhea-related complications. Despite all the strategies put in place in communities and health facilities to reduce childhood mortality in Cameroon, the prevalence of diarrheal diseases remains high, especially in the northern regions. This study aimed to assess the knowledge and practices of mothers on the home management of childhood diarrhea in Ngaoundere, a town in the northern part of Cameroon where the prevalence remains high. This was a cross-sectional, analytical study conducted in the urban zone of Ngaoundere from February 1st to April 30th, 2015. Five hundred and fifty mothers participated in the study. A score was established to rate the variables "knowledge" and "practices" of the mothers. Data were collected with a questionnaire and analyzed with SPSS version 20.0. The threshold of significance was a P-value of less than 0.05. The largest age group of mothers (40.7%) was 15 to 25 years. The most common education level was secondary school (40.4%), and 68.5% were housewives; 76.1% had adequate knowledge of diarrhea and its home management. The mothers' and fathers' levels of education significantly influenced the mothers' knowledge (P = 0.02 and P = 0.000, respectively). Good practices were reported by 66.3% of the mothers. The factors that influenced these good practices were the academic level and profession of the parents (P < 0.01) and the household environment, namely the use of tap water and the presence of latrines (P = 0.013). Most mothers have sufficient knowledge of, and good practices toward, the home management of diarrhea. These are influenced by the level of education and profession of the parents, by health education, and by the environment the family lives in.
Introduction
Acute diarrhea is defined as the emission of at least three watery stools per day, or of stools more frequently than is usual for the individual [1]. It is the second leading cause of childhood mortality after pneumonia, with 1.5 million deaths per year [1]. Diarrhea is a major public health issue in developing countries: it is one of the major causes of infant morbidity and mortality, essentially through its severe consequences, acute dehydration and malnutrition [2]. Oral rehydration salts (ORS) and oral rehydration therapy (ORT), adopted by the United Nations Children's Fund (UNICEF) and the World Health Organization (WHO) in the late 1970s, led to better management of cases of childhood diarrhea [3].

Lupine Publishers | Progressing Aspects in Pediatrics and Neonatology (Open Access)
In Cameroon, despite the existence of effective and available measures, the management of diarrhea remains a major problem, as shown by the 2011 Demographic and Health Survey, in which 21% of children had had diarrhea but only 23% were taken to a health facility for treatment, and only 17% and 22% of the sick children received ORT [4]. Thus, mothers' knowledge and practices regarding diarrhea at home remain a crucial point of management, enabling early intervention at the first signs and the avoidance of complications. That is why we set out to evaluate the knowledge and practices of mothers in the management of diarrhea in children under 5 in the town of Ngaoundere, in the northern region of Cameroon.
Study design and setting
This was a cross-sectional, analytical study conducted from February 1st to April 30th, 2015 in the urban area of Ngaoundere, the capital of the Adamawa region in the northern part of Cameroon. The town has a population of 224,215 inhabitants, of whom 37,892 are children under 5 years and 54,484 are women of childbearing age [5].
Sampling and data collection
We used a pretested questionnaire. For mothers who could not read or write, interpreters translated the questionnaire into the local dialect. Participation in the survey was voluntary and anonymity was preserved. Informed consent was obtained from the parents before inclusion in the study.
Choice of quarters:
To obtain a representative population, four quarters were drawn at random per health area from the total number of quarters in that area, provided they were easy to access.
Choice of homes:
We applied the method used by the WHO for community surveys [6]. We visited the homes identified during recent vaccination campaigns, i.e., homes with children under five. The starting point in each quarter was the chiefdom. We selected a direction at random by tossing a coin (heads or tails), then advanced in that direction until we reached the required number of households; when that number was not obtained, we continued in the opposite direction. The number of mothers interviewed was evenly distributed across the chosen quarters.
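The household walk described above can be sketched in code. This is a minimal illustration under our own assumptions (a quarter modeled as a list of households ordered along the walking direction, with the chiefdom at the midpoint), not the WHO protocol itself:

```python
import random

def select_households(households, sample_size, rng=random):
    """Illustrative sketch of the walk: toss a coin for the direction,
    start from the chiefdom (modeled as the list midpoint), advance in
    the chosen direction, and continue in the opposite direction if the
    target count is not reached."""
    midpoint = len(households) // 2
    if rng.random() < 0.5:  # "heads": walk toward the end of the list
        first_leg = households[midpoint:]
        second_leg = list(reversed(households[:midpoint]))
    else:                   # "tails": walk toward the start of the list
        first_leg = list(reversed(households[:midpoint]))
        second_leg = households[midpoint:]
    return (first_leg + second_leg)[:sample_size]
```

With, say, 20 households and a target of 5, the function returns 5 distinct households regardless of the coin toss, and falls back to the opposite direction when one side is exhausted.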
Selection criteria
We included mothers of children aged less than 5 years who had had at least one episode of diarrhea in the past and who were living in the town. Mothers and guardians with no permanent residence in the area were not included.
Variables studied
Demographic data: maternal age, marital status, educational level and occupation of the mothers and fathers.
Environmental factors: size of the household, access to drinking water, and use of latrines.

To evaluate the knowledge and practices of the mothers, we used cutoff scores established in studies done in Iran in 2013 by Ghasemi et al. [7] and Khalili et al. [8]. Knowledge was assessed with a 6-item questionnaire (sources of information on diarrhea, health education on diarrhea, definitions of diarrhea and dysentery, knowledge of the causes and consequences of diarrhea, knowledge of ORS and zinc, and knowledge of preventive measures), for a total score of 24 points (see Appendix). As for practices, the mothers were asked to describe their home management of diarrhea, using a 9-item questionnaire (the treatment they administered, administration of ORS and zinc, other treatments given to the child, feeding during diarrhea, symptoms that prompted consultation, and the person consulted), with a total of 9 points (see Appendix).
Each correct answer scored 1 point and each wrong answer 0 points. The mean score was obtained by dividing the total score by 2. The women could obtain a knowledge score between 0 and 22; scores below 11, between 11 and 17, and of 18 or more were considered low, medium and good knowledge, respectively. Mothers scoring above average were considered to have adequate knowledge and good practices; mothers scoring below average were considered to have poor knowledge and poor practices. The mothers' environment involved two items, namely the presence of a latrine and the use of home tap water.
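The cutoffs above translate directly into a small classifier; a minimal sketch (treating a score of exactly 18 as "good", per the cutoffs as stated):

```python
def classify_knowledge(score):
    """Classify a mother's knowledge score (0-22) using the study's
    cutoffs: below 11 = low, 11-17 = medium, 18 or more = good."""
    if not 0 <= score <= 22:
        raise ValueError("score must be between 0 and 22")
    if score < 11:
        return "low"
    if score <= 17:
        return "medium"
    return "good"
```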
Data analysis
Data were entered into CS-Pro 4.1 and analyzed with SPSS 20.0. The Chi-squared test was used to examine associations of mothers' knowledge and practices with sociodemographic and environmental factors. A P-value below 0.05 was considered statistically significant.
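For a 2x2 table, the Chi-squared statistic used in this kind of analysis can be computed by hand. A sketch with hypothetical counts (not the study's data), compared against the 3.841 critical value for 1 degree of freedom at P = 0.05:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: adequate/poor knowledge among educated/uneducated mothers
stat = chi2_2x2(120, 40, 80, 60)
significant = stat > 3.841  # critical value at P = 0.05, 1 df
```

Here the statistic is about 10.71, so the (hypothetical) association would be declared significant at the 0.05 threshold.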
Results
We interviewed 540 mothers of children below 5 years of age. The largest age group of mothers (40.7%) was 15-25 years. We noted that 40.7% of them had attended secondary school and 23.7% were uneducated. Most were housewives (68.5%) and lived with a partner (84%). Among the fathers, the most common education level was secondary school (42.4%), and 62.4% exercised liberal professions. More than 55% of the mothers lived in households that used tap water and a latrine. The majority of mothers (76.1%) had sufficient knowledge about diarrhea. Health facilities were the most cited source of information, mentioned by 85.7% of the mothers. Most of them (37.8%) knew the definition of diarrhea as the emission of more than 3 liquid stools per day. Most (91.7%) cited inadequate food hygiene as a cause of diarrhea, and proper food hygiene as a means of prevention (93%).
The factors that significantly influenced the knowledge of mothers on diarrhea were the level of education of the mothers (P = 0.026) and of the fathers (P = 0.000) (Table 1); this knowledge increased with parental education. We found a statistically significant relationship between the mothers' profession and their knowledge: mothers who were civil servants had better knowledge than the others (P = 0.014) (Table 2). The father's occupation also had a significant influence on the mothers' knowledge (P = 0.012); mothers whose husbands were civil servants had better knowledge. Health talks significantly influenced the knowledge of mothers (P = 0.03), and mothers who had received health talks had better knowledge of diarrhea (Table 3). Living in households with tap water and latrines also appeared to be significantly associated with better knowledge (P = 0.000) (Table 4).
Most mothers (66.3%) had good practices with regards to diarrhea. The vast majority (90%) consulted a health facility for every case of diarrhea, and 67.2% and 31.3% reported using oral rehydration salts and zinc, respectively. As for feeding, 90.7% of mothers continued to feed the child during the diarrheal episode and 91.7% continued breastfeeding. Mothers' practices improved significantly with the level of parental education (P = 0.006) and with parental profession (Table 5). The best practices were observed in mothers who were civil servants (P = 0.002) and in those whose husbands were civil servants (P = 0.007). Mothers who had received health education had better practices (P = 0.000) (Table 6). The availability of tap water and latrines had a positive effect on maternal practices (P = 0.013) (Table 7).

* Any profession exercised independently, on the basis of well-defined qualifications, by an individual under his or her own responsibility [15].
** Professions exercised under the responsibility of a third party; referring to civil servants in our study [15].
Discussion
Most of the mothers in our study were in the 15-25 years age group, similar to the 2011 Demographic and Health Survey, in which most of the women surveyed were in the 15-19 years age group [4]. In addition, early marriages, which are frequent in the area, lower the overall age of procreation. We noted that 40.4% of mothers had received secondary education and 23.7% were not educated. This contrasts with the results of the 2007 survey on early childhood development in the Adamawa region, in which 51% of the mothers were not educated [9]; the difference could be due to the fact that that survey covered both urban and rural areas.
In our study, the majority of mothers (76.1%) had adequate knowledge of diarrhea, notably its definition, causes, prevention, and the signs that should motivate consultation. These results could be explained by the introduction of community-based Integrated Management of Childhood Illness (IMCI) in the Adamawa region, which aims to reduce childhood mortality by educating families on the recognition and home management of common childhood illnesses, including diarrhea. Amare et al. [10] in Ethiopia had similar results, with 63.6% of mothers having good knowledge of diarrhea. We observed a statistically significant relationship between the mothers' knowledge and their educational level and profession: knowledge was much better when mothers had a good educational level and when they were civil servants. Indeed, educated mothers have more access to the public service and better understand the information given on diarrhea, thus improving their knowledge. Similar findings were noted in Iran [7,8], Ethiopia [10], and Gambia [11].
We also observed a statistically significant relationship between the mothers' knowledge and the fathers' level of education and profession: the mother's knowledge was better when the father was educated and worked in the public service. An educated father would integrate the information received more easily and consequently pass it on to the mother. In Iran, Ghasemi et al. [7] reported the same relationship. We also noted a statistically significant relationship between the mothers' knowledge and having received health education on diarrhea, showing that health education of mothers is a good means of disseminating information on diarrhea.
There was a statistically significant relationship between the mothers' knowledge and the household environment. Mothers whose water supply was tap water and who had latrines had better knowledge than those who did not. This could be explained by the fact that these households are likely of higher socio-economic status and therefore have easier access to the various sources of information on diarrhea. We observed that 66.3% of the mothers had good practices toward diarrhea, including the use of oral rehydration therapy, feeding during diarrhea, and consultation at a health facility. Given that the level of knowledge was high in our study, this could explain the mothers' good practices. In Ethiopia, 45.9% of mothers had good practices toward diarrhea in 2014 [10], against 2.3% in 1991 [12].
There was also a statistically significant relationship between the mothers' level of education and profession and their practices toward diarrhea. Practices were best in mothers who were educated and were civil servants, and they improved with increasing level of education. These results are similar to those of the 2004 Demographic and Health Survey in Cameroon, in which mothers with secondary schooling made the highest use of ORT [13]. Studies in Ethiopia [10], Nigeria [14] and Iran [7,8] observed similar findings. Indeed, educated mothers would more easily understand and integrate the information received, and this would influence the choice of treatment and hence improve their practices.
A statistically significant relationship was observed between the father's education and profession and the practices of the mothers, which improved with the father's educational level and when the father was a civil servant. This can be explained by the fact that educated fathers can easily read and assimilate information and so educate mothers about the management of children with diarrhea. Also, the father, as head of the family, takes care of the family financially and thus has decision-making power in the management of childhood illnesses.
We found a statistically significant relationship between the mothers' practices and having received health education: mothers who had received health education had better practices, being informed about the appropriate attitude and choice of treatment. We also found a statistically significant relationship between the mothers' environment and their practices: mothers living in households with tap water and latrines had better practices. This can be explained by the fact that mothers living in a favorable environment are of higher socioeconomic level and therefore have easier access to information, hence good knowledge and good practices.
Conclusion
This study shows that most mothers from this urban area in the northern part of Cameroon have sufficient knowledge of diarrhea as well as good practices toward its home management. These are influenced by the level of education of both parents, by health education, and by the environment the family lives in. To sustain this knowledge and these good practices, key health messages on diarrhea prevention and home management should be given to all mothers, and zinc and oral rehydration salts should be made available at all times in all health facilities. Moreover, the community component of IMCI should be reinforced in that region.
A Novel Alpha Cardiac Actin (ACTC1) Mutation Mapping to a Domain in Close Contact with Myosin Heavy Chain Leads to a Variety of Congenital Heart Defects, Arrhythmia and Possibly Midline Defects
Background: A Lebanese Maronite family presented with 13 relatives affected by various congenital heart defects (mainly atrial septal defects), conduction tissue anomalies and midline defects. No mutations were found in GATA4 and NKX2-5. Methods and Results: A set of 399 poly(AC) markers was used to perform a linkage analysis, which peaked at a 2.98 lod score on the long arm of chromosome 15. The haplotype analysis delineated a 7.7-meganucleotide genomic interval that included the alpha-cardiac actin gene (ACTC1) among 36 other protein-coding genes. A heterozygous missense mutation (c.251T>C, p.(Met84Thr)) was found in the ACTC1 gene, changing a methionine residue conserved down to yeast. This mutation was absent from the 1000 Genomes and Exome Variant Server databases but segregated perfectly with the affection status in this family. This mutation and 2 other ACTC1 mutations (p.(Glu101Lys) and p.(Met125Val)) that also result in congenital heart defects are located in a region in close apposition to a myosin heavy chain head region, by contrast to 3 other alpha-cardiac actin mutations (p.(Ala297Ser), p.(Asp313His) and p.(Arg314His)) that result in diverse cardiomyopathies and are located in a totally different interaction surface. Conclusions: Alpha-cardiac actin mutations lead to congenital heart defects, cardiomyopathies and possibly midline defects. The consequence of an ACTC1 mutation may in part depend on the interaction surface between actin and myosin.
Introduction
The prevalence of congenital heart defects (CHD) is about 0.8% of live births, and higher in stillbirths. The etiology of CHD is complex and combines both environmental and genetic causes; more than 50 genes are already associated with CHD in humans [1]. Familial cases of CHD have been very useful in discovering most of these genes. Although these discoveries concern a very small percentage of CHD cases, they shed new light on cardiac diseases by demonstrating in several cases that mutations in a single gene can result in CHD, arrhythmia and/or cardiomyopathies. Thus, mutations in TBX20 [2,3], MYH7 [4,5], NKX2-5 [6,7], GATA4 [8], and ACTC1 [9,10] lead to a variety of cardiac anomalies including CHD, arrhythmia and cardiomyopathies. This is also the case in Noonan syndrome (now part of a larger group of diseases referred to as RASopathies), where CHD and hypertrophic cardiomyopathies are found [11,12]. This is not yet established for all cardiac genes, but it shows that genes important for cardiac development may also be important for cardiac function during adulthood [6,13]. It is not yet clear why mutations in a single gene result in such diverse cardiac anomalies: when a single mutation results in various cardiac diseases, modifier genes could be involved; when different mutations in a single gene result in various anomalies, diverse protein dysfunctions could be the cause.
In this study, we report on a large Lebanese family with 13 affected members suffering from various congenital heart defects, arrhythmia, and valvular and conduction anomalies. In addition, several cardiac patients also have midline defects. A missense mutation was found in the alpha-cardiac actin gene that cosegregated perfectly within the family. This mutation (p.(Met84Thr)) and 2 other mutations also responsible for CHD (p.(Glu101Lys) and p.(Met125Val)) map to a small domain of the actin protein in very close contact with the myosin heavy chain, suggesting that disruption of this interaction domain leads to altered cardiac development.
Patients
Members of this Lebanese family were examined by a cardiologist (clinical examination, resting ECG and echocardiography) and by a geneticist (clinical examination). After informed consent was signed, a peripheral blood sample was obtained and DNA extracted with a standard protocol. The study was approved by the ethical committee of Hôtel Dieu hospital, Beirut, Lebanon.
Genotyping and linkage analysis
Genotyping of all available DNA samples was performed at the National Genotyping Center (CNG, Evry). A panel of 399 microsatellites was tested (Life Technologies, Evry, France). Multipoint linkage analysis was prepared with easyLINKAGE [14] and performed with GeneHunter v2.1r5 [15], initially with a disease allele frequency of 0.01%, full penetrance and a 0% phenocopy rate, and then with penetrance decreasing to 85%. Haplotypes of the region of interest were prepared with GeneHunter, and its output files were used to visualize haplotypes with the HaploPainter 1.043 software [16].
DNA sequencing
The sequences of the NKX2-5 (ENST00000329198) and GATA4 (ENST000335135) exons were obtained by Sanger sequencing. The 6 coding exons of the ACTC1 gene (ENST00000290378) were amplified by PCR with 60 ng of genomic DNA, 1.5 mM MgCl2, 0.5 μM forward and reverse primers, 0.2 mM dNTPs and 1 U of Taq Platinum DNA polymerase (Invitrogen, San Diego, CA, USA) with the appropriate buffer. PCR products were purified with the NucleoFast 96 PCR Clean-up kit (Macherey-Nagel). Sanger sequencing was done with 0.8 μL of BigDye Terminator v1.1 or v3.1 with the appropriate buffer, 0.5 μM of each primer and 1 μL of PCR product, denatured at 95° for 1 minute, followed by 25 cycles of 95° for 1 min, 50° for 1 min and 60° for 4 min 30 s. X-Terminator purification was performed before sequencing on an Applied Biosystems 3730 DNA Analyzer. Sequence analysis was carried out with SeqScape v2.5 and variant analysis with Alamut Visual 2.6.1. The variant was submitted to the LOVD 3.0 shared installation (http://databases.lovd.nl/shared/) and received the variant ID #0000064762.
Structural interpretation of the actin and myosin variants
The interpretation of the structural consequences of specific actin and myosin variants was performed by analyzing the recently published rigor actin-tropomyosin-myosin complex (PDB accession codes 4A7F, 4A7H, 4A7L and 4A7N), reported as the first subnanometer-resolution structure of the actin-tropomyosin-myosin complex in the rigor (nucleotide-free) state determined by cryo-EM [22]. Images were prepared with the MOLMOL software [23].
Linkage analysis and gene mutation identification
Screening of the NKX2-5 and GATA4 genes in the proband's DNA (IV:8) revealed no mutations. A set of 399 poly(AC) markers was used to genotype all available DNA from this family (13 affected and 9 unaffected individuals) (Fig 1). Parametric analysis found only 3 peaks above 0 (S1A Fig); the tallest, located on chromosome 15, reached a lod score of 2.98. This result was robust to re-analysis with penetrance decreased to 85%. With non-parametric parameters, the peak on chromosome 15 was nearly exclusive (S1B Fig). Haplotypes of the chromosome 15 region obtained from the linkage analysis showed that all affected relatives shared the same allele at marker D15S1007, located at genomic position 33,545,560-33,545,736 (GRCh38) on the long arm of chromosome 15, band q14 (S2 Fig). Moreover, a recombination between D15S165 and D15S1007 on the centromeric side (individual II:3) and one between D15S1007 and D15S1012 on the telomeric side (individual IV:8) set the limits of the genomic interval for the causal mutation (D15S165 to D15S1012). This interval of about 7,747,000 nucleotides includes 37 coding genes, among which is ACTC1, encoding alpha cardiac actin 1. Since this gene appeared to be the best candidate, its 6 coding exons were sequenced. A single heterozygous variant was found in the 3rd exon: c.251T>C, changing the methionine at position 84 to a threonine. This variant is absent from 1000 Genomes and the Exome Variant Server. Met84 is highly conserved across species down to yeast, and the physico-chemical properties of methionine and threonine are significantly different (Grantham score of 81 on a range of 0 to 215). Three software tools (GVGD, SIFT, Mutation Taster) predicted it to be a disease-causing variant, but PolyPhen2 predicted it to be benign. The variant was found in all affected individuals and was absent from all unaffected relatives (Fig 1). Taken together, these data led us to conclude that this variant is the causal mutation.
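The nucleotide-to-residue arithmetic behind the c.251T>C / p.(Met84Thr) notation can be checked in a few lines; a minimal sketch using only the two codons involved (standard genetic code):

```python
# Only the two codon translations needed for this check (standard genetic code)
CODONS = {"ATG": "Met", "ACG": "Thr"}

cds_position = 251                           # c.251T>C
codon_number = (cds_position - 1) // 3 + 1   # 1-based residue affected
offset = (cds_position - 1) % 3              # 0-based position within the codon

reference = "ATG"                            # codon 84 encodes methionine
mutant = reference[:offset] + "C" + reference[offset + 1:]
```

The computed codon_number is 84, and the mutant codon ACG encodes threonine, consistent with p.(Met84Thr).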
3D structure analysis
The Met84 residue is found within a surface-exposed helix of the globular part of F-actin. Interestingly, in all previously published F-actin fiber structures, there is no evidence of a native actin-actin contact surface involving residue 84 in actin filament reconstructions (Fig 2A). Looking closely at the 3D structure of globular actin, this mutation resides in a region of the actin filament in extremely tight apposition to the myosin head (Fig 2B). Furthermore, two other well-characterized actin mutations (p.(Glu101Lys) and p.(Met125Val)), which also result mainly in atrial septal defects [9,25,26], are located in the same subdomain (Fig 2B). These disease-causing mutations disrupt both electrostatic and hydrophobic contacts, thereby directly perturbing the interaction between actin, tropomyosin and myosin. All published human ACTC1 mutations are summarized in Table 1. Note that in several publications the amino acid position was given after subtraction of the first two residues, which are removed during actin maturation (the originally reported Met123Val mutation should actually be described as Met125Val). Interestingly, ACTC1 mutations resulting in congenital heart defects (essentially atrial septal defects) are restricted to the first half of the protein (from residue Met84 to residue Met178 [27]). Beyond residue Met178, all reported ACTC1 mutations result in diverse cardiomyopathies [10,28-31], with an unusual prevalence of noncompaction or hypertrophy of the apex.
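The numbering caveat above (mature vs. precursor actin) is easy to get wrong when comparing reports; a trivial converter, assuming exactly two N-terminal residues are removed during maturation, as stated:

```python
def mature_to_precursor(position):
    """Convert a residue number reported on mature actin (the first two
    residues are removed during maturation) to precursor numbering."""
    return position + 2

# Example from the text: the originally reported "Met123Val"
# corresponds to Met125Val in precursor numbering.
```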
Discussion
The list of genes involved in congenital heart defects (CHD) is growing [1]. At the same time, the frontier between CHD and cardiomyopathies (CM) is becoming blurred: there are reports of familial cases with relatives affected by CM, CHD or both, and the list of genes that can lead to CM and/or CHD is also growing [2-8,11,12]. The actin gene is one of them. ACTC1 mutations were first identified in a series of dilated CM [30]; other mutations can result in apical hypertrophic CM, while others are associated with CHD, in particular atrial septal defects (ASD). In this report, carriers of the p.(Met84Thr) missense mutation suffer from a variety of CHD (ASD, Ebstein anomaly, VSD) but also from valvular anomalies (aortic and pulmonary stenosis, mitral regurgitation and stenosis), conduction tissue anomalies (sinus bradycardia, WPW syndrome), and CM (hypertrophic CM in one patient and presumably dilated CM in the founder and his sister). Thus, alpha cardiac actin 1 mutations can result in a variety of cardiopathies even within a single family, suggesting that modifying factors might modulate the expressivity of ACTC1 mutations.
In addition to cardiac signs, several patients showed midline anomalies (diastema of the upper incisors, cleft lip, hypertelorism, kyphoscoliosis and pectus excavatum). Midline defects are very rare, and the occurrence of 5 cases (6 if the grand-aunt I:3 is included) in a single family cannot be fortuitous; moreover, each relative with a midline defect actually has more than one midline defect sign. The co-segregation of midline defects with cardiac anomalies and the Met84Thr mutation suggests that both cardiac and midline defects could be secondary to the ACTC1 Met84 mutation. The co-occurrence of cardiac and midline defects was never previously reported in ACTC1 mutation carriers. Alternatively, a second mutation in another gene could be present in this minimal genomic region. In fact, among the 36 other genes in the interval (S1 Table), excluding ACTC1, there is a single gene that can lead to midline defects. This gene, SLC12A6, leads to mild midline defects similar to the ones reported in this study; however, it also leads to mental disability, complete or partial agenesis of the corpus callosum, and severe peripheral neuropathy [32]. This severe disease is autosomal recessive, an inheritance pattern we can rule out in this Lebanese family because individuals with midline defects share only one parental haplotype, as evidenced by the linkage analysis. This report should prompt cardiologists to pay attention to midline anomalies in familial ASD; it is possible that midline defects have been overlooked and therefore gone unreported in previous ACTC1 mutation reports.
The actin p.(Met84Thr) mutation identified in this work, as well as the previously identified p.(Glu101Lys) [9,25] and p.(Met125Val) [26] mutations, result mainly in ASD. These 3 mutations occur in a small, spatially well-defined, surface-exposed region of F-actin in very close contact with myosin monomers, as observed in the most accurate muscular fiber reconstruction published to date. The importance of this region for the actin-myosin interaction is supported by previous mutagenesis studies: altering the charge of residue Glu101 by histidine substitution reduces in vitro motility fivefold [33], and the p.(Met125Val) substitution shows significantly reduced affinity for myosin [26]. Another argument for the importance of the tight actin-myosin interaction comes from a careful analysis of the positions of actin residues 297 [29], 313 [28] and 314 [30] in the 4A7L actin-myosin complex. Mutations of these residues are known to lead to various cardiomyopathies, and they also make extremely close contact with the adjacent myosin monomer, but through a totally different interaction surface compared to residues 84 (current study), 101 [9,10,25] and 125 [26] (Fig 2C).
The human ACTC1 gene produces a protein with 94% homology to the gamma actin gene (ACTG1) product. Mutations in this latter gene were associated with dominant progressive deafness [34], a disease in which sensorineural hearing loss begins in the twenties in the high frequencies and steadily progresses to include all frequencies. Although the two non-muscular actin genes (ACTG1 and ACTB) are expressed concomitantly in all mammalian cells, the auditory hair cell is one of the rare cell types where the predominant isoform is gamma actin (ACTG1). In auditory hair cells, the gamma actin protein is found in stereocilia, the cuticular plate, and adherens junctions. One of the six mutations found in the initial report is a threonine-to-isoleucine change at position 89, a position very close to the congenital cardiac disease ACTC1 mutations. Although the gamma actin residue at the corresponding site is not a threonine but a valine, it is interesting to comment on the changes induced by this p.(Thr89Ile) missense variant. It was tested in the yeast Saccharomyces cerevisiae because yeast actin is 91% identical to human gamma actin and the Thr89 residue is conserved in both species. The Thr89Ile mutation resulted in a higher population of cells with fragmented and/or depolarized cables and uniform distribution of patches in both mother cell and bud in comparison to wild-type [35]. However, the in vitro ability of purified Thr89Ile mutant actin to polymerize was not grossly modified, suggesting an altered in vivo interaction with one or more of the numerous actin-binding proteins known to control actin cytoskeletal function and dynamics. This was indeed the case, since p.(Thr89Ile) variant F-actin is much more susceptible to cofilin disassembly than wild-type actin [36]. Cofilin severs F-actin and sequesters actin monomers. This result suggests that the Thr89 residue is involved in non-actin protein interaction and that mutations in this region destabilize the protein interaction with cofilin, in a presumably similar way as the cardiac actin Met84 mutation might destabilize the actin/myosin interaction.
[Fig 2 legend: (B) … in green; the myosin head is shown in brown. The interatomic distances measured in the complex between residues 84, 101 and 125 and the myosin surface typically range from 3 to 10 Å. The 562-571 region of the myosin head makes numerous contacts with the surface of the actin filament, and interacts closely with residues 84, 101 and 125 on the surface of actin. (C) Close-up showing one myosin head interacting with the 84, 101 and 125 region of one actin monomer, and the 297, 313 and 314 region of an adjacent actin monomer. The 562-571 region of the myosin head closely interacts with residues whose mutation leads to atrial septal defects (84, 101 and 125, in red), whereas the 367-365 region (human numbering) of the same myosin head interacts directly with an adjacent actin monomer (residues 297, 313 and 314, in green), whose mutation leads to cardiomyopathies. The orientation of the actin monomers in panels A and B is similar, whereas the molecules in panel C have been rotated for a better view of the interaction with residues 297, 313 and 314. doi:10.1371/journal.pone.0127903.g002]
[Table legend fragment: In the first column (familial/sporadic), the number of reported cases is noted (for instance, 13x: 13 reported cases). In de novo mutations, the age at …]
In conclusion, we reported a novel ACTC1 gene mutation that resulted in various congenital heart defects and arrhythmia. This family study suggested that ACTC1 mutation could also lead to midline defects. Finally, we provided evidence to possibly explain the pleiotropic consequences of ACTC1 gene mutations by pointing to particular molecular domains where actin and myosin heavy chain are in close contact. Depending on the domain affected, ACTC1 mutations lead preferentially either to congenital heart defects or to cardiomyopathies.
Plastic Antibodies Mimicking the ACE2 Receptor for Selective Binding of SARS‐CoV‐2 Spike
Abstract Molecular imprinting has proven to be a versatile and simple strategy to obtain selective materials also termed “plastic antibodies” for a wide variety of species, i.e., from ions to macromolecules and viruses. However, to the best of the authors’ knowledge, the development of epitope‐imprinted polymers for selective binding of severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is not reported to date. An epitope from the SARS‐CoV‐2 spike protein comprising 17 amino acids is used as a template during the imprinting process. The interactions between the epitope template and organosilane monomers used for the polymer synthesis are predicted via molecular docking simulations. The molecularly imprinted polymer presents a 1.8‐fold higher selectivity against the target epitope compared to non‐imprinted control polymers. Rebinding studies with pseudoviruses containing SARS‐CoV‐2 spike protein demonstrate the superior selectivity of the molecularly imprinted matrices, which mimic the interactions of angiotensin‐converting enzyme 2 receptors from human cells. The obtained results highlight the potential of SARS‐CoV‐2 molecularly imprinted polymers for a variety of applications including chem/biosensing and antiviral delivery.
Introduction
Molecular imprinting strategies have been extensively applied for the synthesis of selective polymeric materials, especially for low-molecular-weight molecules. [1] The ability to selectively bind a target species led to molecularly imprinted polymers (MIPs) also being termed "plastic" or "artificial" antibodies. [2] They […] corresponding receptors. [10] Epitope imprinting requires identifying those active protein sites, synthesizing the epitope peptide, and using it as a template for molecular imprinting. [9] The resulting MIP may then recognize the entire target protein via binding of the selected epitope region to the imprinted moieties at the surface of the MIP. This approach overcomes the drawbacks of using entire proteins as templates, as the epitope structure is simpler, more resistant to the synthesis conditions, and more easily removed from the resulting polymer matrix. Additionally, epitope peptides can be custom synthesized and are significantly cheaper than native proteins. Furthermore, epitope imprinting is an attractive approach to produce imprinted materials for virus recognition. [11,12] Beyond the advantages mentioned above, using epitopes as templates, as in the present study, also prevents direct contact with infectious viruses during MIP synthesis, and it does not require facilities with appropriate biological safety protocols. [13] Coronaviruses are a group of enveloped RNA viruses that can infect mammals and birds and may cause respiratory diseases ranging from mild to lethal. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been causing a severe outbreak worldwide since 2019, with millions of deaths and exceedingly high economic losses. SARS-CoV-2 is composed of four main structural proteins: nucleocapsid protein, envelope protein, membrane protein, and spike protein. [14] The latter is located on the virus surface and is the key interaction point to infect host cells.
Hence, substantial attention has been devoted to the SARS-CoV-2 spike protein due to its role in receptor binding. Angiotensin-converting enzyme 2 (ACE2) is the human receptor for SARS-CoV-2 and promotes the entry of the virus into cells. [15,16] Therefore, the investigation of compounds that may interact with ACE2 and subsequently block SARS-CoV-2 infection is one of the most promising approaches to treat and prevent such virus infections. [17,18] Consequently, a variety of natural products have been identified that could act as therapeutic agents against SARS-CoV-2 via blocking of the spike protein sites. However, identifying such molecules is a complex task involving extensive computational simulations, as well as extraction, isolation, and purification of the target compounds in sufficient quantities to perform affinity and antiviral experiments. [19-21] Hence, molecular imprinting is a tool with substantial potential to obtain synthetic materials that may act similarly to natural inhibitors by selectively interacting with specific sites of viruses, and consequently blocking infection. Organic molecularly imprinted polymers for SARS-CoV-2 recognition were recently presented using the receptor-binding domain of SARS-CoV-2 and the nucleoproteins as templates in the molecular imprinting process. [22,23] In this context, nanomaterials have also been presented as promising alternative approaches to treat viral infections through different strategies, as recently reviewed. [24,25] Consequently, the present study aimed at designing and synthesizing the very first silane-based silica core/shell MIPs using an epitope peptide from the SARS-CoV-2 spike protein as a template, which could then act as a synthetic ACE2 receptor and bind to SARS-CoV-2.
Identification of the Epitope Template
The identification of the epitope employed as a template is extremely important to achieve polymers selective for the whole protein or virus. The epitope must be located at the protein surface and available to interact with external receptors. The SARS-CoV-2 envelope presents a homotrimeric spike glycoprotein, each monomer of which is composed of S1 and S2 subunits and binds to cellular receptors. The initial SARS-CoV-2 infection step occurs through the interaction of the virus spike protein with angiotensin-converting enzyme 2 (ACE2) from human cells, which is the entry point for SARS-CoV-2. The structure of the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein bound to the cellular ACE2 receptor was previously elucidated, as presented in Figure 1, [15,16] which provides the information needed to identify a suitable epitope candidate for molecular imprinting.
The epitope template was selected based on the maximum number of residues contacting ACE2. A peptide comprising 17 residues (F486-G502) was selected, which presents ten contact moieties with ACE2 during the interaction, with the following amino acid sequence: FNCYFPLQSYGFQPTNG. [15]
Computational Simulations
The SARS-CoV-2 epitope target is composed of 17 amino acids with different chemical functions. Six residues present hydrophobic side chains (three phenylalanine, two tyrosine, and one leucine), five of which are aromatic; six residues present polar side chains (two asparagine, two glutamine, one serine, and one threonine). Based on the epitope amino acid composition, phenyltriethoxysilane (PTES), 3-aminopropyltriethoxysilane (APTES), ureidopropyltrimethoxysilane (UPTMS), and tetraethylorthosilicate (TEOS) were selected as functional monomers to cover a wide range of chemical interactions with the epitope template and were used in the theoretical simulations. We report, for the first time, modeling of four organosilane monomers with a target SARS-CoV-2 epitope to substantiate the monomer-template complex formation in a pre-polymerization mixture. The qualitative assessment of each complex is based on an assigned quantitative score intended to correlate with the free energy of binding. [26] The basic framework and scoring function of Autodock are used for docking and revealing the dynamics of the monomer-peptide interactions in the multicomponent system.
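The residue tally above can be checked with a few lines of Python; the side-chain classification sets used here are a conventional simplification introduced for illustration, not taken from the paper.

```python
# Tally residue classes in the SARS-CoV-2 spike epitope template (F486-G502).
EPITOPE = "FNCYFPLQSYGFQPTNG"

HYDROPHOBIC = set("FYLIVWMA")   # simple hydrophobic side-chain set (assumed)
AROMATIC = set("FYW")
POLAR = set("NQST")             # the polar residues named in the text

def tally(seq):
    """Count residues of each class in a one-letter amino acid sequence."""
    return {
        "length": len(seq),
        "hydrophobic": sum(r in HYDROPHOBIC for r in seq),
        "aromatic": sum(r in AROMATIC for r in seq),
        "polar": sum(r in POLAR for r in seq),
    }

print(tally(EPITOPE))
# {'length': 17, 'hydrophobic': 6, 'aromatic': 5, 'polar': 6}
```

The counts reproduce the composition stated in the text: 17 residues, six hydrophobic (five of them aromatic), and six polar.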
The principal analysis of the docking outcomes for MIP development is fundamentally different from that used for drug design. Unlike drug design approaches, which start with screening lead compounds with the strongest binding affinity to a targeted active site of a protein, molecular imprinting aims at effectively mimicking antigen-antibody interactions forming a complex resulting from multipoint noncovalent (i.e., weak) interactions. Therefore, we consider the free energy of binding of each monomer relative to the entire protein and all the binding modes in a fixed energy range. In this way, we analyze similar binding affinities of different monomers at multiple residues and map the interactions to cover the entire epitope.
The monomers used in this study yielded binding affinities in the range of −3.31 to −2.05 kcal mol−1. The molecular and monomer-peptide interactions were modeled for each monomer based on the highest docking score (Figures 2 and 3). Considering the large number of docking runs, the mean binding energies of the first cluster rank, as obtained in the clustering histogram, were first used to compare monomer-peptide affinities assigned by the empirical scoring function of Autodock (Table 1). Across repeated docking studies, the PTES monomer consistently yielded the lowest binding energy, indicating the highest affinity for the peptide. The choice of this monomer is predominantly based on its interactions with the hydrophobic residues of the peptide (Figure 2a). The peptide contains three phenylalanine (PHE) and two proline (PRO) residues contributing the major interactions with PTES, which are predominantly stacked π-π interactions and π-alkyl interactions, respectively. The polar hydroxyl groups interact with the asparagine (ASN) and glutamine (GLN) residues through hydrogen bonding. Interestingly, the molecular models show GLN and PRO residues forming carbon-hydrogen bonds and alkyl-based interactions, respectively, with the ethoxy side chains of PTES (Figure 2a).
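The ranking step described above (comparing mean binding energies of each monomer's first cluster rank) can be sketched as follows; the energy values are illustrative placeholders within the reported −3.31 to −2.05 kcal mol−1 range, not the paper's raw docking data.

```python
# Rank monomers by the mean binding energy (kcal/mol) of their
# lowest-energy docking cluster; more negative = higher affinity.
# Placeholder values for illustration only.
first_cluster_energies = {
    "PTES":  [-3.31, -3.28, -3.25],
    "APTES": [-2.90, -2.85],
    "UPTMS": [-2.60, -2.55, -2.50],
}

def mean(xs):
    return sum(xs) / len(xs)

# Sort ascending so the most negative (strongest-binding) monomer comes first.
ranking = sorted(first_cluster_energies, key=lambda m: mean(first_cluster_energies[m]))
print(ranking[0])  # PTES
```

With these placeholder energies the ranking reproduces the paper's observation that PTES shows the highest affinity for the peptide.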
Considering that organosilanes readily undergo reactive hydrolysis during the formation of crosslinked polymer networks, the silanol groups (SiOH) can be expected to participate in non-covalent interactions in the pre-polymerization complex. However, each monomer type hydrolyzes at a different rate, which is also affected by the presence of the peptide in solution, so non-hydrolyzed alkoxy groups of the monomers may coexist in solution and contribute additional interactions. These can directly impact the preservation of the peptide conformation during the imprinting process. The latter are unique interactions possible only in the case of MIPs and not NIPs, thus determining the resulting selectivity factor.
In the case of UPTMS, the urea moiety is the main center involved via H-bonding with the peptide (Figure 2b). Moreover, the propyl chain interacts via van der Waals forces with the aliphatic amino acids (leucine (LEU), glycine (GLY)), and π-alkyl interactions with the aromatic amino acids (PHE, tyrosine (TYR)). The highly polar silanol groups in the case of UPTMS can form large, interconnected networks of H-bonds with asparagine, glutamine, serine, and tyrosine. Methyl side groups however form fewer carbon-hydrogen bonds with PHE, TYR, and GLN. Most of the interactions with this monomer are distributed over the neutral and hydrophilic residues, unlike PTES.
The ethyl side chains in APTES (like PTES) participate in a range of hydrophobic interactions such as alkyl, π-alkyl, and π-sigma interactions with amino acids like PRO, LEU, TYR, and PHE (Figure 2c). The polar amino group and the central propyl chain participate in H-bonding and van der Waals interactions, similar to UPTMS. Owing to the structural similarities, APTES shares the set of interactions with PTES and UPTMS, thus enhancing the overall strength of binding to the peptide.
TEOS is a typical representative of the silane-based monomers but, unlike the others, carries no additional functional groups. Its main interactions are possible via the silanol and ethoxy groups (Figure 2d). The results with TEOS were only used for studying the interactions with the peptide and not for comparison with the other monomers, given its main role as a cross-linker. For the other monomers, we analyzed the binding energy range of −3.31 to −2.05 kcal mol−1 via 2D interaction maps generated in the Discovery Studio Visualizer. This is represented by a color-coded map of the sequence indicating the multipoint interactions realizable with the peptide in solution (Figure 3). The unique complementarity to each residue, as predicted from the molecular models, enabled the rational selection of monomers for the design of MIPs against the SARS-CoV-2 epitope.
For the functional monomers, the computed binding energies correlate well with the experimental results. For example, PTES has the highest affinity with the peptide. When applied in an optimized combination, the binding capacities of the MIPs may substantially improve, as evident in combination 1 versus 6.
Epitope Immobilization onto Silica Particles
The immobilization of the template at the surface of the silica particles improves the imprinting efficiency by positioning the template around the silica particle prior to the polymerization. The SARS-CoV-2 epitope was immobilized at the surface of the silica particles functionalized with glutaraldehyde, which may react with several amino acids at the epitope, such as lysine, tyrosine, tryptophan, and phenylalanine. [27,28] Because the target SARS-CoV-2 epitope presents three phenylalanine and two tyrosine along the amino acid chain, the epitope can be anchored on the silica particle surface through five sites, which limits the free movement of the epitope during polymerization resulting in more effective imprinted moieties, as represented in the scheme in Figure 4. This approach was previously described for the immobilization of peptides serving as templates during MIP synthesis for the purification of human hemoglobin. [29] The epitope concentration was monitored during the incubation with glutaraldehyde-functionalized silica particles to ensure its immobilization. After 1 h of incubation, 80% of the added epitope was immobilized at the silica particle surface.
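The 80% figure above follows from monitoring the free-epitope concentration during incubation; a depletion-based estimate of the immobilization efficiency can be sketched as below. The concentration values are hypothetical examples, not measurements from the paper.

```python
# Immobilization efficiency from the drop in free-epitope concentration
# during incubation with glutaraldehyde-functionalized silica particles.
def immobilization_efficiency(c_initial, c_remaining):
    """Fraction of the added epitope bound to the particle surface."""
    return (c_initial - c_remaining) / c_initial

# Hypothetical example: 0.60 mg/mL added, 0.12 mg/mL still free after 1 h
# -> 80% immobilized, matching the efficiency reported in the text.
print(round(immobilization_efficiency(0.60, 0.12), 2))  # 0.8
```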
MIP Design and Synthesis
The imprinted polymer particles were based on silica cores using organically modified silanes (aka ormosils), which present some advantages compared to purely organic-based polymers such as low reactivity in a variety of conditions (e.g., strong acids and bases, oxidizers, etc.), which results in a robust matrix that can be applied in a range of chemical and biochemical environments. Additionally, very well-defined binding moieties are achieved due to the rigid and highly crosslinked silane/silica structure, which results in superior selectivity compared to purely organic polymers that are more flexible. Silica also presents irrelevant swelling at different solvent conditions, which contributes to the maintenance of the size and shape of the binding moieties. Additionally, organosilica hybrid materials are readily obtained using molecular precursors that can take part in the hydrolysis and condensation reactions as the metal alkoxide, which is a versatile approach to adjust the selectivity of the MIP by selecting the most appropriate functional groups to interact with the epitope template.
Different monomer proportions were evaluated to achieve optimized imprinting efficiency ( Table 2). The amount of TEOS on the polymer compositions was kept constant, as it acts as a cross-linker and presents only a minor influence on the intermolecular interactions between the epitope and the polymer due to the absence of functional groups. The PTES proportion was evaluated in a wider range due to the high number of amino acids with aromatic side chains on the SARS-CoV-2 epitope, which could improve imprinting efficiency through the π-interactions between polymer matrix and target epitope.
The MIP performance varied according to the polymer composition, as presented in Figure 5. While PTES presents the
MIP/NIP Characterization
The polymers obtained at the optimum composition were physically and chemically characterized via scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). The surface chemical composition presented in Table 3 confirms the modification of the silica particle surfaces through the synthesis steps. Glutaraldehyde-modified silica particles present higher contents of C and O, and thus a lower Si content, due to the glutaraldehyde functionalization of the particle surfaces. Silica particles covered by MIP and NIP presented higher C content due to the organic functional groups of the monomers employed for the polymer synthesis. The particles had a diameter of ≈0.5 µm, as shown in the SEM images in Figure 6. The MIP and NIP layer around the silica particles does not significantly change their diameter, since it must be thin enough to avoid covering the immobilized epitopes on the silica particle surface, which are removed after the polymerization to create the imprinted sites. Measurements of the zeta potential of the particles at different steps of the polymer synthesis demonstrate the changes in the surface composition of the particles, as presented in Table 4. Significant differences in the surface charges were observed after functionalization of the particles with glutaraldehyde. Despite MIP and NIP having the same chemical composition, the MIP presented a lower zeta potential, probably due to the different orientation of the functional groups on the surface during the formation of the imprinted sites.
Selectivity of the Polymers
The selectivity of the MIP and NIP was evaluated against three peptides with different amino acid compositions but similar lengths (peptide 1: MIVNDTGHETDENRA; peptide 2: TECSNLLLQYGSFCTQL; peptide 3: KLPDDFTGCV) and two human proteins (HSA: human serum albumin; HH: human hemoglobin). MIP and NIP presented a similar binding capacity for these peptides and proteins, indicating that the molecular imprinting for the SARS-CoV-2 epitope does not improve the adsorption of other peptides, as shown in Figure 7.
Besides the enhanced recognition ability against the epitope peptide used as the template, it is anticipated that the MIP also provides a higher affinity for the entire SARS-CoV-2 spike protein and, consequently, the SARS-CoV-2 virus. To investigate this, MIP and NIP particles were incubated with SARS-CoV-2-spike-containing pseudoviruses and the residual infectivity in the supernatant was quantified, revealing the ability of MIP/NIP to capture spike-containing viral particles. The MIP presented a significantly better affinity for the SARS-CoV-2 spike particles than the NIP, achieving an IF of 4.1, while an IF of only 1.2 was observed for VSV-G glycoprotein particles, as shown in Figure 8. The same procedure was performed with phosphate-buffered saline (PBS) buffer as control (i.e., in the absence of any polymers) and with neutralizing agents for the spike protein (anti-S mAb, Bamlanivimab, 35 µg mL−1) and VSV-G (anti-G Hy, I1-hybridoma supernatant) to confirm that infectivity is solely conferred by SARS-CoV-2 spike or VSV-G in the respective pseudoviruses and may be entirely blocked by specific agents. The performance of the developed MIPs highlights their potential as "plastic antibodies" for the treatment of SARS-CoV-2 via strategies including but not limited to drug-free therapeutics, whereby the MIPs may act as an artificial ACE2 receptor targeting the active sites of the SARS-CoV-2 spike protein, thereby preventing the infection of cells. Furthermore, MIPs may be loaded with antiviral agents and act in a dual-mode approach combining drug delivery with binding to the spike protein. In the future, MIPs may also serve as an immunoprotective vaccine or a tool to improve chem/biosensors for the detection of SARS-CoV-2.
The obtained results also highlight the viability of using an epitope as a template in the molecular imprinting process to achieve excellent selectivity and recognition ability for an entire protein or virus. In addition, using peptides as templates has several advantages, including easy and cheap access to customizable sequences, whereas native proteins are usually expensive and available only in small quantities, which limits their application as templates in molecular imprinting routines. Last but not least, using epitope peptides as templates does not require special facilities to handle pathogenic species during the MIP synthesis.
Conclusions
A silane/silica-based core/shell MIP that mimics ACE2 receptor and efficiently binds to SARS-CoV-2 spike RBD protein was developed. Computational simulations enabled the rational selection of the most suitable silane-based monomers via screening the interactions with the SARS-CoV-2 epitope, which resulted in MIPs of superior binding capacity for the selected peptide. The developed MIPs presented superior recognition abilities against pseudoviruses containing SARS-CoV-2 spike proteins, which highlights their substantial potential during the treatment and diagnosis of SARS-CoV-2 viruses. Since the developed MIPs mimic the ACE2 receptor and bind to the SARS-CoV-2 virus, they offer future perspectives on drug-free therapeutics by blocking the infection of healthy cells. Finally, using an epitope peptide as the template during the molecular imprint synthesis renders this strategy safe, cheap, and easy to apply for a wide range of similar scenarios.
Pseudovirus Production: For the production of VSV pseudoparticles expressing SARS-CoV-2 spike (Wuhan Hu-1) or VSV-G (VSV serotype Indiana), HEK293T cells were transfected with expression plasmids as described previously. [30] One day after transfection, cells were inoculated with VSV*ΔG-FLuc and incubated for 1-2 h at 37 °C. VSV*ΔG-FLuc is a replication-deficient VSV vector in which the genetic information for VSV-G was replaced by genes encoding two reporter proteins, enhanced green fluorescent protein and firefly luciferase (FLuc); it was kindly provided by Gert Zimmer, Institute of Virology and Immunology, Mittelhäusern, Switzerland. [31] The inoculum was removed, cells were washed with PBS, and fresh medium containing anti-VSV-G antibody (I1-hybridoma cells; ATCC CRL-2700) was added to block residual VSV-G particles when producing spike-containing pseudoparticles (but not for VSV-G-containing pseudoparticles). After 16-18 h, the supernatant was collected and centrifuged (2000× g, 10 min, room temperature) to clear cellular debris. Samples were then aliquoted and stored at −80 °C.
Equipment: The morphology of the particles and their composition were investigated using a Quanta 3D FEG SEM equipped with a focused gallium ion beam (FIB) (FEI Corp., Eindhoven, The Netherlands), and an EDX detector (Apollo XV SDD, EDAX). Peptide detection was performed in a Specord S600 UV/Vis spectrophotometer (Analytik Jena). Zeta potential measurements were performed with a Zetasizer NANO ZSP (Malvern, Herrenberg, Germany).
Computational Simulations: The structural files for PTES, APTES, UPTMS, and TEOS were obtained from PubChem databank. The epitope/peptide structure (FNCYFPLQSYGFQPTNG) was extracted from the crystal structure of the SARS-CoV-2 receptor-binding domain (ID:7JMO) using UCSF Chimera. [32] Autodock tools were used for molecular docking. Autodock is probably the most commonly used open-source molecular docking software based on the AMBER force field suitable for proteins, nucleic acids, and other organic molecules. [33] The ligand files were converted to pdbqt files after setting the torsional degrees of freedom based on the detected rotatable bonds. For docking of silane molecules, the parameters for Si were added in the AD4.1_bound and AD4_parameters data files (see the Supporting Information). Polar hydrogens were added to the peptide. Any water molecules were removed. A grid box of the dimensions 62 × 98 × 66 Å was centered around the peptide. The number of energy evaluations was set to the maximum (25 million evals) to improve the reproducibility and accuracy of the calculations. Furthermore, the number of docking runs was set to 100, specified by the ga_run parameter. For molecular docking, the Lamarckian Genetic Algorithm (LGA) was used. Docking results were analyzed using BIOVA Discovery Studio Visualizer software and UCSF Chimera. [32,34] Silica Particle Synthesis and Functionalization: Silica particles were synthesized based on the Stöber method. [35] Briefly, ethanol (70 mL), ammonia (40 mL), and deionized water (20 mL) were added into a 250-mL round-bottom flask and stirred at 600 rpm by 5 min followed by the addition of TEOS (10 mL). The mixture was stirred for 20 h at 600 rpm and at room temperature (22 °C). The obtained silica particles suspension was centrifuged at 6500 rpm for 10 min and washed with one portion of ethanol (≈30 mL) and three portions of water (≈30 mL). The silica particles were dried in a vacuum oven at 40 °C and 200 mbar for 48 h.
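The 62 × 98 × 66 Å grid box above is centered around the peptide; one common way to obtain such a center is the arithmetic mean of the peptide atom coordinates, sketched below. The coordinates are hypothetical placeholders, not taken from PDB 7JMO.

```python
# Center a docking grid box on a peptide by averaging its atom coordinates.
# The box dimensions themselves (62 x 98 x 66 A) come from the text;
# the atom coordinates below are hypothetical.
def box_center(coords):
    """Arithmetic mean of a list of (x, y, z) coordinates."""
    n = len(coords)
    return tuple(sum(c[i] for c in coords) / n for i in range(3))

atoms = [(10.0, 4.0, -2.0), (14.0, 8.0, 2.0), (12.0, 6.0, 0.0)]
print(box_center(atoms))  # (12.0, 6.0, 0.0)
```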
Amino functionalization of the silica particles was performed as follows: silica particles (300 mg) were suspended in water (15 mL) in an ultrasonic bath for 30 min, followed by the addition of APTES (105 µL) and stirring at 600 rpm for 30 min. The amino-functionalized particles (NH 2 -SP) were washed three times with deionized water to remove unreacted APTES.
Glutaraldehyde functionalization was performed by resuspending NH 2 -SP in deionized water (15 mL) followed by the addition of glutaraldehyde (180 µL) and stirring at 600 rpm for 30 min. Glutardialdehyde-functionalized silica particles (Glut-SP) were washed with three portions of deionized water and dried in a vacuum oven at 40 °C and 400 mbar overnight.
Synthesis of MIP and NIP: The MIP synthesis consisted of three steps. I) Immobilization of the epitope template on the surface of the Glut-SP: Glut-SP (30 mg) were resuspended in PBS buffer (5.0 mL, pH 7.4), and peptide solution (300 µL, 10 mg mL−1) was added to the suspension and incubated for 1 h under 700 rpm stirring. II) Silica-based MIP synthesis: different molar ratios of TEOS, APTES, PTES, and UPTMS were added to the Glut-SP suspension with immobilized epitopes and incubated at 80 °C for 80 min. III) Template removal: MIP particles were washed six times with HCl solution (30 mL, 0.1 mol L−1). The NIP was synthesized under identical conditions without the addition of the epitope template.
Binding Studies: The binding studies were performed as follows: MIP or NIP (10 mg) were suspended in PBS buffer (2 mL, pH 7.4) and kept under stirring for 10 min, followed by the addition of the peptide solution (50 µL, 1 mg mL−1). The mixture was kept under constant stirring on a rocking platform for 1 h at room temperature. The particles were separated by centrifugation at 5500 rpm for 10 min. The supernatant was analyzed in a UV/Vis spectrophotometer.
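In batch rebinding assays like the one above, the amount bound is usually obtained from supernatant depletion, Q = (C0 − Cf)·V/m. This standard depletion formula is an assumption on our part (the text does not state it), and the concentrations below are hypothetical.

```python
# Binding capacity from supernatant depletion: Q = (C0 - Cf) * V / m.
# Assumed standard formula; values below are hypothetical examples.
def binding_capacity(c0_mg_ml, cf_mg_ml, volume_ml, mass_mg):
    """Bound peptide per mass of polymer (mg peptide per mg MIP/NIP)."""
    return (c0_mg_ml - cf_mg_ml) * volume_ml / mass_mg

# 50 uL of 1 mg/mL peptide in ~2 mL gives C0 ~ 0.025 mg/mL with 10 mg polymer.
q = binding_capacity(0.025, 0.010, 2.05, 10.0)
print(f"{q:.6f} mg peptide per mg polymer")
```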
For the pseudovirus capture assay, VeroE6 cells were seeded in 96-well plates one day prior (6000 cells/well). Pseudovirus was then added to the particles at twofold dilution and incubated for 1 h on a rocking platform at room temperature. The particles were then separated by centrifugation at 5500 rpm for 10 min. The supernatant was then added to the VeroE6 cells at tenfold dilution. After an incubation period of 16-18 h, transduction efficiency was analyzed. For this, the supernatant was removed, and cells were lysed by incubation with Cell Culture Lysis Reagent (Promega) at room temperature. Lysates were then transferred into white 96-well plates, and FLuc activity was measured using a commercially available substrate (Luciferase Assay System, Promega) and a plate luminometer (Orion II Microplate Luminometer, Berthold). For analysis of raw values (RLU/s), the background signal of an uninfected plate was subtracted, and values were normalized to pseudovirus incubated in PBS only (without MIP/NIP).
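The normalization step above (background subtraction, then scaling to the PBS-only control) can be sketched as follows. The RLU readings are hypothetical, and computing an IF as a ratio of captured fractions is our assumption for illustration; the text does not define the IF formula.

```python
# Background-subtract raw luminescence (RLU/s) and normalize to the
# PBS-only control, as described in the assay readout above.
def normalize(rlu, background, rlu_pbs):
    """Residual infectivity relative to the PBS-only control."""
    return (rlu - background) / (rlu_pbs - background)

background, rlu_pbs = 50.0, 10050.0     # hypothetical readings
rlu_mip, rlu_nip = 2050.0, 8050.0       # hypothetical readings

rel_mip = normalize(rlu_mip, background, rlu_pbs)   # 0.2 -> 80% captured
rel_nip = normalize(rlu_nip, background, rlu_pbs)   # 0.8 -> 20% captured

# Assumed IF definition: ratio of the fractions captured by MIP vs. NIP.
imprinting_factor = (1 - rel_mip) / (1 - rel_nip)
print(round(imprinting_factor, 2))  # 4.0
```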
Changes in Neural Connectivity and Memory Following a Yoga Intervention for Older Adults: A Pilot Study
Background: No study has explored the effect of yoga on cognitive decline and resting-state functional connectivity. Objectives: This study explored the relationship between performance on memory tests and resting-state functional connectivity before and after a yoga intervention versus active control for subjects with mild cognitive impairment (MCI). Methods: Participants ( ≥ 55 y) with MCI were randomized to receive a yoga intervention or active "gold-standard" control (i.e., memory enhancement training (MET)) for 12 weeks. Resting-state functional magnetic resonance imaging was used to map correlations between brain networks and memory performance changes over time. Default mode networks (DMN), language and superior parietal networks were chosen as networks of interest to analyze the association with changes in verbal and visuospatial memory performance. Results: Fourteen yoga and 11 MET participants completed the study. The yoga group demonstrated a statistically significant improvement in depression and visuospatial memory. We observed improved verbal memory performance correlated with increased connectivity between the DMN and frontal medial cortex, pregenual anterior cingulate cortex, right middle frontal cortex, posterior cingulate cortex, and left lateral occipital cortex. Improved verbal memory performance positively correlated with increased connectivity between the language processing network and the left inferior frontal gyrus. Improved visuospatial memory performance correlated inversely with connectivity between the superior parietal network and the medial parietal cortex. Conclusion: Yoga may be as effective as MET in improving functional connectivity in relation to verbal memory performance. These findings should be confirmed in larger prospective studies.
INTRODUCTION
The global population is aging at a rate unprecedented in human history. This increase in aging, whereby over 2 billion people will be ≥ 60 years of age by 2050 [1], will inevitably be paralleled by a rise in age-related cognitive impairments. The prevalence of mild cognitive impairment (MCI) has been noted to range between 10-20% in population-based studies of older adults [2]. Individuals with MCI have a 2.5-fold higher risk of developing all-cause dementia, and MCI is often a precursor to Alzheimer's disease (AD) [2]. MCI is thought to represent a continuum between healthy aging and AD [3], and can thus serve as a model for preventive interventions. As such, there is increasing interest in the neural circuits and other factors that affect and predict progression to AD, as well as in novel interventions.
Yoga is one such technique that is rapidly gaining interest and popularity in the West (although it has been practiced in the East for thousands of years); according to surveys, it is used mainly for stress reduction by millions of people. For example, in England the proportion of the population practicing yoga rose from 0.46 to 1.11% between 1999 and 2008 [4], and 2012 data suggest that 9.5% of the US population uses yoga therapies [5]. New data now suggest that mind-body interventions, including yoga, tai chi, mindfulness meditation, and qi gong, have promising effects on cognitive issues related to aging [6][7][8]. A recent study by Gothe et al. [9] compared the effects of Hatha yoga and a stretching-strengthening control on cognition over 8 weeks in 118 community-dwelling older adults, and found significantly improved executive function, working memory, and efficiency of mental-set shifting and flexibility in the yoga group compared with the control group. A study by Hariprasad et al. [10] compared yoga with waitlist controls over 6 months among 87 residents of nursing homes in India; individuals in the yoga intervention group showed marked improvements in an array of cognitive domains compared to the wait-list controls.
However, no studies to date have examined functional brain connectivity in relation to memory following a yoga intervention among older adults [11]. In a previous pilot study using Kirtan Kriya (KK, a meditation that involves chanting mantra, hand movements, and visualization), older family dementia caregivers showed significant improvements in cognition, memory, and brain metabolism compared to a relaxation control condition [12]. A pilot study by Wells et al. [13] explored the effect of mindfulness-based stress reduction (MBSR) versus usual care on neural outcomes among 14 MCI subjects. They examined resting-state functional magnetic resonance imaging (rs-fMRI) and structural MRI data around the default mode network (DMN) and hippocampus, and found enhanced DMN activity and less bilateral hippocampal atrophy. These results are highly relevant for subjects with MCI and AD, which typically affects the DMN and hippocampus very early in the disease process; indeed, these regions have been used as early markers of AD [14].
Memory enhancement training (MET) is another technique that has been increasingly explored for the prevention of cognitive decline, and is thus considered a 'gold-standard' and rigorous control for studies of cognitive impairment. The goal of MET is to optimize cognitive function to support individuals' life functioning and quality of life [15]. There is evidence suggesting that MET promotes significant improvements in cognitive performance among healthy older adults [16]; however, there is no clear consensus on the mechanisms of action underlying the reported benefits [17]. A number of systematic reviews have been conducted on the effects of MET in individuals with MCI. For example, one review by Jean et al. [18] analyzed 15 studies and found effects on memory and quality of life. Simon et al. [19] explored this field for amnestic MCI patients and found effects on cognitive performance and subjective perceptions of cognition. No studies to date have explored the effect of MET on rs-fMRI.
To our knowledge, yoga has never been compared to MET in a direct head-to-head clinical trial, nor with in vivo imaging biomarkers. In this preliminary study of KK meditation (with weekly Kundalini yoga classes as boosters) versus MET, we explored associations between memory improvements and functional plasticity in brain networks relevant for memory. Our primary focus was on the relationship between changes in neural functional connectivity and changes in memory performance.
METHODS
Data were collected for the "Memory Training Versus Yogic Meditation Training in Older Adults with Subjective Memory Complaints and MCI" study at UCLA from 2013 to 2015 (NCT01983930). This 12-week randomized trial examined the cognitive effects of MET versus Kundalini yoga plus Kirtan Kriya (KY+KK) in older adults with memory concerns and a CDR score of 0.5.
Participants
Participants were recruited via advertisements from UCLA outpatient clinics and the UCLA Longevity Center Program, and from the community. This study was approved by the UCLA Institutional Review Board (IRB). All participants underwent IRB-approved informed consent procedures prior to enrolling in the study.
Inclusion criteria were: 1) age ≥ 55 years; 2) subjective memory complaints; 3) Clinical Dementia Rating (CDR) Scale score of 0.5 [20]; 4) sufficient English proficiency at the 8th grade level or higher, as determined by the word reading subtest of the Wide Range Achievement Test-4 [21], to participate in MET; and 5) capacity to provide informed consent.
Exclusion criteria included: 1) current or past Axis I psychiatric disorders, or recent unstable medical or neurological disorders; 2) any disabilities preventing participation in the MET or KY+KK conditions (e.g., severe visual or hearing impairment); 3) insufficient English proficiency; 4) a diagnosis of dementia per the DSM-IV/5; 5) Mini-Mental Health Examination (MMSE) [22] score of 24 or below; 6) use of psychoactive medications; 7) participation in a psychotherapy that involves cognitive training; and 8) participants with prior or current training in yoga.
Randomization and blinding procedures
After all baseline test results were reviewed and eligibility criteria were confirmed, subjects were randomized to either the MET or KY group using a computer-generated random assignment scheme, which assigned subjects in a 1:1 ratio to each group in blocks of 8-10 subjects. All groups were called "wellness and mental stimulation" groups. All behavioral raters, the principal investigator, and all statisticians and data managers were blind to group assignment, and subjects were asked not to disclose their group assignment to the raters. No unblinding occurred in the process of assessment.
Neuropsychological assessment schedule
Participants were assessed at baseline (pre-treatment) and at 12 weeks (post-treatment). While an extensive neuropsychological battery was administered for the full trial, the focus of this research was to identify the neural networks associated with memory improvements resulting from a meditation (KK) and MET among older individuals with MCI. The following memory tests were used: a verbal list-learning measure, the Hopkins Verbal Learning Test-Revised (HVLT-R) [32] (20-min delayed recall raw scores), and a visuospatial memory measure, the Rey-Osterrieth Complex Figure Test (Rey-O) (30-min delayed recall raw scores) [33]. Only HVLT and Rey-O delayed recall change scores were correlated with connectivity changes in this study, to compare the effects of the interventions on network connectivity specifically associated with long-term memory.
Yoga intervention
Participants randomly assigned to the KK meditation group received 60-min weekly KY classes and a 12-min daily KK meditation as a homework assignment. The structure of the 60-min weekly yoga sessions was the same throughout and standard for KY classes conducted by a certified KY teacher. The content and structure of the class did not differ from one week to another and contained the following elements: 1) tuning in (5 min); 2) warm-up (10 min); 3) breath techniques ("Pranayama") (10 min); 4) Kriya (20 min); 5) meditation (11 min); 6) rest ("Shavasana") and closing (4 min). KK was taught during the class using standardized CDs for the 12-min meditation, and standardized handouts and CDs were given to each participant for home practice. This protocol is standard for KK and KY practice and has been utilized in previous studies with older adults [34]. The home practice consisted of a brief (3-5 min) warm-up followed by the 12-min yogic practice, an ancient chanting meditation (KK), performed every day at the same time for a total of 12 weeks, using a previously tested protocol. The meditation involves repetitive finger movements (mudras) and chanting of the mantra "Saa, Taa, Naa, Maa" (meaning "Birth, Life, Death, and Rebirth"), first aloud, then in a whisper, then silently, for a total of 11 min, followed by 1 min of final deep-breathing relaxation accompanied by the visualization of light [35]. Adverse events were monitored at each visit using the UKU Side Effect Rating Scale [36].
Memory enhancement training
Developed at the UCLA Longevity Center by Dr. Ercoli and colleagues, MET involves a scripted curriculum for trainers and a companion workbook for participants. The standard detailed protocol for the MET program is based on evidence-based techniques that use verbal and visual association strategies and practical strategies for memory compensation [16,37]. MET is manualized and includes several components common to effective memory training programs [16], such as: (1) education about memory; (2) preliminary instruction in the basic elements of memory strategies (i.e., "pre-training"); (3) instruction in specific memory strategies; (4) home practice with logs to track activity; (5) addressing non-cognitive factors such as self-confidence, anxiety, and negative expectations; and (6) small (i.e., about 10 persons) groups and short (60-min) sessions. The curriculum is divided into 12 weekly sessions that are organized in the same way, in which trainers: (1) document the number of participants attending the session, collect homework completion logs, and assess engagement in alternative treatments; (2) review the previous session's homework to reinforce techniques; (3) teach new techniques and conduct in-class reviews and exercises; and (4) assign homework. Each session is devoted to learning and practicing techniques, and 15 min are reserved for reviewing homework. Specific techniques taught include: visual associative strategies for learning faces and names (adapted from McCarthy [38]); verbal associative techniques (such as the use of stories) to remember lists; organizational strategies (categorizing items on a grocery list); learning memory habits to recall where one places items and what one has done in the recent past (e.g., locking doors, turning off appliances); and how to remember future tasks (i.e., appointments).
Image acquisition
Resting-state (rs) fMRI data were collected with a 3T TIM Trio scanner (Siemens AG, Munich and Berlin, Germany) at baseline and at 12 weeks (immediately post-intervention). Participants' heads were positioned comfortably within a 32-channel head coil, and head motion was minimized with firm cushions. We instructed participants to close their eyes and stay awake during image acquisition. Rs-fMRI images were acquired for 5 min 41 s with a multi-band gradient-echo echo-planar imaging (EPI) sequence sensitive to BOLD contrast effects. We acquired 275 contiguous EPI resting-state volumes with the following parameters: repetition time 1.24 s, echo time 38.2 ms, flip angle 65°, field of view 21.2 × 21.2 cm², acquisition matrix 118 × 118, 1.8 mm³ isotropic voxels (no gap), 78 slices, and 6 bands. We also acquired anatomical images with a 3-dimensional MPRAGE sequence (acquisition matrix 256 × 256 with 1-mm-thick contiguous slices) for co-registration with the rs-fMRI data.
Image preprocessing
Brain imaging data were pre-processed using the FMRIB Software Library (FSL, www.fmrib.ox.ac.uk/fsl) for motion correction, high-pass filtering (0.01 Hz), image normalization, and 5-mm³ Gaussian spatial smoothing. MELODIC (Multivariate Exploratory Linear Optimized Decomposition into Independent Components, a tool of FSL) was used to remove significant brain-motion, scanner, and physiological artifacts using independent component analysis (ICA). The processed functional data from all participants were temporally concatenated to form a 4-dimensional data set, which was decomposed into group-level independent components (ICs) using ICA as implemented in MELODIC. The MELODIC automated dimensionality estimate was used to determine the number and order of the ICs [39]. Twenty-five ICs were identified with this method. Each component includes brain structures that share the same temporal pattern of signal after mixture modeling was applied. The dual regression approach was subsequently used to back-reconstruct individual-specific connectivity maps associated with each group-level component, which has been shown to be an effective and reliable approach to the analysis of resting-state fMRI data [40]. This approach yielded 36 ICs; 26 of these overlapped grey matter, were considered biologically plausible, and were retained for subsequent statistical analysis (described below).
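The dual-regression back-reconstruction described above is, at its core, a two-stage least-squares procedure: group spatial maps are regressed against each subject's data to obtain subject-specific time courses, and those time courses are then regressed against the data to obtain subject-specific spatial maps. A minimal NumPy sketch with toy dimensions (not the actual data, and omitting the variance normalization FSL applies between stages):

```python
import numpy as np

def dual_regression(data, group_maps):
    """Back-reconstruct subject-specific network maps (dual regression).

    data:       (n_timepoints, n_voxels) one subject's rs-fMRI, demeaned
    group_maps: (n_components, n_voxels) group-level ICA spatial maps
    Returns (time_courses, subject_maps).
    """
    # Stage 1: spatial regression -> subject-specific time courses,
    # shape (n_timepoints, n_components).
    time_courses, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)
    time_courses = time_courses.T
    # Stage 2: temporal regression -> subject-specific spatial maps,
    # shape (n_components, n_voxels).
    subject_maps, *_ = np.linalg.lstsq(time_courses, data, rcond=None)
    return time_courses, subject_maps

# Toy check: data built from known maps and time courses is recovered.
rng = np.random.default_rng(0)
maps = rng.standard_normal((3, 500))
tcs = rng.standard_normal((100, 3))
data = tcs @ maps
tc_hat, maps_hat = dual_regression(data, maps)
print(np.allclose(maps_hat, maps, atol=1e-6))  # → True
```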
Statistical analysis
The two groups were compared on all demographic and clinical measures at baseline to verify that the randomization procedures were effective. We used Fisher's exact tests for categorical measures and Wilcoxon-Mann-Whitney tests for continuous measures. Changes in the clinical and memory measures were examined between groups using Wilcoxon-Mann-Whitney tests and within groups using Wilcoxon signed-rank tests. A two-tailed significance level of 0.05 was used for all inferences.
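The group comparisons described above map directly onto standard SciPy routines; a minimal sketch with made-up change scores (hypothetical data, not the trial's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical memory change scores for the two arms.
yoga_change = rng.normal(2.0, 1.5, size=14)
met_change = rng.normal(1.5, 1.5, size=11)

# Between-group comparison of change scores (Wilcoxon-Mann-Whitney).
u_stat, p_between = stats.mannwhitneyu(yoga_change, met_change,
                                       alternative="two-sided")

# Within-group change (Wilcoxon signed-rank test against zero change).
w_stat, p_within = stats.wilcoxon(yoga_change)

# Baseline balance of a categorical variable (e.g., sex) via Fisher's
# exact test on a 2x2 table: [[yoga_F, yoga_M], [met_F, met_M]].
odds, p_fisher = stats.fisher_exact([[8, 6], [6, 5]])

print(0.0 <= p_between <= 1.0, 0.0 <= p_within <= 1.0)  # → True True
```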
The present study focused primarily on four ICs for analysis, based on the group-level ICs relevant to the tests of long-term memory: the DMN, posterior DMN, language network, and superior parietal network. For each IC, we analyzed correlations between voxel-wise changes in network connectivity and changes in memory performance across all participants (using years of education, age, sex, and days of training duration as covariates). These analyses were restricted to component-specific masks generated by thresholding the group-level IC probability image to voxels having a 0.5 or higher probability of contributing to each IC map. Voxel-wise thresholds were chosen at z > 2.3, p < 0.01, and corrected for cluster size using Random Field Theory at p(corr) < 0.05. Finally, we conducted post-hoc region-of-interest (ROI) analyses, assessing correlations for each intervention group (yoga and MET) separately between changes in functional connectivity and changes in memory scores. In these post-hoc ROI analyses, outliers were identified using Cook's distance (d) and removed as appropriate.

RESULTS

Figure 1 presents the Consolidated Standards of Reporting Trials (CONSORT) flow diagram. Groups were statistically equivalent across all measures at baseline. Table 1 presents the baseline clinical and demographic characteristics of the two intervention groups. Twenty-five individuals in total were involved in the study: 14 in the yoga group (mean age 67.1 ± 9.5 years) and 11 in the MET group (mean age 67.8 ± 9.7 years).
Change scores did not differ significantly between the two groups on most of the clinical or memory measures; however, the yoga group improved significantly in depression (GDS) and in visuospatial memory (Rey-O delayed recall). The clinical improvement in GDS for the yoga group was only minimal (baseline 7.5 (5.1) vs. follow-up 3.9 (2.5); p = 0.01).
In terms of functional connectivity, we focused our analyses on four components related to long-term memory (the DMN, posterior DMN, language network, and superior parietal network). Results showed significant correlations between changes in connectivity and changes in long-term declarative memory performance for both groups after training (p(corr) < 0.05). These components are described in detail in the sections below.
Default mode network findings for yoga and memory enhancement training
In our analyses, we identified two DMNs. The first included regions most typically associated with the DMN, including the precuneus, PCC, ACC, medial frontal cortex (MFC), and hippocampus. The second DMN was more posterior and included precuneus, PCC, lateral parietal cortex, and hippocampus. No significant effects were identified in this latter posterior DMN. Results within the more typical DMN are discussed below.
In analyses of the DMN (Fig. 2), improvement in HVLT delayed recall in both the yoga and MET groups correlated with greater connectivity in two anterior clusters, the pregenual ACC and the frontal medial cortex (FMC). ROI analyses confirmed the effects were present in both groups for the ACC (R14 = 0.84, p < 0.001 for yoga subjects; R11 = 0.73, p = 0.011 for MET subjects). The correlations were still statistically significant after removal of an outlier in the yoga group (Cook's distance = 0.81) (R13 = 0.62, p = 0.024 for yoga subjects). Correlations were also significant for both groups in ROI analyses of the FMC (R14 = 0.91, p < 0.001 for the yoga group; R11 = 0.64, p = 0.035 for the MET group). Similarly, the FMC correlation remained significant after removal of an outlier in the yoga group (Cook's distance = 2.3) (R13 = 0.76, p = 0.002 for the yoga group). See Fig. 2.

Increased HVLT delayed recall positively correlated with increased DMN connectivity within two posterior clusters: the PCC and the left lateral occipital cortex. In ROI analyses, this positive correlation was present for both groups in the PCC (R14 = 0.78, p = 0.001 for the yoga group; R11 = 0.68, p = 0.02 for the MET group). However, the correlation in the PCC was not significant after removal of an outlier in the yoga group (R13 = 0.47, p = 0.106) (Cook's distance = 0.86). Increased HVLT delayed recall was also correlated with increased connectivity within the left lateral occipital cortex for both groups in ROI analyses (R14 = 0.81, p < 0.001 for the yoga group; R11 = 0.77, p = 0.005 for the MET group). This correlation remained borderline significant after removal of an outlier in the yoga group (R13 = 0.51, p = 0.077).

An additional cluster in the dorsolateral prefrontal cortex (middle frontal gyrus) also showed increased connectivity in association with increased HVLT delayed recall scores. ROI analyses indicated the effect was present in both groups (R14 = 0.87, p < 0.001 for the yoga group; R11 = 0.68, p = 0.02 for the MET group). The correlation remained significant after removal of an outlier for the yoga group (R13 = 0.62, p = 0.025) (Cook's distance = 3.1).
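The post-hoc ROI procedure (Pearson correlation plus outlier screening with Cook's distance from a simple linear fit) can be sketched in NumPy. The data below are synthetic, and the 4/n cutoff is one common convention, not necessarily the one used in the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between connectivity and memory changes."""
    return float(np.corrcoef(x, y)[0, 1])

def cooks_distance(x, y):
    """Cook's distance for each point in a simple linear regression
    of y (memory change) on x (connectivity change)."""
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    h = np.diag(H)                               # leverages
    resid = y - H @ y
    p = X.shape[1]
    mse = resid @ resid / (len(x) - p)
    return resid**2 / (p * mse) * h / (1 - h)**2

# Toy data along y ≈ 0.8x with one gross outlier injected at the end.
rng = np.random.default_rng(2)
x = rng.standard_normal(14)
y = 0.8 * x + 0.1 * rng.standard_normal(14)
x[-1], y[-1] = 3.0, -3.0
d = cooks_distance(x, y)
keep = d < 4 / len(x)                            # common cutoff
print(d.argmax() == len(x) - 1)                  # → True
print(pearson_r(x[keep], y[keep]) > pearson_r(x, y))  # → True
```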
Language network findings for yoga and memory enhancement training
The language resting-state network included the bilateral frontal orbital cortex, superior frontal gyrus, left middle and inferior frontal gyrus, dorsal anterior cingulate cortex, left middle and superior temporal gyrus, and angular gyrus. Subjects showed positive correlations between changes in long-term verbal memory performance and altered functional connectivity in one cluster, the left inferior frontal gyrus (IFG). ROI analyses indicated that this positive correlation was present for both groups (R14 = 0.82, p < 0.001 for the yoga group; R11 = 0.55, p = 0.079 for the MET group). However, the correlation was not significant after removal of an outlier for the yoga group (R13 = 0.42, p = 0.15). See Fig. 3.
Superior parietal network findings for yoga and memory enhancement training
The superior parietal network (known for its role in long-term and working memory [41]) included the bilateral PCC, precuneus, precentral and postcentral gyri, and parietal operculum cortex. Within this network, a single cluster near the precentral and postcentral gyri exhibited a negative relationship between changes in functional connectivity and changes in long-term visuospatial memory performance. In ROI analyses, both the yoga and MET groups showed significant effects in this cluster (R14 = -0.59, p = 0.028 for the yoga group; R11 = -0.73, p = 0.011 for the MET group). See Fig. 4.
DISCUSSION
The present study was the first to examine changes in neural connectivity and memory associated with a yoga intervention (specifically KK) and MET among a group of elderly individuals experiencing MCI. Overall, we found comparable changes for both yoga and MET in neural connectivity networks associated with memory performance.
Both the yoga and MET groups showed resting-state brain activity changes reflecting improvements in memory. Specifically, results showed increased connectivity within the DMN and the language network in association with improved verbal memory performance for both the yoga and MET groups. Changes in the superior parietal network were negatively correlated with improvements in memory recall. We discuss these findings in detail below.
The DMN is a network of regions showing synchronized activity patterns when the brain is at rest, with decreased activity when the mind is engaged with the external environment [42,43]. The DMN includes areas in the medial prefrontal cortex, the PCC, the precuneus, and medial temporal lobe structures including the hippocampus [44,45]. Evidence suggests involvement of the DMN in episodic memory retrieval, prospective memory encoding, social cognition, self-referential processing (including self-prospecting and internal monitoring), autobiographical memory retrieval, future planning, and theory of mind [45][46][47]. Some research suggests that the DMN is the main rs-fMRI network affected in aging [48][49][50][51], showing reductions in connectivity between anterior and posterior nodes [48,49,[52][53][54][55][56]. Decreased DMN connectivity has been noted in AD [57] and involves areas affected by cerebral atrophy, reduced metabolism, and amyloid in AD and MCI [58]. Alterations of DMN connectivity have previously been attributed to inefficient reallocation of brain resources. From a neuroanatomical perspective, this may be due to poorer integrity of the cingulum bundle, which connects posterior to anterior and temporal DMN nodes [59]. Poorer antero-posterior connectivity was recently correlated with visual and verbal memory scores in a study of 116 healthy elders [55].
Our study findings suggest that yoga may be helpful in enhancing memory recall, specifically visual memory encoding. However, we acknowledge that practice effects may factor into observed improvements. Moreover, these findings suggest that improved memory recall is associated with increased DMN connectivity in anterior, posterior and frontal medial areas. Relevant to these findings are a number of other studies. A pilot study by Wells et al. [13] explored the effect of MBSR versus usual care on 14 MCI subjects. They explored rs-fMRI and structural MRI data around the DMN and hippocampus, respectively. The study found enhanced DMN activity (increased functional connectivity between the PCC and medial prefrontal cortices and hippocampus), as well as lesser bilateral hippocampal atrophy. These results are promising given that these regions are associated with MCI and AD. Yoga is believed to exert its effect via lowering stress, lowering inflammation, enhancing neuroplasticity processes (e.g., production of brain derived neurotrophic factor), increasing antioxidant levels and increasing telomerase activity [12,[60][61][62][63][64].
A number of studies have explored the effect of meditative therapies on the DMN, supporting the present findings. For example, Taylor et al. [65] recently compared DMN functional connectivity between 13 experienced meditators and 11 beginner meditators, and found that experienced meditators had weaker functional connectivity between DMN regions involved in self-referential processing and emotional appraisal, but greater connectivity in regions associated with present-moment awareness. It is likely that present-moment awareness facilitates verbal memory, processing, and recall, as the individual is attentive to the moment. Brewer et al. [66] conducted a similar analysis of experienced meditators versus meditation-naïve controls across a number of different meditation types (i.e., concentration, loving-kindness, and choiceless awareness). Experienced meditators showed deactivation in the main nodes of the DMN (MFC and PCC) across all meditation types, as well as stronger coupling between the PC, dorsal ACC, and dorsolateral prefrontal cortices at both baseline and during meditation. This suggests enhanced efficiency of self-monitoring and cognitive control among experienced meditators. Given the complex nature of KK meditation (which involves chanting, hand movements, and visualization), we believe that it engages language, visual, and frontal networks important for self-regulation, resulting in enhanced verbal and visual memory as shown in the present study. Other proposed potential mechanisms act via stress reduction and increased global attention/awareness. Improved visuospatial memory performance correlated inversely with connectivity in the superior parietal network, which has been shown to be associated with attention, translation of visual to motor information, and working memory [41,67].
It is possible that these network connectivity changes reflect enhanced efficiency of connectivity between the relevant brain regions (i.e., between the bilateral posterior cingulate cortex, precuneus, precentral and postcentral gyri, and parietal operculum cortex). The less-wiring-more-firing hypothesis suggests that greater neuronal activity in older age compensates for impaired white-matter connectivity, i.e., that neuronal activity is over-recruited for functional compensation [68]. In this study, we suggest that yoga improves white-matter connectivity, thereby reducing neuronal activity, with the result of improved visuospatial memory performance. This hypothesis could be tested more directly with diffusion tensor imaging, which measures white-matter integrity.
Although there are several limitations, we believe the findings are promising, as we used a strong control group (MET) and found significant connectivity changes, in line with previous findings, even with a small sample size. The sample size was powered only for the rs-fMRI analyses exploring relationships between memory and functional connectivity, not for multi-domain effects on cognition. Additionally, we do not have long-term follow-up, so we were unable to explore cognitive decline towards dementia. Also, it is possible that the enhanced cognitive benefits and connectivity changes resulting from the KK yogic intervention were due to the 60 min of instruction per week, the 12 min per day of Kirtan Kriya meditation (shown to positively affect blood flow in the brain [34]), or a combination of these factors. However, as previous studies using KK meditation alone found activation patterns in line with those from the present research, it is unlikely that the weekly classes introduced a large deviation. Nevertheless, this is a fruitful area for future research, which may aim to parse out the effects of these various activities, or perhaps determine that weekly classes in addition to a daily meditative practice are recommended for optimal benefits.
CONCLUSION
The present study examining resting-state neural connectivity changes among individuals undergoing a KK yoga intervention versus MET showed that both were similarly effective in improving memory function (namely memory recall) and functional connectivity related to verbal, attentional, and self-regulatory performance. The improvement in verbal and visuospatial memory performance may be explained by the components of the chanting mantra meditation (Kirtan Kriya) with visualization, which may strengthen specific verbal and visual skills and enhance global awareness and attention.
ACKNOWLEDGMENTS
Alzheimer's Research and Prevention Foundation provided funding for this study. Other sources of funding: NIH grants MH077650 and MH086481, contracts from the Forest Research Institute to Dr. Lavretsky.
|
2018-04-03T00:30:55.036Z
|
2016-04-05T00:00:00.000
|
{
"year": 2016,
"sha1": "a963268f427e0fca3f4aed3aa4b62a9bc5b87995",
"oa_license": "CCBYNC",
"oa_url": "https://content.iospress.com/download/journal-of-alzheimers-disease/jad150653?id=journal-of-alzheimers-disease/jad150653",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b190d1eb2bd7388ee10150f4905855a97c7c0231",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
8089524
|
pes2o/s2orc
|
v3-fos-license
|
How does the self-reported clinical management of patients with low back pain relate to the attitudes and beliefs of health care practitioners? A survey of UK general practitioners and physiotherapists
Guidelines for the management of low back pain (LBP) have existed for many years, but adherence to these by health care practitioners (HCPs) remains suboptimal. The aim of this study was to measure the attitudes, beliefs and reported clinical behaviour of UK physiotherapists (PTs) and general practitioners (GPs) about LBP and to explore the associations between these. A cross-sectional postal survey of GPs (n = 2000) and PTs (n = 2000) was conducted that included the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS.PT), and a vignette of a patient with non-specific LBP (NSLBP) with questions asking about recommendations for work, activity and bed rest. Data from 1022 respondents (442 GPs and 580 PTs) who had recently treated patients with LBP were analysed. Although the majority of HCPs reported providing advice for the vignette patient that was broadly in line with guideline recommendations, 28% reported they would advise this patient to remain off work. Work advice was significantly related to the PABS.PT scores, with higher biomedical (F1,986 = 77.5, p < 0.0001) and lower behavioural (F1,981 = 31.9, p < 0.001) scores associated with advice to remain off work. We have demonstrated that the attitudes and reported practice behaviour of UK GPs and PTs for patients with NSLBP are diverse. Many HCPs held the belief that LBP necessitates some avoidance of activities and work. The attitudes and beliefs of these HCPs were associated with their self-reported clinical behaviour regarding advice about work. Future studies need to investigate whether approaches aimed at modifying these HCP factors can lead to improved patient outcomes.
Introduction
Low back pain (LBP) is common, affecting 38% of adults in any one year, of whom 1 in 4 experience significant disability [37]. Only 25% of patients consulting in primary care will be symptom free 12 months later [18]. The last two decades have also seen dramatic rises in work loss and sickness benefit payments, attributed to recurrent and persistent LBP [16,36].
Guidelines for the clinical management of patients with LBP encourage health care practitioners (HCPs) to advise patients to stay active, avoid bed rest, stay at or return to work, and stress simple messages about self-management [3,31,45,47,49,50]. Previous studies have identified that HCPs do not always follow guideline recommendations for LBP [10,20,24,26]. Despite the abundance of guidelines for practice, the management of LBP poses considerable challenges and frustrations for both patients and practitioners [14], and it is increasingly clear that it is insufficient to study patient factors alone [25,48].
A potentially important but relatively unexplored influence on patients' pain experiences is the attitudes and beliefs of the HCPs with whom they come into contact. HCPs are frequently asked to provide advice and recommendations about physical activities, work, and rest, and HCPs' attitudes and beliefs may be an integral part of the health care process, influencing the success or failure of treatment. HCPs hold a range of attitudes and beliefs about back pain [17,19,28,32,39,42-44], and these attitudes appear to be associated with the work and activity recommendations that HCPs give to patients [17,28,42,44].
In the UK, approximately 98% of the population is registered with a National Health Service general practitioner (GP) [13]. GPs serve as gatekeepers to secondary care, selecting and referring patients for specialist investigations and treatment services. Physiotherapy is one of the most common services to which patients are referred, or which patients seek out privately [36], and LBP accounts for more than half of physiotherapists' workload in the UK [24].
Few studies have explored HCP factors in the UK, but it has been shown that many physiotherapists (PTs) continue to advise limitations of work and activity levels, despite identifying when patients with LBP are at risk of chronicity [11] and an important proportion of therapists continue treating patients with LBP even when they fail to improve [41]. The aim of this study was to measure, in national random samples, the attitudes, beliefs and reported clinical behaviour of GPs and PTs about LBP, explore their associations and evaluate the implications for both clinical practice and future research.
Design and setting
We conducted a cross-sectional, nationwide postal survey of UK GPs and PTs, involved in the management of patients with LBP, between April and November 2005. Ethical approval for the study was obtained from the West Midlands Multi-centre Research Ethics Committee (MREC). Written consent was not sought from each participant for use of survey data, but consent of respondents was assumed if they completed and returned the questionnaire.
Questionnaire sample and mailing process
We used simple random sampling to obtain details of GPs (n = 2000) and PTs (n = 2000) from national databases (Binleys database for GPs, n = 46,000 GPs on the list; Chartered Society of Physiotherapy membership database, n = 32,000 PTs on the list). In the UK, all GPs working in the National Health Service are included on the Binleys database [7], which is produced in conjunction with the Royal College of General Practitioners. The Chartered Society of Physiotherapy (CSP) is the professional, educational and trade union body representing the UK's chartered physiotherapists and 98% of all PTs are members of the CSP.
A sample size calculation indicated that a sample of 900 responders (450 GPs and 450 PTs) was required to allow us to find a minimum difference of 10% in the proportion of respondents with 'helpful' to 'unhelpful' beliefs by important practitioner characteristics at a significance level of 0.05 and a power of 90% [2]. A questionnaire package containing the questionnaire, a cover letter, an information sheet and a prepaid envelope was mailed to each HCP. A single reminder was sent to all non-responders four weeks after the first mailing. In order to allow assessment of non-response bias within the survey estimates, a brief questionnaire was mailed to a random sample of non-responders. No incentives for completing the questionnaire were offered.
Questionnaire
A filter question was used to identify those HCPs who had treated at least one patient with non-specific LBP (NSLBP) in the previous six months, so that only respondents with recent experience of managing patients with LBP were included in the analysis.
Demographics and practice information
A number of demographic and practice questions, relevant to each profession, were included. Some items were pertinent to both professions: gender; years since qualification; postgraduate training in LBP; clinical interests/speciality and personal experience of back pain. Data gathered exclusively from GPs included whether they worked only in general practice and whether the practice was a single-handed or a group practice. Data gathered exclusively from PTs included how much of their clinical practice was based in the NHS, what proportion of their caseload was primary care patients, whether they worked alone or in a team, and grade of current job.
Attitudes and beliefs measure
The Pain Attitudes and Beliefs Scale (PABS.PT [28,39]) was included as a measure of HCPs' attitudes about LBP. This was selected following a systematic review of available tools for assessing the attitudes and beliefs of HCPs about LBP [12], in which the PABS.PT fared well on pre-defined quality criteria [34]. This tool was originally developed for use in physiotherapists, but more recently has been applied to a cohort of Dutch general practitioners [30]. In addition, the members of a multi-disciplinary clinical advisory group confirmed face and content validity of the PABS.PT for both GPs and PTs after recommending that the term 'therapy' was changed to 'treatment' in two of the items of the PABS.PT. The resulting minimally amended PABS.PT was used for both GPs and PTs.
The PABS.PT assesses the strength of treatment orientation on two subscales, 'biomedical' and 'behavioural'. The biomedical orientation is described as one in which the HCP believes in a biomechanical model of disease, where disability and pain are a consequence of a specific pathology within the spinal tissues and treatment is aimed at treating the pathology and alleviating the pain. The behavioural orientation is where the HCP believes in a biopsychosocial model of disease in which pain does not have to be a consequence of tissue damage, and can be influenced by social and psychological factors. We used the amended PABS.PT [28], which consists of 19 items, each rated on a six point Likert scale ('Totally disagree' = 1 to 'Totally agree' = 6), with ten items on the biomedical subscale (score range: 10-60) and nine on the behavioural subscale (score range: 9-54). Higher scores on each subscale indicate a stronger biomedical or behavioural treatment orientation, respectively.
Clinical behaviour measures
Clinical behaviour was elicited by asking the HCPs about diagnostic investigations and for their recommendations about work, activity levels, and bedrest, for a patient with NSLBP described in a vignette. The vignette described a patient with uncomplicated NSLBP who was not at work as a result of their symptoms (Appendix A). Vignettes have been shown to be a useful measure of clinicians' practice behaviour and a more accurate assessment of clinical behaviour than data extracted from case notes when measured against the gold standard of standardised patients [40].
The clinical behaviour question regarding work was as follows: ''The patient described in the vignette asks what your advice would be about her work. I would recommend this patient to: (Please tick the one response that best describes what you would recommend this patient to do)
a. Be off work until pain has completely disappeared
b. Return to part time or light duties
c. Be off work for a further . . . weeks (please state number of weeks)
d. Return to normal work
e. Be off work until pain has improved''
Responses for each of the work, activity and bedrest questions were subsequently classified by the authors as being 'strictly in line with guideline recommendations', 'broadly in line with guideline recommendations' and 'not in line with guidelines'. For the work question given above, we considered option 'd' to be strictly in line with guideline recommendations, option 'b' to be broadly in line with guideline recommendations and options 'a', 'c' and 'e' to be not in line with guideline recommendations. This classification was based on a previously published expert consensus carried out on similar practice recommendations in a postal survey of physiotherapists, osteopaths and chiropractors in the UK [22].
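The mapping from ticked response options to guideline-adherence categories described above can be sketched as a simple lookup; the function name and dictionary below are illustrative, not part of the authors' analysis code.

```python
# Classification of the work-advice options (a-e) from the vignette question
# into the three guideline-adherence categories described in the paper.
WORK_ADVICE_CATEGORIES = {
    "d": "strictly in line",   # return to normal work
    "b": "broadly in line",    # return to part time or light duties
    "a": "not in line",        # off work until pain has completely disappeared
    "c": "not in line",        # off work for a further N weeks
    "e": "not in line",        # off work until pain has improved
}

def classify_work_advice(option: str) -> str:
    """Map a ticked response option to its guideline-adherence category."""
    return WORK_ADVICE_CATEGORIES[option.lower()]
```

The same three-category scheme was applied to the activity and bedrest questions, with the option-to-category assignment based on the published expert consensus [22].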
Brief questionnaire
The brief questionnaire sent to a sample of non-responders contained the filter question to ensure that respondents recently involved in the management of patients with LBP could be identified. Alongside key demographic questions, we included four items from the PABS.PT (two from each sub-scale chosen on the basis of factor loadings described by the tool's developers and data from a pilot study), the vignette patient and the clinical behaviour questions related to work, activity and bedrest.
Statistical analysis
Scores for the PABS.PT were calculated according to methods specified by the questionnaire developers, i.e. a simple summation of the items in each subscale [39]. No method for dealing with missing data on this measure has been published so a pragmatic decision was made that if one value was missing from a subscale, a mean score based on the remaining values was substituted. If more than one value was missing the score for the whole subscale was classed as missing. A Pearson's correlation coefficient was calculated between the scores on the two subscales of the PABS.PT as previous work has shown that they are not totally independent [28,39]. We used descriptive statistics to summarise, by professional group, demographic, and practice data for both subscales of the PABS.PT. In addition, in response to the reviewer's suggestions, we conducted a subgroup analysis of work, activity and bedrest recommendations for those respondents who had high biomedical scores and low behavioural scores and vice versa. Unless differences occurred by profession, analyses were performed on the combined GP and PT dataset.
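The subscale scoring and the pragmatic missing-data rule described above can be expressed compactly; this is a sketch of the stated rule (plain summation, single-item mean substitution), not the authors' actual analysis code.

```python
def pabs_subscale_score(item_responses):
    """Score one PABS.PT subscale (items rated 1-6; None marks a missing item).

    Per the scoring rule used in the paper: a plain sum of the items; if
    exactly one item is missing, substitute the mean of the remaining items;
    if more than one item is missing, the subscale score is classed as missing.
    """
    present = [v for v in item_responses if v is not None]
    n_missing = len(item_responses) - len(present)
    if n_missing == 0:
        return sum(present)
    if n_missing == 1:
        # substitute the mean of the remaining values for the missing item
        return sum(present) + sum(present) / len(present)
    return None  # more than one missing -> subscale score is missing
```

Applied to the ten biomedical items this yields scores in the 10-60 range, and to the nine behavioural items scores in the 9-54 range, as stated earlier.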
The relationship between attitudes and beliefs and clinical behaviour was examined using ANOVA to test for an overall relationship with clinical behaviour and, when appropriate, for a linear trend across clinical behaviour groups (strictly in line, broadly in line and not in line with guidelines). The effect of non-response was examined by comparing responses from all responders to the full questionnaire to those completing the brief questionnaire. All analyses were carried out using the Statistical Package for Social Scientists for Windows (SPSS Inc., Chicago, IL, version 13).
Results
The overall response rate was 38% (n = 1534), 22% (n = 443) for GPs and 55% (n = 1091) for PTs. Of the respondents, 580 PTs and 442 GPs reported treating at least one patient with LBP in the previous six months and were included in the analysis.
Characteristics of respondents
The demographic and professional characteristics of the respondents are summarised in Table 1. The majority of GPs worked exclusively in general practice, within group practices and had at least one specialist clinical interest. The majority of PTs worked within the NHS, with other HCPs, were of senior clinical grade or above, and had a patient caseload of more than 50% primary care patients. The PTs were qualified for a shorter length of time than GPs, were more likely to be female and to have postgraduate training in LBP.
Attitudes and beliefs
Scores for both of the PABS.PT subscales could be calculated for the majority of the 1022 responders (biomedical n = 1010, behavioural n = 1004). Mean (standard deviation, range) score for the biomedical subscale was 31.0 (6.4, 12-50) overall: GPs 30.9 (5.3); PTs 31.1 (7.2), and for the behavioural subscale was 33.0 (4.6, 15-48) overall: GPs 33.7 (4.2); PTs 32.5 (4.8). For both subscales and both professional groups, the mean observed scores were in the middle of the possible ranges. The Pearson's correlation coefficient (r = −0.38; p < 0.0001) showed a statistically significant level of dependence between the two subscales, suggesting that respondents who score higher on one subscale tend to score lower on the other subscale.
Diagnostic investigations
In response to the vignette patient, most HCPs reported that they would not want the patient described to have any diagnostic investigations. Of the GPs 33% (n = 142) reported that they would request at least one investigation, compared with 24% (n = 134) of PTs (Table 2). GPs were more likely to want laboratory tests and PTs were more likely to want an X-ray or special imaging procedure such as an MRI.
Clinical behaviour
The responses to the clinical behaviour questions were classified according to whether these were 'strictly in line', 'broadly in line' or 'not in line' with guideline recommendations; the responses and the classifications are summarised in Table 3. The majority of respondents reported advice that was either 'strictly in line' or 'broadly in line' with guideline recommendations. Very small proportions of respondents reported they would provide advice that was 'not in line' with guideline recommendations for activity and bedrest; however, this figure was considerably higher for recommendations regarding work, with 28% of respondents reporting that they would recommend the patient in the vignette to remain off work.
The summary of responses of the two subgroups of high biomedical and low behavioural scores (n = 187) and low biomedical and high behavioural scores (n = 137), compared to the total sample, is also presented in Table 3. The proportion of practitioners recommending that the patient in the vignette remain off work, i.e. not in line with guideline recommendations, was substantially higher in those with high biomedical and low behavioural scores (44.9%) than those with high behavioural and low biomedical scores (11.9%). Similar differences were also seen for recommendations regarding activity and bedrest.
Relationship between attitudes and beliefs and clinical behaviour
Given the very small proportion of respondents whose advice was 'not in line with guidelines' for both activity and bedrest, associations with the PABS.PT scores were not examined. Fig. 1 shows the distributions of the PABS.PT biomedical and behavioural subscale scores for each of the reported work recommendation groups. With increasing disparity with guidelines, biomedical scores increased (mean scores: 28.3, 30.6, 33.5) and behavioural scores decreased (mean scores: 34.1, 33.3, 31.8). These associations were shown to have a significant linear trend for both the biomedical (F1,986 = 77.5, p < 0.001) and behavioural (F1,981 = 31.9, p < 0.001) subscale scores.
Effect of non-response
Table 3. Recommendations, GPs and PTs combined, and subgroups, for work, activity and bedrest for the patient described in the vignette.

In order to assess the impact of non-response bias within the survey estimates, a brief questionnaire was mailed to a random sample of non-responders (GPs n = 414, PTs n = 243), and responses were received from 14% of GPs (n = 59) and 17% of PTs (n = 40). For the GPs, gender mix and years in practice were similar for those completing the full and brief questionnaires. For the PTs, those completing the brief questionnaire were slightly less experienced (mean of 12 years experience versus 15 years) and more likely to be male compared to the full questionnaire responders (25% vs. 19% male). Responses to both behavioural subscale PABS.PT items and one of the two items from the biomedical subscale were similar to those for the full questionnaire. Responders to the brief questionnaire, from both professions, were more likely to agree with the statement that 'patients with back pain should preferably practice only pain free movements', indicating a more biomedical orientation. The responses to the items regarding work and activity advice were similar for responders to the full and brief questionnaires. GPs responding to the brief questionnaire reported bedrest advice that was less in line with guideline recommendations than the responders to the full questionnaire (19.3% strictly in line with guidelines compared to 38.4%, respectively), whereas the PTs completing the brief questionnaire reported bedrest advice that was more in line with guideline recommendations than the initial responders (35.0% strictly in line with guidelines compared to 21.8%, respectively).
Main findings
This is the first national UK survey of LBP related attitudes, beliefs and reported clinical behaviour of GPs and PTs and results show that responses are diverse. The majority of respondents reported advice that was strictly or broadly in line with guideline recommendations about activity and bedrest, however, over a quarter of HCPs recommended that the vignette patient with NSLBP should remain off work. Reasons why adherence to guideline recommendations for work is lower than for activity and bedrest are unclear, but may be due to the complex nature of the clinical consultation, and previous studies have shown that GPs see sickness certification as a potential threat to the doctor-patient relationship [15,29]. The attitudes and beliefs of HCPs were significantly associated with reported work advice for the patient described, i.e. HCPs with stronger biomedical and weaker behavioural treatment orientations were more likely to report advice, regarding work, which was 'not in line with clinical guidelines'. The subgroup analysis supports this, although only a third of respondents could be categorized into these subgroups i.e. high biomedical and low behavioural scores on the PABS.PT or vice versa. The differences in the PABS.PT scores were small, and although statistically significant, no guidance is currently available to suggest whether these represent a clinically relevant difference.
A considerable proportion of HCPs in the UK continue to provide advice to patients about work that is not in line with guideline recommendations. The associations between attitudes, beliefs and reported clinical behaviour suggest that some HCPs continue to practice predominantly within a biomedical model, placing most importance on the severity of tissue damage when determining a patient's level of pain and functional disability. Others have adopted a more behavioural approach to management, embracing the notion that the level of pain and functional loss may be influenced by psychological and social factors in addition to biomechanical factors.
Comparison to other studies
HCPs in this study had similar attitudes and beliefs to therapists in the Netherlands [28], with Dutch therapists having similar mean biomedical scores (29.5 vs. 31.0), but slightly higher behavioural scores (35.6 vs. 33.0) on the PABS.PT. Direct comparison of subscale scores with studies using the original PABS.PT is not possible due to a different number of items [30,39].
The attitudes and beliefs of HCPs were significantly associated with reported work advice for the vignette patient. Respondents reporting advice 'strictly in line with guidelines' demonstrated stronger behavioural and weaker biomedical orientations than those reporting advice 'not in line with guideline recommendations'. Using a variety of measures, previous studies have demonstrated that advice to restrict work or activities is also associated with a biomedical treatment orientation [28], patho-anatomical focus of training courses [39], higher fear avoidance beliefs of HCPs [17,32,42] and a strong belief that pain and impairment are invariably linked [28,44]. Our study adds to this body of literature by showing a significant association between attitudes and beliefs and reported work advice in HCPs in the UK.
Implications for clinical practice and future research
The results suggest that the attitudes and beliefs of HCPs are linked to clinical practice and the recommendations provided to patients. These practitioner factors are thus part of the dynamic interaction within LBP care episodes, along with the LBP problem itself and the patient's own perceptions about their problem. This may help explain patient outcomes, although the mechanisms behind this are likely to be complex. It is probable that HCPs' attitudes and beliefs are expressed to patients in a variety of ways, with a range of possible consequences. By restricting activities and work, HCPs may reinforce patient's unhelpful illness perceptions and increase spinal vigilance. Alternatively, they may over-direct the patient by providing strict advice to perform only specific activities and exclude others, encouraging an over-reliance on the HCP [35], which may make it difficult to foster the patients' self-management skills, something recommended as part of best practice for patients with LBP.
The reported clinical behaviour of HCPs illustrates that the majority would provide advice that is strictly or broadly in line with guideline recommendations, however, nearly 30% reported they would advise the described patient to remain off work. Staying at work or an early return to work with NSLBP is recommended [50], as the longer someone is off work the likelihood of them returning steadily diminishes, with a 20% risk of long term disability for those off work for four to six weeks [51]. Although the management of LBP, in terms of advice about activity and bedrest, seems to be broadly in line with guideline recommendations, our results show that adherence about advising early return to work is suboptimal.
Attitudes and beliefs held by HCPs may help explain why implementation of current LBP guidelines has been slow and difficult [6,8,20,23,33]. Changing clinical behaviour is recognised to be a challenge [27]. Evidence from recent clinical trials suggests that although modest intervention strategies can result in moderate changes in reported adherence to guideline recommendations [8], this does not lead to a corresponding improvement in patient outcomes [9,21,30]. A better understanding of the attitudes and beliefs of HCPs, what influences these and how these relate to outcomes of patients with LBP is needed to inform development of future implementation strategies.
Future work should further test the psychometric properties of the PABS.PT to assess responsiveness and determine appropriate cut-offs for 'high' and 'low' scores on the subscales and what constitutes a clinically relevant change. Methods to assess HCP attitudes, beliefs and behaviours warrant further study. For example, the validity of using methods to measure implicit attitudes about LBP, such as those employing automatic responses, could be explored in an attempt to overcome potential social desirability bias in survey responses as HCPs become more aware of clinical guidelines.
Strengths and limitations
The strengths of this study include the large sample sizes, simple random sampling of UK GPs and PTs, use of a validated beliefs measure, and investigation of potential non-response bias. The response rate of GPs was low, but comparable to other postal surveys of GPs in the UK [4,5,38]. The sample size calculation took this into account and yielded the required sample size for the planned analyses. The response rate of PTs was in keeping with other studies [11,39,41]. Responses to the brief questionnaire were broadly similar to those completing the full questionnaire in terms of attitudes, and recommendations for work and activity. However, some differences in the advice for bedrest suggest that we cannot rule out non-response bias in our survey. Responses to one PABS.PT item showed a stronger biomedical treatment orientation for responders to the brief questionnaire. Also, GPs responding to the brief questionnaire reported advice for bedrest that was less in line with guideline recommendations than responders to the full questionnaire, so for GPs, where the potential for non-response bias is greatest, our survey may underestimate the strength of a biomedical treatment orientation and the numbers providing advice not in line with guideline recommendations.
This study captured self-reported behaviour rather than real clinical practice, which is very difficult to measure. To provide a context for the clinical behaviour questions we used a vignette of a patient with NSLBP, an approach shown previously to have acceptable validity [40,46]. Although we used established tools to assess attitudes, beliefs and clinical behaviours, there may be some overlap in the constructs they measure. We attempted to address this by the wording of instructions and the order of the tools within the questionnaire. The PABS.PT attitudes measure came first with instructions to respond to the general attitudinal type statements. The vignette and the behaviour questions came later with the instruction to consider the specific management of the patient described.
Conclusion
This study shows the diversity of the attitudes and self-reported practice behaviour of UK GPs and PTs for patients with NSLBP. Many HCPs believed LBP necessitates some avoidance of activities and the need to be off work. For a patient with a history of being off work since onset of LBP four weeks previously, over a quarter of HCPs recommended further time off work. The attitudes and beliefs of HCPs were associated with their advice about return to work. Future studies need to investigate the associations between HCP factors and patient outcomes, and test if approaches aimed at modifying attitudes, beliefs and clinical behaviours of HCPs can be successful.
Development of an exploration land robot using low-cost and Open Source platforms for educational purposes
In this paper we present the didactic experience of building a low-cost robot from sensors, actuators, general electronics and already available frameworks. The robot is controlled using commercial open-source platforms, namely the Arduino and the Raspberry Pi. The experience ranges from general conceptualization, through mechanical, electric and electronic design, to microcontroller programming and communications.
Introduction
Engineering physics can be defined as the direct application of mathematics and physics to the resolution of engineering problems. A direct example of application is robotics. Robots are devices composed of actuators, sensors and electronic circuits, which find their foundation in electrodynamics, classical mechanics and solid-state physics. Another important subject around robots that has a direct relationship with engineering physics is control theory, where physics and advanced mathematics are used to solve robot control problems. These two approaches make robots a good supporting tool for teaching and demonstrating physics in class. They motivate students to develop problem-solving skills and help them understand the real-life applications of the concepts they are learning.
A strong drawback of using robots as a tool to teach physics is their high cost, which ranges from $350 USD (Lego Mindstorms [1]) up to $10,100 USD (Robotino [2]). This can make them inaccessible to low-budget schools and universities. Many authors have tackled this problem in several ways, for example, with the use of hobbyist equipment for their projects [3-6].
In this paper we describe the experience of building an inexpensive robot, with the main objective of explaining the details and shortcuts so that the general idea can be implemented in any school. Even if the cost of this robot is not as low as some alternatives ($14 USD, KiloBot [7]), it gives students the experience of building a robot, an experience they would never get by experimenting only with a pre-assembled robotic kit.
Development platforms.
In order to create the systems that monitor and control the robot, we need microcontrollers and computers. For this purpose, we used already available development platforms. They were chosen for their low cost, availability, ease of use, large user base, support and example code, but mostly for their use of free and/or open-source hardware and software [8].
Raspberry Pi.
The Raspberry Pi is a $35 USD, credit-card sized computer that uses the Broadcom BCM2835 System-on-a-Chip. With a power consumption of 5.0 W and the equivalent computing power of the original X-Box, it supports video output in RCA and HDMI formats, and has an Ethernet port, two USB ports, up to 512 MB of RAM, a GPIO port and SD-card storage [9]. It runs Linux operating systems, with Raspbian (based on Debian) as the default option [9].
Arduino.
Arduino is a family of boards oriented to prototyping, based on open-source hardware and software [8]. Each board carries two microcontrollers. The principal microcontroller is where the user's programs are loaded and executed; it provides a set of digital and ADC inputs as well as digital and PWM outputs. The secondary microcontroller acts as a serial-to-USB converter that communicates with the computer. The IDE provides the means to write programs (referred to as sketches), compile them and upload them to the board [10].
In this project we used an Arduino Uno board and four extra ATMEGA328P microcontrollers preloaded with the Arduino bootloader. Many authors have used Arduino to operate their robots, usually employing the whole board in their project [3,4,6], with the inconveniences of loose connections, inefficient use of the ports, and increased space and cost. The present robot design requires four microcontrollers, making the use of four Arduino boards ($25 USD each) unsuitable. Our solution was to deploy the ATMEGA328P microcontrollers ($1.00 USD each) on standalone PCBs with resonators, supporting electronics and a power supply.
Architecture and modules
The robot developed in this project is shown in Figure 1, where the main components are highlighted. Figure 2 illustrates the main blocks of the robot's architecture. It shows the major components, modules and the means of communication between them. An ATMEGA328P microcontroller is used in every module except the Control module.
Frame and support
The robot is built on a recovered electric scooter whose front wheel was replaced with a welded Ackermann steering geometry [11] using two wheels taken from an old bicycle. The main stand was removed and four pillars were added to mount acrylic platforms that accommodate the modules, components and cables.
Steering and traction.
The displacement of the robot is achieved through the management of a traction motor that powers a rear wheel. The original electric motor, gears, chain and rear wheel were kept, but a rotary encoder for motion measurement, adapted from a PS/2 mouse board and accessed from a microcontroller using a library [12], was attached to the wheel. With the help of this encoder, the microcontroller monitors the number of turns that the traction wheel makes, becoming an odometer for the robot.
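Turning the encoder's revolution count into a travelled distance is a one-line calculation (distance = turns × wheel circumference). The sketch below is in Python for readability; on the robot this arithmetic runs as Arduino C++ on the ATMEGA328P, and the wheel diameter used in the example is illustrative, not the scooter's actual measurement.

```python
import math

def odometer_distance(turns: float, wheel_diameter_m: float) -> float:
    """Distance travelled (metres) given the number of wheel revolutions
    counted by the encoder and the driven wheel's diameter.

    Assumes no wheel slip; the encoder counts whole and fractional turns.
    """
    return turns * math.pi * wheel_diameter_m
```

For example, 10 revolutions of a hypothetical 0.2 m wheel correspond to roughly 6.28 m of travel.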
The motor is driven by an H-bridge using a PWM signal that is generated by means of a PID controller implemented from a library [13] also running in the microcontroller. Finally, direction control is fulfilled by governing the steering mechanism with a digital servomotor. Figure 3 illustrates this module. The most expensive component in this module is the L298 H-bridge costing $2.45 USD. The price of manufacturing the PCB was $6 USD. The total amount was approximately $10 USD.
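The PID computation that the Arduino library [13] performs on the microcontroller can be sketched as below. This is not the library's actual code, just the standard discrete PID loop; the gains and the 8-bit PWM output range (0-255) are illustrative assumptions.

```python
class PID:
    """Minimal discrete PID controller clamped to a PWM duty range."""

    def __init__(self, kp, ki, kd, out_min=0, out_max=255):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max  # 8-bit PWM range
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """One control step: returns the PWM duty for the H-bridge."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp so the output is always a valid PWM duty cycle
        return max(self.out_min, min(self.out_max, out))
```

With proportional-only gains, an error of 50 and kp = 2.0 yields a duty of 100, while a large error saturates at the 255 ceiling.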
Power.
The main source of energy is a 4 Ah, 12 V lead-acid battery. A microcontroller monitors the currents flowing through the modules and controls the charge rate of the battery with a PWM signal that manages a switching-mode power supply. The module also contains a Step-Down Power Converter that feeds all the electronic circuits in the robot, and an LM7805 circuit that powers the servomotors. Every component is protected by a fuse to avoid overcurrent or short-circuit damage. Figure 4 shows the steps required to regulate and condition the power from the battery so it can be used by the different components of the robot.
The most expensive components in this module were two capacitors and a power transistor ($2.00 USD each, used to build the switching-mode power supply), six fuse mounts ($0.50 USD each) and the Step-Down Power Converter ($1.55 USD). The price of the board was approximately $14 USD for a total cost of about $27 USD.
Sensors.
In order for the robot to navigate and monitor its internal conditions, it needs to obtain physical information about its environment via sensors. Some sensors need to be implemented on a board with a microcontroller so that their output, which can be analog (as with the temperature sensor III) or an I²C bus IV (like the IMU and the Magnetometer), can be read and adapted to a format the control module can understand.
Sensors implemented in a board with an ATMEGA328P microcontroller are: four Ultrasonic Range sensors, a Temperature sensor, an Inertial Measurement Unit V (IMU; combines Accelerometer and Gyroscope), and a Magnetometer. Figure 5 illustrates the sensors module, its components and interfaces. The total cost of the sensors is about $11 USD, plus $1 USD for the microcontroller and $3 USD for the board. Total is about $15 USD.
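As an illustration of adapting an analog output, the 10 mV/K temperature reading (see note III) can be converted from a raw ADC value roughly as follows. The 10-bit resolution and 5 V reference are assumptions matching a typical ATMEGA328P setup, not measured values from this robot.

```python
def adc_to_kelvin(raw, vref_mv=5000.0, steps=1024):
    """Convert a raw 10-bit ADC reading into Kelvin for a 10 mV/K sensor.

    Assumes a 5 V reference and 1024 ADC steps (both illustrative).
    """
    millivolts = raw * vref_mv / steps  # ADC counts -> millivolts
    return millivolts / 10.0            # 10 mV per Kelvin


# Example: full-scale reading corresponds to 5000 mV -> 500 K
temperature_k = adc_to_kelvin(1024)
```

One ADC step corresponds to about 4.9 mV, i.e. roughly 0.49 K, consistent in order of magnitude with the ±0.45 K precision quoted in note III.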
Other sensors are connected directly to the Raspberry Pi via a USB cable. These are: two video cameras ($5 USD each) and a GPS module ($20 USD).
Communication.
A microcontroller is used to interact with the Movement, Power Source and Sensors modules. This microcontroller acts as a "network switch", receiving packages of data and retransmitting them to the appropriate port, from the modules to the computer and vice versa. The price of this module is very low (approx. $3 USD), as only a microcontroller, terminals and a very small PCB are needed.
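The "network switch" behaviour reduces to looking up a packet's destination and forwarding its payload. A minimal Python sketch of that routing logic is shown below; the (destination, payload) packet format and the module names are assumptions for illustration, not the robot's actual wire protocol.

```python
def route_packet(packet, ports):
    """Forward a packet to the queue of its destination module.

    packet: a (destination, payload) tuple.
    ports: dict mapping module names (illustrative) to output queues.
    Returns the destination the payload was delivered to.
    """
    dest, payload = packet
    if dest not in ports:
        raise ValueError(f"unknown destination: {dest}")
    ports[dest].append(payload)  # retransmit to the appropriate port
    return dest


# Example: the computer queries the power module for its current readings
ports = {"movement": [], "power": [], "computer": []}
route_packet(("power", b"I?"), ports)
```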
Control.
The control module is composed of the Raspberry Pi and the devices directly connected to it VI. The computer gathers data from the modules, the GPS and the cameras; processes and stores information; sends orders to the microcontrollers; and receives instructions over the internet through the Wi-Fi module. The serial-port information is analysed and catalogued by Python scripts. Data from the cameras are processed with the OpenCV library, and the GPS data are shown on an interactive map. The robot can stream live video and can be controlled with an X-Box controller connected to a remote computer.
The most expensive components are: the Raspberry Pi ($35 USD) and the Wi-Fi module ($14 USD). Total price is around $50 USD.
Results and future work
The robot has been completely assembled and the different modules were independently tested with satisfactory results. All the sensors and actuators work properly within their intrinsic error, and the computer communicates effectively with the operator. What follows is the theoretical mode of operation for the simple case of sending the robot to a specific point on the planet. An automatic-travel algorithm is yet to be implemented. It would use information obtained from GPS to get a good approximation of the robot's position at a given moment, and estimate subsequent positions from there using the odometer, accelerometer, gyroscope and magnetometer.
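The estimation step between GPS fixes can be sketched as simple dead reckoning: advance the last known position by the distance the odometer reports, along the heading the magnetometer gives. This Python sketch deliberately ignores the accelerometer/gyroscope fusion the full algorithm would use; the flat-plane coordinates and compass convention (0° = north, 90° = east) are assumptions.

```python
import math

def dead_reckon(x, y, heading_deg, wheel_turns, wheel_circumference_m):
    """One dead-reckoning step on a local flat plane.

    heading_deg: compass heading from the magnetometer (0 = north).
    wheel_turns: revolutions reported by the odometer since the last fix.
    Returns the new (east, north) position estimate.
    """
    distance = wheel_turns * wheel_circumference_m
    heading = math.radians(heading_deg)
    east = x + distance * math.sin(heading)
    north = y + distance * math.cos(heading)
    return east, north


# Example: two wheel turns heading due east moves the robot 2 m east
pos = dead_reckon(0.0, 0.0, 90.0, 2.0, 1.0)
```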
Obstacle avoidance and path-finding algorithms will obtain their data from the GPS, the ultrasonic range sensors and stereo reconstruction from the cameras. Obstacle avoidance will have precedence over path finding, and top speed will be limited in unknown environments. Stereo reconstruction will focus on mapping the floor in front of the robot in order to evade holes and other obstacles. Using recycled parts from an electric scooter and a bicycle, microcontrollers and a computer, the full system for a terrestrial exploration robot for educational purposes was constructed. The total cost of the bought components is less than $200 USD; since many parts were already available (scooter, battery, mouse, cameras, Arduino), the total price starting from zero could rise to about $300 USD. Even so, the experience of gathering information from various sources to implement different components and sensors; applying theoretical physics, mathematics and statistics to movement control, sensor-output interpretation and calculations; developing microcontroller and computer software in several programming languages; designing and implementing communication schemes, mechanical parts, electric interfaces, electronic circuits and PCBs; and putting it all together to build a robot, leaves more to the student than just experimenting with a pre-assembled kit.
Notes
I Nevertheless, one can state that Arduino is a single electronic board which provides convenient communication, ways of programming, surrounding electronics and electrical terminals for inputs and outputs, making it ideal as a development platform.
II The ATMEGA328P can work at several frequencies, but 16 MHz is what the timing functions are optimized for in the Arduino Uno. This means that if another frequency is used, functions such as serial communication will not work unless their code is compensated for the frequency change.
III This sensor outputs 10 mV/K and can be read with the ADC in the ATMEGA328P microcontroller with a precision of ±0.45 K.
IV The ATMEGA328P microcontroller can connect to an I²C bus with the use of the Wire library.
V The Digital Motion Processor in the IMU is not very easy to use, and is still not completely open. The maker of the sensor is not clear on how to program it, and its use is almost entirely undocumented. Up to now, the only available source is a compiled demo code that Jeff Rowberg managed to reverse-engineer to create a very extensive library for the use of this sensor [14].
VI The Raspberry Pi computer has only two USB ports. In order to accommodate all these devices, an externally-powered USB hub had to be used. It should be externally powered so that the current of the connected devices does not go through the Raspberry Pi.
Glossina spp. gut bacterial flora and their putative role in fly-hosted trypanosome development
Human African trypanosomiasis (HAT) is caused by trypanosomes transmitted to humans by the tsetse fly, in which they accomplish their development into their infective metacyclic form. The crucial step in parasite survival occurs when it invades the fly midgut. Insect digestive enzymes and immune defenses may be involved in the modulation of the fly's vector competence, together with bacteria that could be present in the fly's midgut. In fact, in addition to the three bacterial symbionts that have previously been characterized, tsetse flies may harbor additional bacterial inhabitants. This review focuses on the diversity of the bacterial flora in Glossina, with regards to the fly species and their geographical distribution. The rationale was (i) that these newly identified bacteria, associated with tsetse flies, may contribute to vector competence as was shown in other insects and (ii) that differences may exist according to fly species and geographic area. A more complete knowledge of the bacterial microbiota of the tsetse fly and the role these bacteria play in tsetse biology may lead to novel ways of investigation in view of developing alternative anti-vector strategies for fighting human—and possibly animal—trypanosomiasis.
INTRODUCTION
A comprehensive understanding of the biology of insects requires investigations on the microbial content of their guts (Steinhaus, 1960). Insects are hosts for a large panel of microorganisms that have developed a variety of interactions ranging from mutualistic to parasitic (Jeyaprakash et al., 2003;Schmitt-Wagner et al., 2003;Campbell et al., 2004;Hongoh et al., 2005). Some of these interactions have been quite well characterized, owing to their ecologic and/or economic importance. However, the exact nature of many of these interactions remains poorly understood and poorly documented.
Human African trypanosomiasis (HAT), or sleeping sickness, caused by trypanosomes transmitted to humans by the tsetse fly (Glossina spp.), belongs to the neglected tropical diseases affecting more than 1 billion people worldwide (Fèvre et al., 2008;Welburn et al., 2009). Regarding sleeping sickness itself, 60 million people are living in HAT-risk areas in the 36 countries that are listed by WHO as being endemic for the disease, among which only 10-15% really undergo epidemiological control (Cattand et al., 2001). This means that the actual number of HAT cases is probably much higher than reported and that HAT remains a serious public health problem even though the prevalence of HAT now seems to be decreasing (Barrett, 2006;WHO, 2010;Simarro et al., 2011). Unless treated the disease is fatal. The drugs currently used to fight the disease are not satisfactory, some are toxic, and all are difficult to administer (Barrett, 2006). Furthermore, trypanosome resistance to some drugs has developed and is increasing (de Koning, 2001). Therefore new strategies to combat the disease need to be developed.
To be transmitted to the mammalian host, trypanosomes must first establish in the insect midgut and, upon their migration to the salivary glands, they have to undergo a maturation process. When the fly feeds on infected mammalian hosts, trypanosomes enter the fly midgut, where they rapidly differentiate into procyclic forms. Then they either die in the midgut of refractory individuals or survive to yield persistent procyclic infections in susceptible insects. Once established, parasites migrate toward the salivary glands where they differentiate into epimastigote forms and, finally, into infectious metacyclic forms (maturation step) that can be transmitted to naïve mammals by the fly when taking another blood meal (Vickerman et al., 1988;Van Den Abbeele et al., 1999). The factors involved in the establishment step are still largely unknown. However, several factors are believed to be involved in this step among which the fly's digestive enzymes and immune defenses and the intestinal microbial flora (Welburn and Maudlin, 1999;MacLeod et al., 2007;Wang et al., 2009Wang et al., , 2012. As reviewed by Dillon and Dillon (2004), insects harbor, mainly in the intestinal organs, diverse communities of microorganisms. The tsetse fly harbors three symbiotic microorganisms (Aksoy, 2000): (i) the obligate primary symbiont, Wigglesworthia glossinidia (Aksoy, 2000), which synthesizes B vitamins (Akman et al., 2002) that the fly is unable to synthesize and which are absent from its blood diet; (ii) Wolbachia (O'Neill et al., 1993), belonging to the Rickettsiaceae family, which infects a broad range of insect species, causing a variety of reproductive abnormalities, and cytoplasmic incompatibility in tsetse flies (Alam et al., 2011); and (iii) Sodalis glossinidius, belonging to the Enterobacteriaceae family, which has been shown to be involved in the fly's vector competence (Dale and Maudlin, 1999). 
Although most of the studies dedicated to insect gut microbiota focused on the contribution of microbial endosymbionts to the host's nutritional homeostasis (Dillon and Dillon, 2004), others examined the role of gut bacteria in preventing pathogen development (Pumpuni et al., 1993, 1996; Welburn and Maudlin, 1999; Gonzalez-Ceron et al., 2003; Azambuja et al., 2004). Since the trypanosomes have to complete part of their lifecycle within their vector, particularly in its gut, the concomitant presence of diverse bacteria, if any, could affect the parasite's lifecycle and finally the fly's vector competence. Therefore, our knowledge on the composition of the tsetse fly midgut bacterial flora must be improved to gain more detailed insight into the potential interactions between these bacteria and the insect harboring Trypanosoma, and/or even with the parasite itself.
This article reviews the present knowledge on the fly's gutassociated bacteria, other than symbionts, and suggests novel ways of investigation.
DIVERSITY OF MICROBIOTA IN TSETSE FLIES
While the bacterial flora composition of a few insects [Drosophila and several mosquitoes (Pumpuni et al., 1993, 1996; Broderick and Lemaitre, 2012)] has been investigated for years and is fairly well documented, that of the tsetse fly has only recently gained attention. Studies on tsetse flies have been conducted on insectary-reared Glossina palpalis gambiensis flies, on flies belonging to several Glossina species collected in HAT foci in two African countries, Angola and Cameroon (Geiger et al., 2009, 2011), and on G. fuscipes fuscipes flies from Kenya (Lindh and Lehane, 2011) (Figure 1). It is noteworthy that, using a culture-dependent isolation method and a similar enrichment procedure throughout the studies, the former group evidenced differences in the bacterial flora composition not only with respect to the fly species, but also to their geographical origin. The approach used included dilution series (which ranged from 10⁻⁶ to 10⁻¹⁰, depending on the study) of the midgut before bacterial enrichment, in order to ensure the isolation of microorganisms that have actively multiplied in the gut and that can therefore be considered true gut inhabitants; this process rules out bacteria that are merely transient residents. The isolated bacteria were then identified using molecular phylogeny. However, this culture-dependent method does not allow the identification of non-cultivable bacteria. In contrast, the group that studied G. f. fuscipes collected in East Africa (Lindh and Lehane, 2011) used both culture-dependent and culture-independent approaches, which are expected to allow the characterization of not easily cultivated, or even non-cultivable, bacteria, but possibly also of bacteria that are simply in transit in the flies' gut.
[Figure 1 notes: species in brackets are the closest relatives according to RDPII (Maidak et al., 2001); underlined species were identified with culture-independent methods. No bacteria were identified in G. caliginea.]
THE BACTERIAL FLORA OF TSETSE FLIES FROM ANGOLA, CAMEROON, AND KENYA
The fly species collected and studied differed from one country to another: Glossina palpalis palpalis in Angola; G. p. palpalis, G. pallicera, G. nigrofusca, and G. caliginea in Cameroon; and G. fuscipes fuscipes in Kenya, which allows only limited comparisons between fly species from different countries (Figure 1). However, one may note the overall relatively high fly infection rates by bacteria for all three countries: 54% in Angola, 53% in Cameroon (Figures 2A and 3A), and 72% in Kenya (42% when discarding the bacteria isolated from the outer cuticle of the flies), despite the differences observed in the fly species. Similarly, the prevalence of Gram-negative bacteria was much higher than that of Gram-positive bacteria. Finally, an individual fly most often harbored only one bacterial species; mixed infections were sometimes observed whatever the fly species studied (Figures 2B and 3B). However, the number of bacterial isolates characterized, three per fly, was low and therefore the prevalence of mixed infection could be underestimated.
The overall high diversity of bacterial species was also unexpected (Figure 1) with respect to (i) the geographic origin of the flies: 3 bacterial species in flies from Angola, 9 in Cameroon (Figures 2, 3), 22 in Kenya (+2 identified by the culture-independent method), and/or (ii) the fly species: 22 (+2 by molecular approaches) in G. fuscipes fuscipes, 8 in G. p. palpalis, 3 in G. pallicera, 1 in G. nigrofusca, none in G. caliginea. The number of G. pallicera, G. nigrofusca, and G. caliginea flies collected and analyzed was very low, limiting conclusions about the number and types of bacterial species in these flies at this time.
Besides these similarities, substantial differences are noted when comparing the results recorded for different countries; in fact, the overall bacterial species are assigned to four different phyla in which they are nevertheless unevenly distributed with reference to the geographic origin of the flies: Actinobacteria: 4% in Kenya, 0% in Angola and Cameroon; Proteobacteria: 36% in Kenya, 66% in Angola, and 44% in Cameroon; Firmicutes: 60% in Kenya, 33% in Angola and Cameroon; and Bacteroidetes: 0% in Kenya and Angola, 22% in Cameroon. In addition, when comparing the overall bacterial species identified in the two West African countries (in G. p. palpalis, G. pallicera, and G. nigrofusca) with those characterized in Kenya (in G. f. fuscipes), only four species were found to be common: Enterobacter spp., Providencia spp., Pseudomonas spp., and Staphylococcus spp. (Figure 1). However, differences in bacterial culture conditions (as opposed to differences in geographic origin) may account for differences in bacterial species. Finally, while a large diversity of bacteria was found in field-collected tsetse flies, only one bacterial species, a novel one pertaining to the Serratia genus, S. glossinae, was isolated from insectary-reared fly midguts of G. p. gambiensis, trapped several years before in Burkina Faso (Figure 1).
DIFFERENCES IN THE BACTERIAL DIVERSITY IN TSETSE FLIES COLLECTED IN THREE AREAS BELONGING TO THE SAME HAT FOCUS
In contrast to the substantial differences in the diversity of bacterial gut inhabitants recorded according to the geographic origin of the flies, it could be expected that such differences would be much more limited in flies collected in a restricted area. This was not the case, as shown by the results of an investigation carried out in three villages (Akak, Campo Beach/Ipono and Mabiogo) located into the same HAT focus, Campo, in southern Cameroon.
The large differences in fly infection rates recorded with reference to the collecting sites were surprising. In the most representative species, G. p. palpalis, 87.5% of the flies collected in Akak were infected, in contrast to 55.5% of the flies from Campo Beach/Ipono, and only 20% of those from Mabiogo ( Figure 3A). Furthermore, considering G. p. palpalis, the distribution of the different bacteria identified was also very uneven with respect to the origin of the flies. In Mabiogo, the infection rate was the lowest. Two bacterial species were identified: Chryseobacterium spp. and Sphingobacterium spp. These bacteria were not identified in the flies sampled in the two other villages in the performed surveys. Similarly, Enterobacter and Lactococcus spp. infections were restricted to flies collected in Akak (Geiger et al., 2011), and finally, four bacteria species were isolated from flies from Campo Beach/Ipono (Acinetobacter spp., Providencia spp., Enterococcus spp., and Staphylococcus spp.) ( Figure 3B). However, since these surveys looked at three bacterial isolates per fly, it is possible that the prevalence of each bacterial species could be underestimated in the different villages tested.
ORIGIN OF THE GUT BACTERIA AND THEIR DIVERSITY ACCORDING TO THE FLY SPECIES AND THEIR GEOGRAPHIC LOCATION
The high prevalence and diversity of bacteria in tsetse flies is unexpected given that these flies are monophagous as they only feed on vertebrate blood throughout their life span. In wild populations of mosquitoes, the origin of the midgut bacteria is unknown (Pumpuni et al., 1996;Straif et al., 1998), as in tsetse flies. However, differences in the environmental conditions and in the food supply may influence the diversity of the bacterial communities harbored. This hypothesis could be acceptable if one considers that the fly may swallow bacteria present in the environment, particularly on the skin of the animals on which it feeds. This possibility cannot be excluded since Poinar et al. (1979) demonstrated that, when applied to the ears of rabbits used as tsetse fly-feeding hosts, the bacterium S. marcescens was ingested during the blood meal and multiplied in the fly's gut. Tsetse flies were shown to feed on a variety of hosts (Simo et al., 2008;Farikou et al., 2010), which probably carry diverse bacteria on their hair and skin, thus implying the possibility of the flies being infected by these bacteria. Nevertheless, the mechanism may be more complex since the G. p. palpalis flies collected in the three villages of the Campo HAT focus differed in their bacterial inhabitants, even though they developed in similar environmental conditions.
INVOLVEMENT OF MIDGUT BACTERIA IN THE INSECT VECTOR COMPETENCE AND ITS SURVIVAL
While investigations on the potential effect of gut microbiota on tsetse fly vector competence are nearly non-existent, such studies have been successfully conducted on other insects. Gonzalez-Ceron et al. (2003) reported that Plasmodium vivax sporogonic development in field-collected Anopheles albimanus was blocked by bacteria inhabiting the mosquitoes' midgut. When laboratory-reared adult anopheline species were fed either Gram-negative or Gram-positive bacteria together with Plasmodium falciparum gametocytes, Gram-negative, but not Gram-positive, bacteria partially or totally inhibited the formation of oocysts (Pumpuni et al., 1993, 1996). In contrast, working on field-collected mosquitoes, Straif et al. (1998) showed that the presence of Gram-negative bacteria in the midgut did not influence the number of Anopheles funestus infected with P. falciparum sporozoites, while Gram-positive bacteria significantly enhanced the incidence of mosquitoes that contained sporozoites. Furthermore, feeding mosquitoes with gentamicin significantly increased the number of Plasmodium-infected mosquitoes (Beier et al., 1994). In Anopheles albimanus, coinfection with S. marcescens and Plasmodium vivax resulted in only 1% of mosquitoes being infected with parasites, compared to a 71% infection rate in control mosquitoes (Gonzalez-Ceron et al., 2003). Recently, a significant positive correlation was observed between P. falciparum infection and the presence of Enterobacteriaceae in the mosquitoes' midgut (Boissière et al., 2012). In sandflies (Phlebotomus papatasi), microbial infections significantly reduced the rates of infection with Leishmania major (Schlein et al., 1985). In addition, strains of Pseudomonas fluorescens (Mercado and Colon-Whitt, 1982), as well as of S. marcescens isolated from Rhodnius prolixus (Azambuja et al., 2004), have been reported to be able to lyse Trypanosoma cruzi in vitro.
All these examples show potential implication of bacteria isolated from insects in their vector competence.
Some of the bacterial genera/species that were found in at least one species of tsetse fly (Geiger et al., 2009, 2011) have been shown to affect other insects. Stomoxys calcitrans fly larvae require the presence of Acinetobacter spp. for complete development (Lysyk et al., 1999). Conversely, several other bacterial species including Providencia spp. and Pseudomonas spp. are close relatives of known insect bacteria (Jackson et al., 1995; Lacey, 1997). In addition, a number of Gram-negative and Gram-positive bacteria such as S. marcescens, Providencia rettgeri, and several Bacillus spp. induce mortality in G. m. morsitans (Kaaya and Darji, 1989). Furthermore, S. marcescens has been shown to cause increased mortality in Anopheles albimanus mosquitoes and in G. pallidipes flies (Poinar et al., 1979; Gonzalez-Ceron et al., 2003). Other bacteria isolated from field tsetse flies (Geiger et al., 2009, 2011; Lindh and Lehane, 2011) were assigned to the genus Lactobacillus, some members of which are reported to be pathogenic to plants and animals whereas other lactobacilli are commonly found as members of the human microbiota (Hammes and Hertel, 2006).
Symbionts have also been implicated in vector competence and/or tsetse fly survival. Studies from Weiss et al. (2011, 2012) have shown that Wigglesworthia protects against Escherichia coli infection and promotes tsetse immune system development. Moreover, interactions between Wigglesworthia and the tsetse peptidoglycan recognition protein (PGRP-LB) may be involved in trypanosome transmission (Wang et al., 2009). Weiss et al. (2013) showed that trypanosome infection in the tsetse fly gut was influenced by microbiota-regulated host immune barriers. Geiger et al. (2007) showed an association between the presence of specific genotypes of Sodalis and G. p. gambiensis midgut infection by Trypanosoma brucei gambiense or Trypanosoma brucei brucei.
MECHANISMS POTENTIALLY INVOLVED IN THE MODULATION OF PARASITE INFECTION BY MIDGUT MICROBIOTA
Several mechanisms may be involved in the modulation of parasite infection by midgut microbiota. One could be competition for limited resources or the production of antiparasitic molecules by the bacteria inhabiting the vectors' gut. Toxic molecules (Figure 4) with potential antiparasitic activity have been identified. Among them are cytotoxic metalloproteases produced, for example, by S. marcescens and Pseudomonas aeruginosa (Maeda and Morihara, 1995), and hemolysins secreted by Enterobacter spp., E. coli, S. marcescens, and Enterococcus spp. (Hertle et al., 1999; Coburn and Gilmore, 2003). Antibiotics can be produced by Serratia spp. (Thomson et al., 2000), and hemagglutinins (Gilboa-Garber, 1972) and siderophores by P. aeruginosa (Schalk et al., 2002). An antitrypanosomal factor has been shown to be produced by P. fluorescens (Mercado and Colon-Whitt, 1982). Pigments such as prodigiosin are produced by Gram-negative bacteria such as Serratia spp. and Enterobacter spp. (Moss, 2002). They induce the fragmentation of DNA, characterizing an apoptotic action of the toxin (Díaz-Ruiz et al., 2001; Montaner and Perez-Tomas, 2003). Prodigiosin was shown to be toxic for P. falciparum (Lazaro et al., 2002) and T. cruzi (Azambuja et al., 2004). Free hemoglobin, resulting from hemolysis of the blood meal in the digestive tract of vector insects (Azambuja et al., 2004), has been suggested to be a ready source of iron for bacteria and would contribute to the massive increase in the gut bacteria population following feeding. However, toxic molecules have not been shown to be constitutively expressed, and their production may even be indirectly correlated with bacterial density. Dong et al. (2009) suggested the bacteria-mediated anti-Plasmodium effect was due to the mosquito's antimicrobial immune responses, possibly through the activation of basal immunity. Recently, in Zambia, Enterobacter spp. were isolated from wild mosquitoes resistant to infection with P. falciparum. It was suggested this anti-Plasmodium effect was caused by bacterial generation of reactive oxygen species (Cirimotich et al., 2011).
PERSPECTIVES
It is crucial to investigate whether any of the recently identified bacteria in tsetse could modulate the fly's vector competence, as do the flies' endosymbionts (Welburn and Maudlin, 1999), and as has already been reported in other insect parasite vectors (Pumpuni et al., 1993, 1996; Straif et al., 1998; Gonzalez-Ceron et al., 2003). Such modulation may occur through direct inhibitory bioactivity, via secreted enzymes or toxins targeting the parasitic trypanosomes. Alternatively, microbiota may constrain pathogen development indirectly by activating or enhancing the host immune system, which in turn could clear the parasite; this effect was previously reported for Wigglesworthia affecting PGRP-LB (Wang et al., 2009; Weiss et al., 2013). Investigations on several insect systems indicate that both direct and indirect microbiota-induced phenotypes occur (Dong et al., 2009; Cirimotich et al., 2011). Finally, understanding the mechanisms governing the association between tsetse flies and the hosted bacteria, and determining how the association is controlled, are important issues. These issues could be addressed by monitoring the diversity and density of bacteria in flies throughout their life cycle and by investigating the possible transmission of these bacterial species by the female fly to its progeny, as occurs for the maternal transmission of the three Glossina endosymbionts.
In wild populations, differences in environmental conditions and in food supply may influence the diversity of the bacterial communities harbored by the flies. This could explain the diversity in the flies' gut bacterial inhabitants and in the fly infection rates reported in tsetse fly communities from Angola, Cameroon and Kenya, and therefore points out the need to multiply and diversify the fly collecting areas. Moreover, a greater number of samples has to be collected in order to better assess the occurrence of co-infections and to demonstrate the possible involvement of the gut-hosted bacteria in tsetse fly vector competence.
All these investigations deserve to be undertaken as they may open novel avenues for tsetse vector competence control through manipulation of gut microbial communities, which in turn may result in novel HAT control strategies.
Circadian regulators of intestinal lipid absorption.
Among all the metabolites present in the plasma, lipids, mainly triacylglycerol and diacylglycerol, show extensive circadian rhythms. These lipids are transported in the plasma as part of lipoproteins. Lipoproteins are synthesized primarily in the liver and intestine and their production exhibits circadian rhythmicity. Studies have shown that various proteins involved in lipid absorption and lipoprotein biosynthesis show circadian expression. Further, intestinal epithelial cells express circadian clock genes and these genes might control circadian expression of different proteins involved in intestinal lipid absorption. Intestinal circadian clock genes are synchronized by signals emanating from the suprachiasmatic nuclei that constitute a master clock and from signals coming from other environmental factors, such as food availability. Disruptions in central clock, as happens due to disruptions in the sleep/wake cycle, affect intestinal function. Similarly, irregularities in temporal food intake affect intestinal function. These changes predispose individuals to various metabolic disorders, such as metabolic syndrome, obesity, diabetes, and atherosclerosis. Here, we summarize how circadian rhythms regulate microsomal triglyceride transfer protein, apoAIV, and nocturnin to affect diurnal regulation of lipid absorption.
Several behavioral and physiologic activities show circadian rhythms that are attuned to changes in light within a 24 h day. Plasma triacylglycerols exhibit diurnal variations in humans and rodents (1)(2)(3)(4)(5)(6). Due to their hydrophobic nature, these lipids are transported in the plasma as major core constituents of apoB-containing lipoproteins. These lipoproteins are assembled in the endoplasmic reticulum and Golgi of enterocytes and hepatocytes with the assistance of a dedicated chaperone, microsomal triglyceride transfer protein (MTP). Therefore, diurnal regulation of peripheral tissues, such as the intestine, is affected by several central, hormonal, and environmental stimuli (14)(15)(16)(17)(18)(19)(20).
Food is a potent synchronizer of peripheral clocks and entrains various behavioral and physiologic activities (14)(15)(16)(20)(21)(22)(23)(24)(25). This can be easily demonstrated by providing food for a few days in the daytime to rodents that usually consume their meal in the nighttime, with no disruptions in the light on/off schedule. Changes after food entrainment have the characteristic features of circadian rhythms. These rhythms are sustained for some time after the food entrainment is discontinued. The identity of a center, if it exists, that controls and elicits food-entrained oscillations is unknown. There is significant evidence to suggest that the food-entrained oscillator might be independent of the SCN. However, the clock genes involved in the light-induced oscillators may participate in the food-entrainment response. It is likely that the food-entrained oscillator is a network of several neural sites in the brain that cooperate to elicit a behavioral and physiologic response (20, 26).
EXPRESSION OF CLOCK GENES IN THE INTESTINE
Various functions of the intestine, such as motility, gastric emptying, DNA synthesis, epithelial cell renewal, food anticipatory activity, and nutrient absorption, exhibit circadian rhythms ( 14,20,(27)(28)(29). It is well-known that the major complaints in shift workers and transcontinental travelers are related to gastrointestinal disturbances ( 30 ). Hence, it is possible that these activities are regulated by clock genes and disruptions in the diurnal expression of clock genes might be a cause for gastrointestinal discomforts. Indeed, the expression of various clock genes has been documented in various parts of the intestine. The colon exhibits the highest expression of clock genes ( 17,31 ). We showed that the expression of these proteins increases from the duodenum to the colon ( 32 ). We also measured the expression of clock genes in intestinal mucosal and epithelial cells. Clock proteins were more abundant in the epithelial cells compared with the mucosal cells. Thus, expression of clock genes increases from duodenum to colon and from mucosal cells to epithelial cells.
Clock genes in the jejunum and colon show diurnal variations (17, 31, 32). The peaks and nadirs in the expression of these clock genes are in phase with those in the liver. When its expression levels increase, Bmal1 forms heterodimers with another clock gene, circadian locomotor output cycles kaput (Clock). The Bmal1:Clock heterodimers interact with the E-boxes present in the promoter regions of period (Per) and cryptochrome (Cry) genes and increase their transcription (Fig. 2A). Per and Cry proteins also form heterodimers and act as repressors of the Bmal1:Clock heterodimers to reduce their own expression. This transcriptional auto-regulatory loop repeats with an approximate interval of 24 h and is further modulated by several posttranslational modifications, such as phosphorylation and acetylation (10)(11)(12)(13). In addition to this major regulatory loop, expression of Bmal1 is regulated by other transcription factors that sense cellular energy levels and other environmental stimuli. These include retinoic acid receptor-related orphan receptor α (Rorα), PPARγ coactivator 1-α (PGC1α), and reverse erythroblastosis virus α (Rev-erbα). Rorα and PGC1α increase, while Rev-erbα suppresses, Bmal1 expression, constituting a secondary regulatory loop (10)(11)(12). More molecular details about the regulation of these transcription factors and the regulation of circadian clock genes can be found in several excellent reviews (10)(11)(12)(13). An important feature of these clock genes is that they need to be regularly entrained by light to maintain periodic rhythmicity. In the absence of regular exposure, the intensity of the circadian response shortens with time and eventually disappears.
In addition to the clock genes described above, Bmal1:Clock heterodimers also interact with E-boxes present in the promoters of several "clock controlled genes" ( Fig. 2B ) and increase their expression. These transcription factors then modulate the expression of key proteins involved in different metabolic pathways. Thus, Bmal1:Clock heterodimers increase the expression of repressors to downregulate their own expression and also increase the expression of other transcription factors to modulate different metabolic pathways.
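The negative-feedback loop described above (Bmal1:Clock drives Per/Cry transcription, and the accumulated Per:Cry repressor shuts transcription back down) can be illustrated with a toy Goodwin-type oscillator. This is a generic textbook model, not a model from this review: the three variables loosely stand for clock-gene mRNA, cytosolic protein, and nuclear repressor, and all rate constants are illustrative assumptions in arbitrary time units.

```python
# Toy Goodwin-type oscillator: delayed negative feedback of the kind
# described for Bmal1:Clock -> Per/Cry -> repression of Bmal1:Clock.
# Variables and rate constants are illustrative, not measured values.

def simulate(t_end=200.0, dt=0.01, n=12, d=0.2):
    """Euler-integrate a three-stage repressor loop.

    x: clock-gene mRNA, y: cytosolic protein, z: nuclear repressor.
    Transcription of x is repressed by z with a steep Hill function
    (n > 8 is required for sustained oscillations in this model).
    """
    x, y, z = 0.0, 0.0, 0.0
    traj, t = [], 0.0
    while t < t_end:
        dx = 1.0 / (1.0 + z**n) - d * x   # repressible transcription
        dy = x - d * y                    # translation
        dz = y - d * z                    # nuclear accumulation of repressor
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        traj.append((t, z))
        t += dt
    return traj

def count_peaks(traj, t_min=50.0):
    """Count local maxima of the repressor after the initial transient."""
    vals = [v for t, v in traj if t >= t_min]
    return sum(1 for i in range(1, len(vals) - 1)
               if vals[i - 1] < vals[i] > vals[i + 1])
```

With a steep enough Hill coefficient the repressor level settles into sustained oscillations rather than a steady state, which is the qualitative point of the auto-regulatory loop; the real clock's ~24 h period emerges from the actual synthesis, degradation, and nuclear-transport rates, which this sketch does not attempt to match.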
Besides the SCN, clock genes are also expressed in almost all cells. These cells are not synchronized by light; instead, they are synchronized by hormonal and neuronal signals emanating from the SCN. These peripheral clock genes are susceptible to changes in the environment, such as heat and food. Thus, while the regulation of clock genes in the SCN is mainly entrained by light, the peripheral clock genes are regulated by various stimuli originating from the SCN, as well as other environmental (food, temperature) and physiologic (NAD+, hormones, nutrients, etc.) cues. Circadian expression of clock genes in the intestine is altered with the feeding schedule (15, 32).
Fig. 1. Diurnal regulators of intestinal lipid absorption. Both food- and light-entrained oscillators appear to affect the expression of clock genes in the intestine. Intestinal clock genes affect other genes to modulate intestinal lipid absorption. So far, at least three proteins, MTP, apoAIV, and nocturnin, that affect diurnal variations in intestinal lipid absorption have been identified. Changes in the diurnal expression of these genes appear to affect diurnal variations in plasma lipids.
Fig. 2. Regulation of clock and clock-controlled genes by Bmal1:Clock heterodimers. A: Regulation of clock genes: Bmal1:Clock heterodimers interact with E-box elements present in the promoters to increase the expression of different clock genes. When the levels of Per and Cry proteins increase, they form heterodimers and act as repressors, reducing the expression of Bmal1 and constituting the primary auto-regulatory transcriptional loop.
Fig. 2 (continued). Bmal1:Clock heterodimers also increase temporal expression of Rorα and Rev-erbα, which act as activator and repressor, respectively, of Bmal1 expression by binding to a retinoic acid-related orphan receptor response element (RORE), and constitute a secondary regulatory loop that affects diurnal regulation. PGC1α can interact with Rorα to increase Bmal1 expression. These two complementary autoregulatory loops are involved in the control of circadian rhythms. B: Regulation of clock-controlled genes: Bmal1:Clock heterodimers also interact with E-boxes present in the promoter regions of several other transcription factor genes to augment their expression. These transcription factors then regulate the expression of several genes involved in different pathways to affect metabolism.
The peak expressions of intestinal clock genes are phase delayed compared with their temporal expression in the SCN (15). Thus, it is possible that a temporal delay in the rhythmicity of the expression of intestinal clock genes is secondary to the neuronal and hormonal signals emanating from the SCN.
In the HDL pathway, ABCA1 and apoAI play a role (46)(47)(48). ABCA1 and apoAI deficiencies reduce cholesterol transport via the HDL pathway with no effect on cholesterol secretion via the chylomicron pathway. In short, fatty acids are used for the synthesis of lipids for packaging with lipoproteins and secretion. All triglycerides are transported via chylomicrons, while cholesterol and phospholipids are transported with both chylomicrons and HDLs.
Lipid absorption studies have been performed in animals, jejunal loops, and isolated primary enterocytes. These studies have revealed that lipid absorption is maximal at night and lowest in the day ( 5 ). Because lipid absorption involves uptake of hydrolyzed lipid products followed by their secretion with lipoproteins ( 7,14,(33)(34)(35)(36)(37)(38)(39)(40)(41)(42), we studied both the uptake and secretion of fatty acids and cholesterol by primary enterocytes ( 5,32,49 ). These studies showed that enterocytes exhibit diurnal variations with respect to their capacity to take up and secrete fat. Thus, various steps in the uptake, packaging, and secretion are probably controlled by clock genes.
To understand the molecular basis for diurnal lipid absorption, we looked for changes in different proteins involved in the uptake and packaging of fatty acids and cholesterol. Most of the genes examined showed diurnal changes in their expression (32). These include apoB, MTP, apoAIV, diacylglycerol O-acyltransferase 2 (DGAT2), fatty acid synthase, and stearoyl-CoA desaturase-1 (SCD-1). Thus, at least some of the genes involved in the synthesis and secretion of triglycerides are regulated by circadian rhythms. More studies are needed to establish that the individual steps and proteins involved in lipid uptake and secretion are indeed regulated by circadian clock genes. This can be achieved by measuring temporal changes in different processes and proteins involved in lipid absorption in mice deficient in specific clock genes.
Clock
The role of Clock in circadian control of lipid absorption has largely been derived from studies in mice that express a dominant negative Clock protein (ClockΔ19) in the C57Bl/6J background. These mice are arrhythmic and show longer periodicity (26-29 h instead of 23-24 h) in their locomotor activity (50). The Clock mutant allele encodes a protein with a 51 amino acid deletion in its putative transcriptional regulatory domain. It interacts with Bmal1, binds to E-box enhancer sequences, acts in a dominant negative fashion (51), decreases the transcription of Per and other circadian clock genes, and disables the negative feedback loop of circadian rhythm. These Clock mutant (ClkΔ19/Δ19) mice are entrainable during a normal light/dark cycle, but lose this ability when placed in the dark (50). In addition, they show physiologic abnormalities such as reduced fertility, obesity, hyperleptinemia, hyperlipidemia, hepatic steatosis, hyperglycemia, and metabolic syndrome (52, 53). They have been extensively used to study the role of circadian rhythms in metabolism.
After food entrainment, most of the clock genes are expressed at the time of food availability, instead of their normal peak expression at night. Thus, food availability can alter the expression pattern of clock genes in the intestine. However, the proper food-entrainment response is not seen in mice kept in constant dark or constant light. Thus, normal functioning of the light-entrained response is necessary for the adaptation to food entrainment.
In short, intestinal cells express different clock genes and their function is likely modulated by these genes. Disruptions in circadian expression of clock genes in the intestine might contribute to gastrointestinal discomforts during transcontinental flights and while working at odd hours.
CIRCADIAN REGULATION OF INTESTINAL LIPID ABSORPTION
The major function of the small intestine is to digest and absorb food. Macronutrients, carbohydrates, lipids, and proteins are hydrolyzed in the lumen of the intestine and the products are retrieved by enterocytes involving various transporters. Intestinal lipid absorption involves hydrolysis of dietary fat in the intestinal lumen, uptake of the hydrolyzed products by enterocytes, resynthesis of lipids, and assembly and secretion of chylomicrons. Several reviews have extensively discussed these steps in lipid absorption (7, 8, 14, 20, 33-42). Because lipids are water-insoluble, an important step in their digestion is emulsification with bile salts (39, 43). In this process, hydrophobic lipids are incorporated into bile salt micelles, rendering them water-miscible. The digestion of triacylglycerols in the intestinal lumen produces unesterified fatty acids and monoacylglycerols (36). The digestion of phospholipids is carried out mainly by pancreatic phospholipase A2, yielding free fatty acids and lysophospholipids. Cholesterol esterase hydrolyzes cholesterol esters into free cholesterol and unesterified fatty acids. These hydrolyzed products are taken up by enterocytes (36, 38, 41, 44). Thus, the major steps involved in the transport of dietary lipids from the intestinal lumen to enterocytes are emulsification with bile, hydrolysis by esterases, and uptake by transporters.
After uptake, fatty acids are transported in the cells by fatty acid binding proteins. These proteins deliver fatty acids to various organelles. In the endoplasmic reticulum (ER), fatty acids are used for the synthesis of triacylglycerols, phospholipids, and cholesterol esters. These lipids are then packaged into lipoproteins called chylomicrons (8, 45). Chylomicrons are very large spherical triacylglycerol-rich particles that also contain phospholipids and cholesterol. The surface of these particles is covered with a phospholipid monolayer and the core is enriched in triacylglycerols and cholesteryl esters. The surface of these particles also contains apoB48, which acts as a scaffolding protein. Triacylglycerol absorption is dependent on the assembly and secretion of these particles. In contrast, phospholipids and cholesterol are absorbed via chylomicrons and the HDL pathway (8, 46). During translation, MTP assists in the formation of primordial apoB-containing lipoproteins; formation of these particles prevents proteasomal degradation of apoB, which occurs in the absence of lipid supply or MTP deficiency.
The major organs expressing MTP are the intestine and the liver (9, 55-58). Mechanisms controlling the different levels of MTP in these tissues have not been explained. MTP expression is modulated by macronutrients (59). Recently, microRNA-30c has been shown to regulate MTP expression and to reduce hyperlipidemia and atherosclerosis (60). In general, there is good agreement between cellular levels of MTP mRNA, protein, and activity, suggesting that MTP is mainly regulated at the transcriptional level. Various transcription factors and cis-elements involved in the regulation of MTP have been reviewed (59). A short promoter with a few cis-elements appears sufficient for its expression in cells. The promoter contains positive regulatory hepatic nuclear factor (HNF)-1, HNF-4, and Fox elements and negative sterol/insulin regulatory elements that bind HNF-1α, HNF-4α, FoxO1/A2, and SREBP transcription factors, respectively. Thus, MTP expression is controlled by various transcription factors that interact with specific elements present in the promoter.
Intestinal and hepatic MTP expression shows modest diurnal variations with highest levels found at midnight (5, 32, 49). These changes are not seen when mice are placed in constant dark for 5 days, indicating that light entrainment might be needed for proper diurnal expression of the MTP gene. Further, expression of intestinal and hepatic MTP was altered when mice were subjected to food entrainment, indicating that MTP responds to the food-entrained oscillator. Thus, diurnal variations in the expression of the MTP gene are under the control of both light- and food-entrained oscillators. The regulation of MTP by light-entrained oscillators was further supported by the observation that diurnal variations in MTP expression were absent in ClkΔ19/Δ19 mice (5). Normal Clock expression was also needed for food-entrained changes in MTP expression, as ClkΔ19/Δ19 mice were unable to increase MTP expression at the time of food availability after food entrainment compared with wild-type mice. Thus, Clock appears to be necessary for both light- and food-entrained oscillatory changes in MTP expression.
Evidence that Clock regulates MTP comes from studies in ClkΔ19/Δ19 mice (49). MTP expression did not show diurnal variations in the intestine and livers of ClkΔ19/Δ19 mice. The role of Clock in MTP regulation was further demonstrated by reducing its expression using siRNA for Clock. Knockdown of Clock increased MTP expression, indicating a reciprocal relationship. We observed that MTP lacks an E-box in its promoter that is recognized by Bmal1:Clock heterodimers. Hence, we reasoned that increases in activators or reductions in repressors might explain increases in MTP expression in ClkΔ19/Δ19 mice. siRNA-mediated knockdown of Clock in cells reduced or had no effect on activators; however, one of the repressors, small heterodimer partner (Shp), was reduced (52).
We studied the expression of clock genes and different nutrient transporters in mice expressing normal and dominant negative Clock protein (32, 49). Our data show that normal intestinal cells express canonical clock genes in a circadian manner and are susceptible to attunement by food. Normal Clock expression is important for the circadian and food-entrained expression of nutrient transport proteins, as well as in the absorption of macronutrients, as Clock mutant mice do not show circadian expression of genes involved in lipid absorption and do not respond to food entrainment (32). Thus, circadian rhythms and the Clock protein are important in macronutrient absorption by the intestine.
Lipid absorption studies in Clock mutant mice involving in situ intestinal loops and isolated enterocytes showed that uptake of fatty acids and triglyceride secretion are significantly altered in these mice. As opposed to wild-type mice, ClkΔ19/Δ19 mice did not show significant differences in the uptake of fatty acids or triglyceride secretion by intestinal loops or enterocytes at midnight versus midday, suggesting that diurnal variations in lipid uptake were lost in the mutant mice. Thus, these mutant mice were absorbing lipids throughout the day, and this sustained high lipid absorption might be a reason for the hypertriglyceridemia observed in these mice.
To understand the molecular basis for defects in circadian regulation of lipid absorption in Clock mutant mice, we measured expression levels of different genes involved in lipid absorption in wild-type and ClkΔ19/Δ19 mice (49). The expression of several of the studied genes in the ClkΔ19/Δ19 mice was altered compared with their wild-type siblings. In wild-type mice, most of the genes exhibited diurnal expression; however, they did not show diurnal variations in ClkΔ19/Δ19 mice, suggesting that the absence of circadian variations in lipid absorption might be due to alterations in the molecular events that control the expression of genes involved in lipid absorption.
Because Clock mutant mice develop hyperlipidemia, we reasoned that they may be more susceptible to atherosclerosis.
MTP
MTP is a chaperone that is crucial for the biosynthesis of apoB-containing lipoproteins (9, 55-57). It transfers lipids, mainly neutral lipids, in vitro between membrane vesicles, and it also physically interacts with apoB. Thus, by interacting with apoB and lipidating this peptide, MTP initiates the assembly of apoB-containing lipoproteins.
This is a simplistic picture of the regulation of MTP by Clock. Other clock genes might also be involved in the regulation of MTP. Hence, more studies are needed to understand how other clock genes regulate MTP to modulate intestinal lipid absorption and plasma lipid levels.
In ClkΔ19/Δ19 mice, reduced Shp increased MTP expression (Fig. 3B). These studies suggested that Clock might regulate MTP expression by modulating Shp expression. The Shp gene contains an E-box. In wild-type mice, Bmal1:Clock heterodimers interact with the E-box enhancer elements in the promoter of the Shp gene, increasing its expression at dawn (Fig. 3A) (61). When levels increase, Shp interacts with several transcription factors that activate MTP gene expression (49). These include HNF-4α, HNF-1α, and liver receptor homolog-1 (LRH-1). By binding to these transcription factors, Shp represses the expression of MTP. The maximum association of Shp with the MTP promoter occurs at midday and is correlated with low levels of MTP in the daytime. Shp levels decline at the end of the day, and this derepresses MTP expression in the early hours of the night to maximize lipid absorption and transport.
Apart from these changes at the intestinal level, apoAIV might regulate lipid absorption involving central neuronal controls. Liu et al. (80) showed that apoAIV is expressed in the hypothalamus, and hypothalamic apoAIV gene expression is reduced by food deprivation and restored by lipid refeeding. Blocking the action of endogenous apoAIV with its antibody increases meal size, implying that endogenous apoAIV exerts an inhibitory tone on feeding (81). Therefore, it is possible that hypothalamic apoAIV might control lipid absorption. In fact, it has been suggested that apoAIV affects food intake and acts as a satiety factor in rats (82). However, genetic ablation and overexpression studies in mice showed no influence on dietary lipid absorption and feeding behavior (68, 83). Nevertheless, Weinstock et al. (83) did observe that apoAIV knockout male mice took up more food after a long fast and suggested that under certain conditions apoAIV might serve as a satiety signal.
Another feature of apoAIV that is pertinent to lipid absorption is its circadian expression (13, 38, 39). Serum apoAIV levels exhibit circadian rhythms. Further, intestinal apoAIV protein and mRNA levels are higher in the dark. We have shown that apoAIV mRNA levels show circadian expression in ad libitum-fed mice subjected to a 12 h light/dark cycle. Further, its expression is induced by food entrainment (13). Inductions in mRNA after food entrainment are suppressed when animals are placed in continuous dark or light. Diurnal changes and response to food entrainment were not seen in ClkΔ19/Δ19 mice. Therefore, it was concluded that apoAIV belongs to a family of genes that is regulated by both light and food (13). These changes in apoAIV expression are similar to those observed for MTP and plasma lipid levels (12, 14). Hence, we hypothesized that apoAIV might play a role in the diurnal regulation of lipid absorption and plasma lipid levels. To test this hypothesis, we studied diurnal changes in plasma lipids, lipid absorption, and intestinal MTP in apoAIV knockout mice and compared them with wild-type controls. Increases in intestinal MTP, intestinal triacylglycerol absorption, and plasma triacylglycerols were reduced at midnight in apoAIV knockout mice compared with wild-type controls, indicating that diurnal increases in apoAIV might optimize lipid absorption in the postprandial state (78). Mechanistic studies showed that apoAIV might contribute to optimum lipid absorption at mealtime by enhancing the expression of FoxO1 and FoxA2 transcription factors, as well as MTP. Peaks in the expression of FoxO1 and FoxA2 occurred before the maximum expression of MTP in mice fed during the day. Further, these transcription factors bound to the MTP promoter and enhanced its expression.
Therefore, temporal increases in apoAIV expression at night or at mealtime might increase the expression of FoxO1 and FoxA2, enhance the binding of these transcription factors to the Mttp promoter, and elevate MTP expression. Higher amounts of MTP, in congruence with augmentations in apoAIV after a fatty meal, might optimize packaging and secretion of lipids with chylomicrons by enterocytes, contributing to increases in plasma triacylglycerols.
apoAIV is a 46 kDa protein found associated with chylomicrons and HDLs, as well as in lipid-free form in the plasma (62, 63). In humans, it is mainly synthesized by the intestine, but rodent livers also synthesize apoAIV (64). apoAIV is incorporated on the surface of chylomicrons in the early stages of their biogenesis in the ER (65) and is secreted from the basolateral side of enterocytes. There is considerable evidence that dietary fat increases apoAIV expression; fat ingestion and intestinal perfusion of lipids increase synthesis and secretion of apoAIV in rodents (66)(67)(68). Lu et al. (69) have demonstrated that a high-fat diet induces apoAIV expression by 7-fold in newborn swine jejunum. Similarly, a 3-fold increase in apoAIV mRNA levels was observed in Caco-2 cells supplemented apically with lipid micelles (70). Fat feeding increases transcription of apoAIV. Mechanistic studies indicate that fat feeding increases the binding of HNF-4α to the apoAIV promoter (70, 71). Thus, fat feeding may induce apoAIV synthesis by increasing gene transcription.
Recently it has been shown that cAMP responsive element-binding protein, hepatocyte specific (CREBH) regulates apoAIV expression by binding to two CREBH elements present in the apoAIV promoter (72). CREBH is an ER membrane-anchored bZIP transcription factor that is mainly expressed in the liver and intestine (72, 73). CREBH deficiency reduces apoAIV expression in the liver and intestine. apoAIV expression is increased after the overexpression of CREBH in the liver and in cultured cells. A high-fat diet induces steatosis in the liver, leading to increased expression of CREBH and apoAIV (72, 73). Fasting is known to induce CREBH and apoAIV expression in the liver. However, it is not clear whether prolonged fasting increases CREBH and apoAIV in the intestine. It remains to be determined whether CREBH is involved in diurnal regulation of apoAIV expression.
Transgenic overexpression of human apoAIV in mice increases serum VLDL cholesterol and triacylglycerols in the fed state (68). An approximately 50-fold enhanced expression of apoAIV in a newborn swine enterocyte cell line, IPEC-1, increased secretion of nascent triacylglycerols and phospholipids with chylomicrons by 2- to 3-fold (69, 74). VerHague et al. (75) have shown that apoAIV enhances VLDL triglyceride production by enhancing core expansion, not particle number, in steatotic livers. Molecular mechanisms involved in this process are not fully understood, but Weinberg et al. (76) have suggested that apoAIV may help in the expansion of nascent lipoproteins into larger lipoproteins by maintaining interfacial tension and elasticity of the larger particles. Besides these biophysical mechanisms, Yao et al. (77) showed that overexpression of apoAIV increases MTP mRNA, protein, and activity involving pretranslational mechanisms. Pan et al. (78) showed that this increase in MTP might involve the transcription factors FoxO1 and FoxA2. In contrast to these studies, other studies suggest no increase in MTP after apoAIV overexpression (75, 79). Thus, how apoAIV assists in fat absorption at the molecular and biochemical levels needs further exploration.
In nocturnin-deficient mice, triacylglycerol secretion with chylomicrons was significantly reduced. Secretion of cholesterol was less with both chylomicrons and HDLs. Reductions in the secretion of lipids with chylomicrons were not due to reduced expression of MTP. In contrast, MTP levels were increased in the intestine, but not in the liver, of nocturnin-deficient mice (87). Thus, nocturnin-deficient mice fed a high-fat diet do not gain weight like the wild-type controls, most likely secondary to reduced fat absorption that is independent of changes in MTP activity. The importance of reduced fat absorption in nocturnin-deficient mice is supported further by the observations that feeding high-carbohydrate and low-fat diets has no discriminatory effect on weight gain in these mice.
Thus, it is likely that the lean phenotype of nocturnin-deficient mice fed a high-fat diet is secondary to lower fat absorption, and inhibition of nocturnin might be a way to reduce hyperlipidemia and hepatosteatosis.
FUTURE DIRECTIONS AND PERSPECTIVES
Most of the studies described above about the circadian regulation of intestinal lipid absorption have been performed using Clock mutant mice. The expression of a dominant negative mutant protein may or may not reflect the actual mode of action of Clock itself. Therefore, similar studies are needed in Clock−/− mice. Further, the role of Clock can be supplemented with studies in Bmal1−/− mice. Studies in Clock−/− and Bmal1−/− mice can provide substantial supportive evidence for the regulation of intestinal lipid absorption by the positive regulators of circadian rhythms. Complementary studies in mice deficient in the circadian repressors, Pers and Crys, are also needed to identify whether circadian activators and repressors have similar or opposite effects on intestinal lipid absorption. Some progress has been made in understanding the circadian regulation of proteins involved in lipid absorption by the Clock protein. However, there is a need for a comprehensive understanding of the different steps involved in lipid absorption and their regulation by various clock genes. In this regard, studies in mice deficient in different clock genes might provide invaluable information.
Nocturnin is not directly involved in lipid absorption but appears to modulate lipid absorption by temporally regulating unknown intestinal proteins involved in lipid absorption and transport. Identification of these proteins may provide valuable clues to the mechanisms that contribute to circadian regulation of intestinal lipid absorption during high fat consumption. Furthermore, these studies could identify new targets that might be amenable to control of lipid absorption.
In summary, intestinal lipid absorption is a diurnally regulated process. The mechanisms involved in the diurnal regulation of the different genes involved in intestinal lipid absorption have not been elucidated. Such knowledge might be useful in minimizing gastrointestinal adverse events associated with delivery of some drugs, and in identifying new targets and/or modalities to avoid intestinal discomforts associated with abnormal working schedules.
Nocturnin
Nocturnin was identified as an RNA transcript that shows significant circadian expression in the early part of the night in the retina of Xenopus (84, 85). Nocturnin is an exoribonuclease that deadenylates mRNA with specificity toward poly(A) nucleotides. Deadenylation of mRNA results in rapid degradation and inhibition of translation. Inhibition of translation probably occurs because proteins that interact with poly(A) tails are also known to interact with mRNA caps to form translationally optimal loops. Truncation of poly(A) tails might disrupt the binding of these proteins and the formation of translationally active loops. Both of these mechanisms reduce the amounts of protein translated from deadenylated mRNAs. Kojima, Sher-Chen, and Green (86) sequenced hepatic poly(A) mRNAs with different poly(A) tail lengths and showed that approximately 2.5% of the total mRNA contained different lengths of poly(A) tails and that their levels exhibit circadian variations. More importantly, rhythmicity in poly(A) tail length was correlated with rhythmic expression of proteins. Mechanisms for the origin of rhythmicity in the tail length of mRNA are not known, but it is possible that enzymes involved in the processing of poly(A) tails play a role in imparting circadian regulation. Thus, nocturnin could potentially regulate protein levels through posttranscriptional mRNA degradation or by inhibiting mRNA translation. Due to its circadian expression, nocturnin could alter mRNA stability and protein translation in a temporal fashion and cause specific time-dependent changes in protein levels. Specific transcripts that are temporally regulated by nocturnin have not been identified.
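The idea that a rhythmically expressed deadenylase could impose rhythmic protein output can be made concrete with a minimal model: translatable mRNA is produced at a constant rate but degraded at a rate that oscillates over a 24 h cycle with deadenylase activity. This is a hypothetical sketch, not a model from the review; the rate constants and the cosine form of the rhythm are arbitrary illustrative assumptions.

```python
import math

def simulate_mrna(hours=96.0, dt=0.01, s=1.0, d0=0.3, a=0.5):
    """Translatable mRNA pool under a rhythmically varying decay rate.

    s  : constant synthesis rate (arbitrary units per hour)
    d0 : mean decay rate (1/h), standing in for deadenylation-triggered
         degradation of the transcript
    a  : relative amplitude of the 24 h rhythm in decay rate
    """
    m, t, out = s / d0, 0.0, []   # start at the mean steady state
    while t < hours:
        # decay rate peaks once per 24 h cycle (rhythmic deadenylase)
        decay = d0 * (1.0 + a * math.cos(2.0 * math.pi * t / 24.0))
        m += (s - decay * m) * dt  # forward-Euler step
        out.append((t, m))
        t += dt
    return out
```

With these numbers the mRNA level swings roughly two-fold over each cycle, peaking a few hours after the trough in decay rate; in the same spirit, circadian nocturnin activity could translate into time-of-day-dependent levels of its target proteins without any rhythm in transcription.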
In mice, nocturnin is expressed in a wide variety of tissues. It is expressed throughout the small intestine with highest levels in the proximal portion ( 87 ). Jejunal nocturnin mRNA levels exhibit circadian changes with peak levels present at dark onset. Expression of nocturnin is increased within 2 h of an olive oil gavage, suggesting that it is induced after fat feeding. Thus, nocturnin is likely regulated by both light-and food-entrained oscillators.
Ablation of nocturnin in mice is not correlated with any obvious developmental, reproductive, and circadian abnormalities. However, these mice remain lean on a high-fat diet and do not develop hepatosteatosis ( 88 ). Nocturnin-defi cient mice absorb fewer triacylglycerols after an oral fat gavage ( 87 ). Further, they absorb lesser amounts of radiolabeled triacylglycerols and cholesterol and retain more radiolabeled lipids in the intestinal segments ( 87 ). In these mice, more lipid droplets are present in the perinuclear region of the enterocyte nuclei toward the apical side. Experiments in isolated enterocytes corroborated observations made in nocturnin-defi cient mice. Nocturnin-defi cient enterocytes retain more and secrete fewer radiolabeled lipids ( 87 ). Analysis of secreted lipoproteins revealed that triacylglycerol secretion with chylomicrons
CNNATT: Deep EEG&fNIRS Real-Time Decoding of bimanual forces
Non-invasive cortical neural interfaces have only achieved modest performance in the cortical decoding of limb movements and their forces, compared to invasive brain-computer interfaces (BCIs). While non-invasive methodologies are safer, cheaper and vastly more accessible technologies, their signals suffer from poor resolution in either the spatial domain (EEG) or the temporal domain (the BOLD signal of functional Near Infrared Spectroscopy, fNIRS). Non-invasive BCI decoding of bimanual force generation and of the continuous force signal has not been realised before, so we introduce an isometric grip force tracking task to evaluate the decoding. We find that combining EEG and fNIRS using deep neural networks works better than linear models to decode continuous grip force modulations produced by the left and the right hand. Our multi-modal deep learning decoder achieves 55.2 FVAF[%] in force reconstruction and improves the decoding performance by at least 15% over each individual modality. Our results show a way to achieve continuous hand force decoding using cortical signals obtained with non-invasive mobile brain imaging, with immediate impact for rehabilitation, restoration and consumer applications.
I. INTRODUCTION
Brain computer interfaces (BCIs) offer an alternative way to interact with our environment. They are especially relevant for people whose natural ability to physically interact has been lost or damaged. There has been substantial progress in the decoding of kinematic variables in BCI [1], [2], [3]. In contrast, human force decoding is less explored, even though force is essential for safe and meaningful mechanical interactions [4]. In addition, force decoding can provide more generalisable BCI decoders [5], especially in changing dynamic conditions [6]. Even in cases where the ultimate goal of a BCI is different from force decoding, understanding and exploiting signals that already exist in the brain might offer more intuitive control than learning new ones [7]. Force control is especially relevant for the hands. As the species with the greatest hand dexterity, we spend most of our day physically interacting with our environment through our hands. Indeed, for completely paralysed people the recovery of hand function is the most relevant priority [8].
To enable intuitive BCI hand force control we first need to understand how to decode force from cortical signals in healthy humans. Invasive BCIs have made considerable progress in force decoding in humans [9] and monkeys [1]. More recently, invasive BCIs showed the advantages of non-linear approaches in force decoding [3]. However, they have a low acceptance rate across potential users due to the surgical procedures they require. In contrast, for non-invasive BCI, force decoding still remains relatively unexplored. Discrete force characteristics have been decoded from multi-modal (EEG and fNIRS) signals. In [10] two force levels (20% and 60% of the maximum voluntary contraction, MVC) were decoded and force detection was performed for the right foot. In [11] and [12] a similar multi-modal approach was used for the classification of force imageries. These studies showed that the combined use of fNIRS and EEG provided a significant advantage in the classification of discrete force characteristics. In [13] EEG was used to decode unimanual force trajectories with a modest reconstruction performance (correlation ≈ 0.42). However, the multi-modal (EEG and fNIRS) approach has not yet been explored for the decoding of continuous force trajectories. Furthermore, to the best of our knowledge, no non-invasive BCI study has explored the simultaneous production of force with the right and the left hand, despite bimanual interactions being more frequent in daily activities.
We use here a multi-modal system (fNIRS and EEG) to continuously decode bimanual force trajectories and explore the advantages that deep learning (DL) introduces in the fusion of signals with different neurophysiological origins.
II. METHODS
A. Protocol and task
Ten participants (N = 10) were asked to perform a bimanual isometric contraction task. We provided the force profile that each subject had to track with each hand, with two characteristics (Fig. 1). First, both hands were either contracted or relaxed at the same time (relaxation vs contraction). Second, in the contraction state the hands had to dynamically track four force profiles with different crest orders, as presented in Fig. 1. The dynamic force that each hand had to track was different, which introduced contraction variability during the contraction state. The different way each hand was engaged during the dynamic force tracking enables a better representation of continuous force control, corresponding to more natural bimanual manipulations.
The participants received visual feedback on the desired contraction trajectory they had to follow with each hand. Four conditions (one condition per force profile) were used. Each condition represented a different force trajectory, which increased the variability of the brain signals and contractions recorded. All conditions lasted 10 s and all participants did 30 trials per condition. The order of the conditions was randomised, but each condition was performed in blocks of 30 trials. The highest level of contraction was set to 25% MVC and the lowest to 10% MVC. The trajectories for both hands were designed so that the average of the contraction during the 10 s corresponded to 17.5% MVC. Each trial was followed by a randomised resting period uniformly distributed between 15 and 21 seconds, to avoid phasic constructive interference of systemic artefacts, e.g. Mayer waves, in the brain responses. The refresh rate of the feedback on the screen was set to 100 Hz.
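The timing parameters above (10 s tracking trials, 30 trials per condition, rests drawn uniformly from 15 to 21 s) can be sketched as a simple schedule generator. The function name and structure below are illustrative, not the authors' code:

```python
import random

def build_condition_schedule(n_trials=30, trial_s=10.0,
                             rest_range=(15.0, 21.0), seed=0):
    """Trial timing for one condition block: each 10 s tracking trial
    is followed by a rest drawn uniformly from 15-21 s."""
    rng = random.Random(seed)
    t, onsets = 0.0, []
    for _ in range(n_trials):
        onsets.append(t)                     # trial onset time in seconds
        t += trial_s + rng.uniform(*rest_range)
    return onsets, t                         # onsets and total block duration

onsets, total = build_condition_schedule()
```

With these parameters, consecutive trial onsets are always between 25 s and 31 s apart (10 s trial plus 15-21 s rest).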
Participants were right-handed (confirmed by the Edinburgh inventory). The Imperial College Research Ethics Committee approved all procedures and all participants gave their written informed consent. The experiment complied with the Declaration of Helsinki for human experimentation and national and applicable international data protection rules.
B. Recordings
Twenty four (N = 24) co-aligned EEG and fNIRS channels covered the bilateral sensorimotor cortex and were used as the brain signals from which to decode the force signals generated with each hand (Fig. 1).
The fNIRS signals were recorded using a NIRScout system (NIRx Medizintechnik GmbH, Berlin, Germany). We used a total of 12 optodes per hemisphere (10 sources and 8 detectors in total) sampling at 12.5 Hz.
EEG was recorded using 24 channels of an ActiChamp amplifier (BrainProducts, Berlin, Germany) operating at 4 kHz (running software BrainVision, v1.20.0801). EEG was first downsampled to 250 Hz (with anti-aliasing low-pass filtering). Notch filters were applied at the mains (50 Hz) and fNIRS sampling (12.5 Hz) frequencies and their harmonics. EEG was finally high-pass filtered above 1 Hz using a 5th-order Butterworth filter. ICA was then applied to automatically remove EOG artefacts, rejecting one component when it had a correlation above 0.3 in absolute value.
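A minimal SciPy sketch of this filter chain is shown below. The ICA-based EOG removal is omitted, and the two-stage decimation and the notch quality factor Q = 30 are assumptions, as the text does not specify them:

```python
import numpy as np
from scipy import signal

def preprocess_eeg(eeg, fs_out=250):
    """Filter chain following the description in the text (without the
    ICA step). eeg: array of shape (n_channels, n_samples) at 4 kHz."""
    # Anti-aliased downsampling 4 kHz -> 250 Hz, done in two x4 stages
    x = signal.decimate(np.asarray(eeg, float), 4, axis=-1, zero_phase=True)
    x = signal.decimate(x, 4, axis=-1, zero_phase=True)
    # Notch out mains (50 Hz) and the fNIRS sampling rate (12.5 Hz)
    # together with their harmonics below the new Nyquist frequency
    for f0 in (50.0, 12.5):
        f = f0
        while f < fs_out / 2:
            b, a = signal.iirnotch(f, Q=30, fs=fs_out)
            x = signal.filtfilt(b, a, x, axis=-1)
            f += f0
    # 5th-order Butterworth high-pass above 1 Hz (applied zero-phase)
    b, a = signal.butter(5, 1.0, btype="highpass", fs=fs_out)
    return signal.filtfilt(b, a, x, axis=-1)
```

Feeding in a 4 s test signal containing a 5 Hz and a 50 Hz sinusoid, the 50 Hz component is strongly attenuated while the 5 Hz component passes.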
Two grip force transducers (PowerLab 4/25T, ADInstruments, Castle Hill, Australia) were used to record the force generated by each hand simultaneously recording at 1 kHz. The force signals were first resampled to 250 Hz, then bandpass filtered between 0.01 and 10 Hz with a Butterworth filter of order 3 and then again high-pass filtered with an elliptical filter of order 1 above 0.01 Hz. Drift was further eliminated removing the linear drift per trial. Force measures were finally converted to contraction values using the recorded MVC before the experiment started.
C. Preprocessing
All channels were used in the analysis and needed preprocessing. Optical intensities were low-pass filtered below 0.25 Hz with a 7th-order elliptical filter. Changes in optical densities per wavelength, ΔOD_ij^λ(t), were obtained using the Beer-Lambert law.
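The optical-density step can be sketched as follows. The choice of the time-averaged intensity as the reference is an assumption, and the further conversion to oxy-/deoxyhemoglobin concentration changes (which requires extinction coefficients and differential pathlength factors) is not shown:

```python
import numpy as np

def delta_od(intensity, i_ref=None):
    """Optical-density change per wavelength and source-detector pair,
    dOD(t) = -log10(I(t) / I_ref). By default the time-averaged
    intensity of each channel serves as the reference I_ref."""
    intensity = np.asarray(intensity, dtype=float)
    if i_ref is None:
        i_ref = intensity.mean(axis=-1, keepdims=True)
    return -np.log10(intensity / i_ref)

# A halving of detected intensity corresponds to an OD increase of log10(2)
od = delta_od([[2.0, 2.0, 1.0]], i_ref=2.0)
```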
We additionally recorded the hemodynamic activity of the scalp skin on the forehead using a NONIN 8000R (Tilburg, The Netherlands). The skin hemodynamics reflect the variations of hemoglobin due to heart and breathing activity but do not contain brain hemodynamic responses. We used the scalp hemodynamics to rule out that pulse and breathing were predictive of force. All epochs were extracted from 4 s before the "Go" instruction to 14 s after.
D. Decoding methods
The fNIRS signals, the EEG Hilbert features and the force were resampled so that all measures could be aligned in time. The EEG Hilbert features and fNIRS were then used to build a linear and a deep learning (cnnatt) model to decode the bimanual force. Both decoders used 800 ms of brain-signal history (EEG Hilbert features and fNIRS) to decode the bimanual force.
The decoding can be expressed as f_t = φ(X_{t−800ms,...,t}), where f_t represents the vector of bimanual left (L) and right (R) forces, f_t = [f_{L,t}, f_{R,t}]^T, at time t, and X_{t−800ms,...,t} the matrix of fNIRS and EEG features from t − 800 ms to t. The linear decoder was trained using the Lasso method. Our deep learning cnnatt model, including CNN and attention layers (Fig. 2), was trained using the mean squared error loss, the Adam optimiser and early stopping.
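The 800 ms history window amounts to stacking lagged copies of the feature matrix into one design-matrix row per time step. The sketch below illustrates this with a toy signal; the 25 Hz feature rate and the synthetic force target are assumptions, and ordinary least squares is used as a dependency-free stand-in for the paper's Lasso:

```python
import numpy as np

def lagged_design(features, n_lags):
    """One row per time step, holding the last n_lags samples of every
    feature: X_t = [x_t, x_{t-1}, ..., x_{t-n_lags+1}] flattened."""
    n_t, _ = features.shape
    rows = [features[t - n_lags + 1:t + 1][::-1].ravel()
            for t in range(n_lags - 1, n_t)]
    return np.asarray(rows)

# 800 ms of history at an assumed 25 Hz feature rate -> n_lags = 20.
rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 3))                # toy fNIRS/EEG features
force = feats[:, 0] + 0.5 * np.roll(feats[:, 1], 2)  # toy force target
X = lagged_design(feats, n_lags=20)
y = force[19:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)            # OLS stand-in for Lasso
pred = X @ w
```

Because the toy target depends linearly on lags inside the window, the linear fit recovers it almost exactly.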
III. RESULTS
We evaluate the force reconstruction performance of our multi-modal decoding approach using the fraction of variance accounted for, FVAF[%] = 100 · (1 − Σ_{i=1}^{N} (y_i − ŷ_i)² / Σ_{i=1}^{N} (y_i − ȳ)²). Here, y_i is the i-th sample of the real signal, ŷ_i is the corresponding sample of the predicted signal, ȳ is the average of the real signal and N is the number of samples in the signal. The FVAF[%] takes values in (−∞, 100%]. A 100% FVAF represents a perfect reconstruction, 0% represents a reconstruction that is only as good as using the average of the signal as predictor, and negative FVAF[%] values represent even worse reconstructions. We use EEG and fNIRS signals to decode bimanual force trajectories. First, at the population level, combining fNIRS and EEG increases the reconstruction performance by at least 15 FVAF[%] over each individual modality (Fig. 3). Second, to understand the dependency of each decoding approach (linear or cnnatt) on each of the multi-modal input features, we perform a sensitivity analysis. Figure 4 shows the comparison of each model's sensitivity to the perturbation of each input feature. The sensitivity is computed using a perturbation approach in which we randomly shuffle the time dimension of the input feature we want to analyse and measure the impact on the force decoding FVAF[%]. This test evaluates how important the temporal evolution of the features and their auto-correlation are for the decoding (in contrast to their amplitude distribution). To standardise this measure, we compute the percent change in performance, with a 0% change representing the performance of the unperturbed signals and 100% the maximum performance reduction when all features are perturbed. Namely, the higher the sensitivity to an input feature perturbation, the more dependent the model is on that feature for an accurate decoding.
As we can see in Figure 4, both models are sensitive to perturbations of any of the features (positive changes) but never to the same extent (100% level) as when all features are perturbed simultaneously. This shows that the temporal structure of the signal is important in the decoding and suggests that the decoding has a causal nature.
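The FVAF metric and the shuffle-based sensitivity measure can be sketched as follows. The toy predictor is illustrative, and this sketch reports the raw FVAF drop rather than the normalised percent change used in Figure 4:

```python
import numpy as np

def fvaf(y, y_hat):
    """Fraction of variance accounted for, in percent: 100% is a perfect
    reconstruction, 0% is no better than predicting the mean of y."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 100.0 * (1.0 - ss_res / ss_tot)

def shuffle_sensitivity(predict, X, y, feature, rng):
    """Drop in FVAF when one feature's time dimension is shuffled,
    destroying its temporal structure but not its amplitude distribution."""
    base = fvaf(y, predict(X))
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return base - fvaf(y, predict(Xp))
```

A predictor that ignores a feature shows zero sensitivity to shuffling it, while shuffling a feature the predictor relies on produces a large FVAF drop.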
The comparison of the linear and the DL system shows that the latter strikes a better balance in its dependency between EEG and fNIRS signals, both inside the grey vertical bar (Fig. 4, multiple comparison of means, Tukey correction, p > 0.01 for the right hand and p < 0.01 for the left hand). In contrast, for the same dependency range (horizontal grey bar) the linear decoder has a much higher dependency on fNIRS than on EEG (multiple comparison of means, Tukey correction, p < 0.01 for left and right hand). Both systems can be applied in real time, but cnnatt is slower (21.0±0.0 ms per trial) than the linear model (2±0.5 ms). Finally, we verify that breathing or pulse rhythms do not have predictive power on force generation. Namely, when skin hemodynamics are used to decode force with a similar linear model trained for this purpose, they yield a significantly lower reconstruction performance (FVAF[%] = 1%) than when fNIRS is used (FVAF[%] = 32.5%, t-test, p < 0.001).
IV. DISCUSSION
We set ourselves the challenge of reconstructing continuous bimanual grip force production in a dynamic task. We tackled the challenge by using multi-modal non-invasive signals (EEG & fNIRS). We introduce a bimanual isometric task that opens up a new BCI challenge. Using a linear model, we first show that the fusion of multi-modal signals brings a 15 FVAF[%] increase compared to using only one of the two signals. To explore the breadth of non-linear models, we crafted a deep learning architecture (cnnatt). We show that our cnnatt deep learning model improves the bimanual force reconstruction by 5 to 8 FVAF[%] points compared to the linear system (Fig. 3). Both models preserve specificity to the decoded hand, as shown by the decay in reconstruction performance when the FVAF[%] is computed on the opposite hand to the one the model was trained to decode.
Our sensitivity results show that the multi-modal linear decoder is more dependent on fNIRS than on EEG, while DL better exploits all EEG bands (Fig. 4). In particular, in combination with fNIRS, DL achieves a better exploitation of the delta and beta bands (p < 0.05, Kruskal-Wallis test, Tukey correction), which supports [13] and expands their results to the bimanual multi-modal case. We note that the distribution of fNIRS amplitudes is centred and has a higher standard deviation, while the EEG Hilbert features have a skewed and narrower distribution and the target force distribution is bimodal, which the convolutional layers appear to capture en passant.
V. CONCLUSION
We used Deep Learning to solve the multi-modal sensor fusion and decoding problem and were able to decode the continuous force generated in a dynamic bimanual grip force task. Combining EEG and fNIRS is particularly challenging, as the signals differ by 3 orders of magnitude in time scale (ms vs s); we therefore used the power of representational learning in deep learning. Previous approaches used the advantages of Gaussian Process regression to achieve efficient continuous multi-modal decoding [4]; however, there the EEG and MMG signals operated on similar timescales. Deep Learning is data-hungry and thus a challenge for BCI, but our approach can be directly mapped to the data-efficiency-improving meta-learning [14] and multi-subject transfer learning [15] recently demonstrated in Deep EEG BCI. We show that non-invasive human interfacing can overcome continuous decoding challenges usually thought to be the realm of invasive BCI. Combining EEG and fNIRS has direct implications for BCI for restoration of movement and robotic control [16], but also for real-world and consumer use [17].
Implementation of reliability-based thresholds to excavation of shotcrete-supported rock tunnels
ABSTRACT Modern tunnelling in rock relies heavily on information from monitoring and observations during construction as a means to reduce the considerable uncertainty that originates from a lack of knowledge about the ground conditions. By formally integrating the monitoring information into the structural safety evaluation, a comprehensive risk-based design framework can be achieved. The establishment of relevant thresholds against unacceptable structural behaviour is a key aspect of this work. This paper presents an extensive application example based on real-case data, showing how a reliability-based threshold can be established to ensure the serviceability of a shotcrete lining in a rock tunnel in a challenging geological setting. The authors address and discuss practical approaches to consider the model uncertainty related to the confinement loss caused by the tunnel front advance, as well as the transformation uncertainty related to the use of indirect monitoring information. The established threshold is discussed in the context of the observational method.
Introduction
Monitoring and observation of the structural behaviour during construction are key components of modern tunnelling, because of the considerable prevailing uncertainties of the geological and geotechnical conditions in the ground; this has paved the way for observation-based tunnelling approaches (Schubert 2008; Stille and Holmberg 2010; Spross and Larsson 2014; Palmström and Stille 2015; Bjureland et al. 2017; Spross et al. 2018). These uncertainties originate either from natural variability in the ground conditions (aleatory uncertainty) or from a lack of knowledge of the conditions (epistemic uncertainty), where the latter is caused mainly by the limited extent of the pre-investigations performed. The knowledge gained from monitoring and observations during construction is therefore vital in satisfying the structural safety requirements, as additional information generally implies that the epistemic uncertainties are reduced and that the calculated structural reliability is improved.
If the tunnel has been designed with reliability-based methods (e.g. Langford and Diederichs 2013; Lü et al. 2017; Kroetz et al. 2018; Napa-García, Beck, and Celestino 2018; Bjureland et al. 2019), the knowledge gained can be accounted for straightforwardly through Bayesian updating, as demonstrated recently for tunnels by Feng and Jimenez (2015) and Feng et al. (2019), and for other geotechnical structures by Schweckendiek and Vrouwenvelder (2013), Li et al. (2016) and Contreras and Brown (2019). However, the primary purpose of performing monitoring and observations of the structural performance is normally to gain information about when specific measures are needed to ensure that the considered limit states are not attained. This requires that the monitoring or observations be linked to a threshold that triggers the implementation of safety-enhancing modifications to the structure. Consequently, a threshold needs to be established so that the triggered measure can be implemented before the corresponding limit state is attained. From a structural reliability point of view, the threshold shall be established such that the probability of failure of the structural component is satisfactory (i.e. the calculated probability of failure, p_F, is less than the target, p_F,T), as long as the monitoring result does not violate the threshold.
Although such thresholds often have a key role in the execution of a geotechnical design, there is, however, a considerable lack of practical guidance on how to establish the threshold so that it provides a sufficient safety margin with respect to the problem at hand; for example, the application guidelines to Eurocode 7 (Frank et al. 2004) state only that "it is the designer's responsibility to prepare and communicate specifications for any such monitoring". Spross and Johansson (2017) showed in a simplified example how such reliability-based thresholds can be determined and applied to monitor the deformation of a rock pillar in the context of the observational method. The procedure for determining reliability-based thresholds was later theoretically extended by Spross and Gasch (2019) to a more general class of engineering problems, by making use of the Finite Element Method and Subset simulation. There is, however, a need for more comprehensive case examples that discuss the practical implementation of such thresholds to complex geotechnical structures, where the epistemic uncertainty typically is considerable. In this paper, we, therefore, investigate how reliability-based thresholds can be established to ensure the structural reliability of the shotcrete lining in a rock tunnel. In particular, we discuss the practical management of the threshold in the design of tunnels with the observational method (Peck 1969;CEN 2004) and the practical challenges in accounting for the involved model errors. The study uses the challenging geological conditions that were encountered in the Stockholm bypass rock tunnel project (European highway E4), where the excavation passed through a fault zone under lake Mälaren, west of Stockholm, Sweden.
2. Method to establish reliability-based thresholds
2.1. Threshold definition
Describing a limit state as a function G(X) = 0, with X = [X_1, ..., X_m] being a vector of the relevant random variables, Spross and Johansson (2017) showed how a set of threshold values, x_alarm, can be determined from an equality, so that the thresholds facilitate the structural safety requirements: P(G(X) ≤ 0 | h(X) > 0) = p_F,T (1). In short, the procedure utilises the fact that a potential monitoring result can be described in terms of a probability of violating the threshold. For example, assuming monitoring of X_1, the probability becomes P(h(X) = x_1,alarm − X_1 ≤ 0), where the function h(X) can be interpreted as a limit state function of its own. Thereby, the threshold value x_1,alarm can be determined using any structural reliability method, as Equation (1) may be reformulated into the following equality, after taking the new information Z from the monitoring into account (Straub 2014, 2015): p_F|Z = P(G(X) ≤ 0 ∩ h(X) > 0) / P(h(X) > 0) = p_F,T (2), where the numerator can be seen as a parallel system of multiple failure modes and the denominator as a single failure mode. (In the simplest case, with only one monitored parameter, Equation (2) is simply solved for the only unknown variable, x_1,alarm.) Here, h(X) is defined so that critical behaviour corresponds to exceedance of the threshold, but thresholds against too-low readings can also straightforwardly be determined. Note that if more than one threshold is to be established, Equation (1) becomes underdetermined, prompting additional assumptions or information (Spross and Gasch 2019). Such assumptions may, for example, concern the probability of exceeding the threshold. In this paper, however, we limit the study to one threshold.
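The idea of solving the conditional-reliability equality for the threshold can be illustrated with a crude Monte Carlo sketch on a toy Gaussian limit state G = R − S, where the load effect S is the monitored variable. The distributions and the target probability below are assumptions for illustration, not values from the case study:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
# Toy limit state G = R - S: resistance R and a monitored load effect S
# (stand-ins for the paper's shotcrete model).
R = rng.normal(3.0, 1.0, N)
S = rng.normal(0.0, 1.0, N)
fail = (R - S) <= 0.0
p_target = 5e-3  # assumed target probability of failure p_F,T

def p_f_given_ok(x_alarm):
    """P(G <= 0 | S < x_alarm): probability of failure conditional on
    the monitored variable not violating the threshold."""
    ok = S < x_alarm
    return np.mean(fail & ok) / np.mean(ok)

# Bisection on x_alarm until the conditional failure probability
# matches the target reliability level.
lo, hi = -1.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if p_f_given_ok(mid) > p_target else (mid, hi)
x_alarm = 0.5 * (lo + hi)
```

As long as the monitored reading stays below x_alarm, the (estimated) conditional failure probability stays at the target level; a laxer threshold gives a larger conditional failure probability.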
Algorithm based on Subset simulation
The procedure to establish reliability-based thresholds from Equation (2) requires an iterative approach. This can be solved with crude Monte Carlo simulation (Spross and Johansson 2017), but when the limit state function requires input from a computationally demanding structural model (e.g. a Finite Element model), crude Monte Carlo becomes very inefficient. Spross and Gasch (2019), therefore, developed a general computational algorithm that efficiently computes such thresholds. The complete algorithm is provided in Appendix A and outlined in the following. The algorithm applies Subset simulation (Au 2001, 2014), which is an adaptive form of Monte Carlo simulation that is particularly efficient for estimation of low failure probabilities when there are many random variables (Straub, Papaioannou, and Betz 2016). This efficiency facilitates the evaluation of the structural response with computationally demanding Finite Element models. A recent geotechnical application was presented by Gao et al. (2019).
In short, Subset simulation calculates p_F as a product of larger conditional probabilities by defining p_F from a number of nested intermediate failure events, such that F_0 ⊃ F_1 ⊃ ... ⊃ F_M, where F_0 is a certain event and F_M is the structural failure described by the event {G(X) ≤ 0}. The probability of failure can then be formulated as p_F = P(F_M) = ∏_{k=1}^{M} P(F_k | F_{k−1}), in which F_k = {G(X) ≤ c_k}, where the limit c_k for each intermediate event corresponds to a predefined probability p_0 for the event F_k. The simulation entails a stepwise pushing of the sampling (by decreasing the value of c_k) toward the failure event of interest, where c_M = 0, corresponding to the investigated limit state. This is facilitated using Markov chain Monte Carlo (MCMC) simulation, by letting the samples that satisfied the previous intermediate event F_{k−1} be the seeds of the next sampling round. Considering that p_0 is a predefined probability, p_F can be estimated as p_F ≈ p_0^{M−1} · p_M, where p_M is an estimation of the last conditional probability P(F_M | F_{M−1}), given by p_M = (1/N) Σ_{i=1}^{N} I_{F,M}(x_{M−1}^{(i)}), where I_{F,M} is the indicator function of F_M that is evaluated with the N samples in the last round of sampling, x_{M−1}, which were generated conditionally on the event F_{M−1}. This general sampling procedure for Subset simulation was adjusted in the algorithm for establishing reliability-based thresholds (Appendix A), to account for the information provided by the monitoring, by using the approach suggested by Straub, Papaioannou, and Betz (2016).
To simulate the conditional probability of Equation (2), the intermediate events in the Subset simulation are sampled conditionally on the information Z, which gives the calculated probability of attaining the limit state G(X) ≤ 0, conditional on the information Z that the threshold described by h(X) has not been violated: p_F|Z = P(G(X) ≤ 0 | Z). Conceptually, the algorithm uses an iterative process to find the threshold x_1,alarm that satisfies Equation (2), based on a first guess of x_1,alarm (suggestions for reasonable starting values and revised guesses are provided in Spross and Gasch (2019)). In each iteration, the described Subset simulation procedure is performed using independent-component MCMC (Au and Wang 2014), aiming to minimise the error, ε, between p_F|Z and p_F,T. The iterations end when ε is less than a predefined tolerance, t, which adjusts how accurately the structural safety corresponds to p_F,T when considering the information that the monitoring result is not violating x_1,alarm. In the practical application, the reliability-based threshold should be interpreted such that as long as x_1,alarm has not been exceeded, the structural safety is acceptable with respect to the considered limit state.
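A stripped-down sketch of plain Subset simulation (without the conditioning on Z or the threshold iteration) is given below. A preconditioned Crank-Nicolson proposal, which is valid for a standard normal input space, is used as a simplified stand-in for the independent-component MCMC of Au and Wang (2014):

```python
import numpy as np

def subset_simulation(g, d, p0=0.1, n=2000, max_levels=10, seed=0):
    """Subset-simulation sketch for p_F = P(g(X) <= 0) with X ~ N(0, I_d)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    gx = np.apply_along_axis(g, 1, x)
    p_f = 1.0
    for _ in range(max_levels):
        c = np.quantile(gx, p0)          # intermediate limit c_k
        if c <= 0.0:                     # failure event reached: estimate p_M
            return p_f * np.mean(gx <= 0.0)
        p_f *= p0                        # accumulate p_0 per level
        seeds = x[gx <= c]               # samples satisfying F_k
        x = seeds[rng.integers(len(seeds), size=n)].copy()
        gx = np.apply_along_axis(g, 1, x)
        for _ in range(5):               # a few MCMC sweeps inside F_k
            prop = 0.8 * x + 0.6 * rng.standard_normal((n, d))
            gp = np.apply_along_axis(g, 1, prop)
            acc = gp <= c                # accept only moves staying in F_k
            x[acc], gx[acc] = prop[acc], gp[acc]
    return p_f * np.mean(gx <= 0.0)
```

For example, for g(x) = 3 − (x_1 + x_2)/√2 the true failure probability is Φ(−3) ≈ 1.3e-3, a level that crude Monte Carlo with the same number of model evaluations would estimate poorly.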
Implementation details
In this study, all steps of the algorithm outlined in Section 2.2 and detailed in Appendix A are implemented in the general computational modelling software COMSOL Multiphysics (Comsol 2018). A crude sampling approach is adopted in all realisations of the random variables in X. It is important to emphasise that our implementation puts no limitation on the structural model evaluated in steps 2b and 3dii of Table A.1 (in Appendix A), which can be 3-D or 2-D, and include, for example, material nonlinearity and time-dependent effects. In fact, the incorporation of our algorithm in a general Finite Element software gives us access to all functionality of the specific code when setting up a physical model. The algorithm is not limited to structural cases, and could just as well be applied for monitoring the inflow of groundwater to a tunnel or the concentration of some chemical species in the groundwater close to the construction site. Any measurement error in the observation of X_1 can be considered straightforwardly by increasing its variability correspondingly in the algorithm when samples of X_1 are generated or evaluated from the numerical model. Details on models for measurement errors are provided by, for example, Baecher and Christian (2003). Generally, increasing measurement errors leads to stricter thresholds.
Description of the Stockholm bypass tunnel project
The Stockholm bypass project will provide a new route for the European highway E4 and will connect the Northern and Southern parts of Stockholm County with three lanes in each direction, separated into two parallel 18-km-long tunnels. The tunnels are excavated in rock under lake Mälaren and the Lovön and Kungshatt islands west of Stockholm, Sweden (Figure 1). At the passage under lake Mälaren, south of Kungshatt, the excavation intersects a regional strike-slip fault zone, where the south side also has rotated in a reversed dip-slip movement. The fault zone is approximately 200 m wide at the passage. The interpreted results from the pre-investigations indicate a number of parallel weakness zones mainly oriented along the fault line, but with occasional subhorizontal weakness zones in the tunnel direction, where blocks have rotated upwards. The pre-investigations indicate that the weakness zones, which consist of gouge, cataclasites, mylonites or breccias, may be limited in extension and surrounded by areas of higher quality rock mass.
When passing the regional fault zone, the tunnel will be located 64 m below the lake surface. The weak rock mass is overlain by till and clay, which forms the lake bottom. The engineering geological forecast identified two typical rock qualities, denoted as rock classes IV and V, within the fault zone. Rock class IV describes mainly the quality of the rock mass in the transition zone of the fault zone, while rock class V describes mainly the quality of the rock mass in the core of the fault zone. This paper analyses only rock class IV, which is characterised by an adjusted Rock Mass Rating (RMR) value within the range 38-59 with 40 being the assigned typical value. (For reference, rock class V has an adjusted RMR in the range 28-33 with 29 being the assigned typical value.) The Swedish "adjusted RMR" is used in early design phases; it is based on Bieniawski (1989), but sets ground water conditions to "completely dry" and discontinuity orientation to "very favourable", as these conditions are not yet known.
The tunnels are excavated with drilling and blasting and temporarily supported with a systematic pattern of rock bolts and a layer of shotcrete. Where poor rock quality is expected, a pipe umbrella system is installed as pre-support. Because of the poor rock mass quality and the large width of the tunnel, the excavation is divided into a sequence of gallery and bench excavations, with an advance rate of 2 m per sequence. As the expected rock cover is limited, extensive rock grouting is performed to limit the risk of stability issues caused by flowing ground conditions. A permanent cast concrete lining will be installed at a later stage, to account for long-term issues related to, for example, potential swelling clay. Further details of the geological conditions and the applied technical solution have been presented by Stille et al. (2019).
Limit state definition
The excavation of the two tunnels is subject to several potential failure modes, including the collapse of the pipe umbrella system, failure of temporary support, face collapse and flowing ground. Instability caused by the attainment of the rock mass strength due to large in-situ stress is judged to be the main failure mechanism. This causes a loose core of rock that may lead to a progressive, large-scale collapse. A complicating factor is the stress conditions, which are affected by the different advance rates of the two tunnel faces. The complete design is, therefore, rather complex.
For this reason, the case analysed in this paper is a simplification of the real tunnel design, to better highlight the features of an underground rock excavation that can be related to the reliability-based thresholds. We consider, therefore, only the failure mode related to the exceedance of the shotcrete compressive strength, as this is a good indicator of the structural behaviour of the main support system. Moreover, we limit the analysis to the cross section of one tunnel with a simplified geometry, without considering any effects from the other parallel tunnel. As illustrated in Figure 2, the tunnel is assumed to be subjected to an overburden of 27 m of rock with rock class IV, 26 m of soil and 11 m of lake water. The support consists of 300 mm of shotcrete and 5-m rock bolts. As we investigated potential roof collapse, we did not consider the pipe umbrella support system in the finite element model, as the main purpose of the pipe umbrellas is to prevent face collapse.
Shotcrete can fail due to either attained compressive or attained tensile strength. In the general case, multiple failure modes are analysed. Here, however, we analyse only compressive failure, which gives the following limit state function:

G = f_c − s  (7)

where f_c is the compressive strength of the shotcrete and s is the maximum compressive stress that any material point in the shotcrete support is subjected to; using the standard definition of stress in solid mechanics, we have that

s = max over X in V_s of (−σ_3(X))  (8)

where σ_3 is the minimum (compressive) principal stress, X is the coordinate vector and V_s is the shotcrete domain. Thus, s depends on the complex structural interaction between the rock support, the rock mass properties and the in-situ stresses in the rock. This limit state corresponds to unsatisfactory serviceability due to the inward deformation of the shotcrete arch. Effects from bending deformation or local deformation around, for example, rock bolts, are not well described by this limit state, and rather imply a tensile failure of the shotcrete support. Extension of our procedure to consider such failure modes as well is conceptually straightforward, by considering multiple limit state functions.
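A minimal numerical sketch of this limit state check follows; the shotcrete strength and the sampled principal stresses are illustrative values, not data from the project:

```python
import numpy as np

def limit_state(f_c, sigma3_field):
    """Limit state G = f_c - s, where s is the largest compressive
    stress magnitude at any material point of the shotcrete domain.
    sigma3_field: minimum (compressive, negative) principal stress
    sampled over the shotcrete, in MPa."""
    s = np.max(-np.asarray(sigma3_field))  # compression taken positive
    return f_c - s

# Illustrative: 35 MPa shotcrete, principal stresses from a model run
sigma3 = [-12.0, -28.5, -7.3, -31.2]       # MPa, compression negative
g = limit_state(35.0, sigma3)
# g > 0: limit state satisfied; g <= 0: compressive failure indicated
```

The governing stress is simply the most compressive value anywhere in the domain, which is why a single scalar s suffices in the limit state.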
Description of threshold violation as complementary limit state function
To verify structural safety, the vertical shotcrete displacement, d, is to be monitored at the crown of the tunnel. Displacement monitoring is common in tunnel construction, especially when tunnelling in complex geological conditions with large geotechnical uncertainty, such as the fault zone passage under Lake Mälaren in the selected case. The observed displacements are used as an indirect indication of the attainment of the limit state in Equation (7), which therefore can be rewritten into

G = f_c − C·d  (9)

where C is a transformation factor using observations of d in the crown as indirect observations of s anywhere in the shotcrete support. The C is given by

C = c̄·ε_C  (10)

where ε_C is a model error describing the uncertainty in this transformation. The C can here be interpreted as an uncertain stiffness-like parameter of the shotcrete-rock interaction at the measurement point, which accounts for the behaviour of the entire support system. The magnitude and determination of C are further discussed in the next chapter. By defining the event of identifying allowable shotcrete stress with the function

M = d_alarm − d  (11)

where d_alarm is a threshold for unacceptable vertical crown displacement, Equation (2) can be applied to find the d_alarm that must not be violated, in order to ensure that the p_F,T is satisfied:

P(G ≤ 0 | M ≥ 0) ≤ p_F,T  (12)

Geometries of the rock surface and the shotcrete thickness, as well as the shotcrete properties, are assumed to be without irregularities or spatial variability. While partly an effect of a need for mathematical convenience in this study, the assumption is judged to be a reasonable simplification for the analysed large-scale serviceability limit state of a rather thick lining (300 mm) at the design stage. We base this on recent research studies on the effect of spatial variability of shotcrete properties and thickness on its failure behaviour (Bjureland et al. 2020; Sjölander, Ansell, and Malm 2021). They indicate that, even for local blockfall failures, there is a substantial averaging effect. Furthermore, Malmgren and Nordlund (2008) studied numerically the effect of having an uneven rock surface and an evenly thick shotcrete lining. For this case, they found that the surface roughness is an important factor for the shotcrete behaviour, as it inflicted bending of the shotcrete at the edges of the rock surface in their model. In reality, however, rock surface irregularities tend to be evened out in the spraying of the shotcrete, especially for thicker linings, reducing the bending. (For ultimate limit states or thin linings, the effect of irregularities in the rock surface and shotcrete thickness may be more prominent and therefore require that spatial variation be accounted for in the model, to capture the effect of local weaknesses.)

[Table 1 (excerpt): Confinement loss at support installation, λ_front: triangular distribution with a_tr = 0.49, b_tr = 0.80, c_tr = 0.53. Notes: (a) σ_h and σ_H are based on in-situ stress measurements in the Stockholm region; the randomly generated values of σ_h and σ_H are increased with respect to the depth below the rock surface by a constant 13.75 and 37.5 kPa/m, respectively. (b) Determined as described in the section on estimation of transformation model error. (c) Determined as described in the section on estimation of confinement loss at support installation.]
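The underlying threshold search, finding the largest d_alarm such that the failure probability conditional on the displacement staying below the threshold does not exceed p_F,T, can be sketched with a brute-force Monte Carlo stand-in on synthetic data (the paper instead uses the subset simulation algorithm of Appendix A):

```python
import numpy as np

def find_threshold(delta, g, p_target):
    """Return the largest d_alarm such that
    P(G <= 0 | delta <= d_alarm) <= p_target,
    estimated from paired Monte Carlo samples (delta_i, G_i)."""
    delta = np.asarray(delta)
    g = np.asarray(g)
    best = None
    for d in np.sort(delta):              # candidate thresholds
        kept = delta <= d
        if np.mean(g[kept] <= 0.0) <= p_target:
            best = d                      # largest feasible so far
    return best

# Synthetic model: failure (G <= 0) becomes likely at large displacements
rng = np.random.default_rng(1)
delta = rng.uniform(0.0, 10.0, 5000)             # crown displacement, mm
g = 6.0 - delta + rng.normal(0.0, 1.0, 5000)     # limit state samples
d_alarm = find_threshold(delta, g, p_target=0.005)
```

The brute-force scan over all observed displacements is quadratic in the sample count; it is only practical here because the toy "model" is a one-line formula, which is exactly why the paper resorts to subset simulation for the real FE model.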
Set-up of structural model
Dimensions of the simplified version of the tunnel used to set up our finite element model are shown in Figure 2. The rock bolts are placed with a spacing of 1 m, and relevant boundary conditions and loads are depicted in the figure. Only one-half of the tunnel-rock geometry was included in the model by assuming a vertical symmetry plane. The geometry was discretised with a mesh as shown in the left part of the figure (only showing the mesh close to the tunnel) and with an assumption of 2-D plane strain. In total, the mesh consists of 3123 elements, and, given a quadratic displacement field, the finite element model includes 19350 degrees of freedom (DOFs). Note that the rock bolts are placed along mesh element boundaries and, thus, are assumed to fully interact with the rock/shotcrete, and therefore add no additional DOFs to the model.
Both the rock bolts and the shotcrete were assumed to be elastic, while the rock mass was considered elastoplastic. A Drucker-Prager yield criterion with non-associated flow was used for the rock mass, with parameters determined from the Mohr-Coulomb criterion. The elastic constants of the rock mass and shotcrete, and the cohesion, c, and friction angle, φ, of the rock mass were considered random variables, while the dilatancy angle, ψ, controlling the plastic flow in the rock mass was assumed fully correlated with the friction angle, such that ψ = φ/8.
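The Mohr-Coulomb-to-Drucker-Prager parameter matching can be sketched as below. The paper does not state which matching variant its FE code uses, so the compressive-meridian fit and the input values are assumptions for illustration:

```python
import math

def drucker_prager_from_mc(c, phi_deg, fit="compressive"):
    """Drucker-Prager constants (alpha, k) matched to Mohr-Coulomb
    cohesion c and friction angle phi. The 'compressive' fit matches
    the MC surface along the triaxial-compression meridian; the
    'tensile' fit matches the triaxial-extension meridian."""
    phi = math.radians(phi_deg)
    if fit == "compressive":
        denom = math.sqrt(3.0) * (3.0 - math.sin(phi))
    else:
        denom = math.sqrt(3.0) * (3.0 + math.sin(phi))
    alpha = 2.0 * math.sin(phi) / denom
    k = 6.0 * c * math.cos(phi) / denom
    return alpha, k

# Illustrative (assumed) rock mass values, SI units
alpha, k = drucker_prager_from_mc(c=0.3e6, phi_deg=35.0)
psi_deg = 35.0 / 8.0   # dilatancy fully correlated with friction: psi = phi / 8
```

The choice of matching meridian changes alpha and k noticeably, so in practice it should follow the convention of the FE code in use.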
In 2-D modelling of rock tunnels, a key parameter is the inward displacement of the rock mass, u, here taken as the vertical displacement at the measurement point indicated in Figure 2. The principal behaviour of this displacement and its relation to d are shown in Figure 3: a supported rock mass in a considered tunnel section exhibits smaller displacement than an unsupported rock mass as the excavation continues. A design challenge lies in the assessment of how much inward displacement, u_0, and corresponding confinement loss, λ_front, has already occurred in front of the tunnel face. This structural behaviour can be analysed with longitudinal displacement profiles (LDPs). However, generating LDPs for the general case requires full 3-D models to capture effects from, for example, non-hydrostatic stress conditions and complex tunnel geometries (Vlachopoulos and Diederichs 2009; Langford and Diederichs 2013). Because of the substantial computational effort that this would require in a reliability analysis, we have applied a simplified 2-D approach to evaluate probabilistically u_0 and the corresponding confinement loss, using the findings of Vlachopoulos and Diederichs (2009), as described in the following.
In our 2-D model, the confinement loss is considered by using an incremental analysis, where the support given by the rock to be excavated is incrementally removed by reducing its stiffness and stress. This means that the tunnel was actually included in the discretised geometry, as is shown in the left part of Figure 2. This incremental procedure can be described by the confinement loss parameter, λ, which ranges from λ = 0, simulating full confinement, to λ = 1, simulating no confinement (González-Nicieza et al. 2008). Formally, the stress tensor σ and the fourth-order elasticity tensor D in the domain to be excavated can be described by

σ = (1 − λ)σ_0,  D = (1 − λ)D_0  (13)

where σ_0 is the in-situ stress, and D_0 describes the stiffness of the intact rock. By incremental increase of λ toward no confinement in the model, the resulting u can be plotted as a ground reaction curve. Moreover, the support structure (shotcrete and bolts) is installed in a stress-free state at an intermediate step, where the support from the excavated rock has been partially removed. Assuming that the rock support is installed at the tunnel front, λ_front describes the point in the incremental relaxation of the applied pressure in the 2-D model at which the support is to be implemented in the model. The λ_front was considered a random variable and its determination is further discussed in the next section. To summarise, the FE simulation can be described by the following steps:

1. Initialise in-situ stress in the non-excavated rock domain.
2. Incrementally remove the support from the tunnel by reducing its stiffness and stress (Equation (13)).
3. Insert the rock support system in a stress-free state given by λ_front.
4. Continue the incremental removal procedure until λ = 1.

[Figure 3. Principal behaviour of inward tunnel displacement as a tunnel is excavated. The u denotes the total displacement of the rock mass, while δ denotes the displacement of the shotcrete (Bjureland et al. (2017), CC BY-NC-ND 4.0, https://creativecommons.org/licenses/by-nc-nd/4.0/, with minor adjustments).]
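The incremental confinement-loss procedure can be illustrated with a closed-form elastic stand-in for the ground reaction curve. The real model is elastoplastic and solved with FE; the circular-tunnel elastic formula and the parameter values below are assumptions for illustration only:

```python
import numpy as np

def ground_reaction_curve(p0, r_t, g_shear, lam):
    """Elastic radial convergence of a circular tunnel under hydrostatic
    in-situ stress p0, with the support pressure relaxed to (1 - lam)*p0
    (mimicking Equation (13)): u(lam) = lam * p0 * r_t / (2 G)."""
    return lam * p0 * r_t / (2.0 * g_shear)

# Sweep lam from full confinement (0) to no confinement (1)
lam = np.linspace(0.0, 1.0, 11)
u = ground_reaction_curve(p0=2.5e6, r_t=8.0, g_shear=2.0e9, lam=lam)
# u grows from 0 to u_max = p0 * r_t / (2 G) as confinement is lost
```

In the elastic case the curve is a straight line; it is the plastic yielding of the rock mass that bends the real ground reaction curve and makes the installation point λ_front matter.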
Estimation of confinement loss at support installation
To estimate the displacement u_0 at the tunnel front, we assessed the ratio of the maximum radius of the plastic zone to the approximate equivalent tunnel radius (r_P/r_T) to be approximately 2, based on initial simulations of a 2-D model of the analysed tunnel section (unsupported). According to Vlachopoulos and Diederichs (2009), r_P/r_T = 2 corresponds to u_0 being 25% of the expected maximum deformation of the unsupported tunnel, u_max. For 100 simulated ground reaction curves of the 2-D model without support, using crude Monte Carlo, we then investigated the uncertainty in the confinement loss that corresponded to u_0 = 0.25u_max, that is, the confinement loss λ_front (Figure 4). Based on the 100 observations of λ_front, we assigned it a triangular distribution, as presented in the left chart of Figure 4 and Table 1.
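The estimation of the confinement loss at support installation can be sketched as follows. The power-law curve shape and its exponent range are hypothetical stand-ins for the 100 simulated ground reaction curves, and the triangular fit uses a simple method of moments for the mode:

```python
import numpy as np

# For each synthetic ground reaction curve u(lam) = u_max * lam**m,
# find the confinement loss where u = 0.25 * u_max, then fit a
# triangular distribution (bounds from the sample, mode from the mean).
rng = np.random.default_rng(0)
m = rng.uniform(1.0, 2.0, 100)           # hypothetical curve-shape exponents
lam_front = 0.25 ** (1.0 / m)            # solves u_max * lam**m = 0.25 * u_max

a = lam_front.min()                      # triangular lower bound
b = lam_front.max()                      # triangular upper bound
c = 3.0 * lam_front.mean() - a - b       # mode, from mean = (a + b + c) / 3
```

With real elastoplastic curves, the curve shapes (and hence the spread of the fitted triangular distribution) come from the randomly sampled rock mass parameters rather than from an assumed exponent.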
Estimation of transformation model error
The magnitude of the transformation model error C in Equation (10) was evaluated from the crude Monte Carlo simulations of the structural model (step 2.b in Table A.1 in Appendix A), by calculating the variability in the ratio s/d, based on the output from the finite element analyses. The variability of C is shown in Table 1 as the COV of C. The C was then implemented in the limit state evaluations in the subsequent steps 2.c and 3.d.ii in the algorithm.
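The COV evaluation of the transformation factor can be sketched from paired model outputs; the synthetic stress-displacement pairs below stand in for the crude Monte Carlo FE results:

```python
import numpy as np

# Estimate the transformation factor C = s / d and its coefficient of
# variation from paired Monte Carlo outputs (synthetic data here).
rng = np.random.default_rng(2)
d = rng.uniform(2e-3, 8e-3, 1000)                    # crown displacement, m
s = 4.0e9 * d * rng.lognormal(0.0, 0.15, 1000)       # max stress, with scatter

c_samples = s / d
cov_c = c_samples.std(ddof=1) / c_samples.mean()     # COV of C
```

Because C is a ratio of two model outputs, its COV captures how imperfectly a single crown measurement represents the stress state of the whole lining.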
Simulation constants
To run the algorithm that determines d_alarm for the analysed case, the following simulation constants need to be defined: the target probability of failure, p_F,T; the probability of the intermediate events in the subset simulation, p_0; the tolerance, τ, of the error between the conditional failure probability and p_F,T for the proposed threshold; the number of samples in the subset simulation, N; and the constant k, which ensures that there are enough samples in the initial crude Monte Carlo simulation (see details in Appendix A). As summarised in Table 2, the p_F,T was set to 0.005, as the analysed limit state can be seen as a serviceability limit state for the temporary rock support. The p_F,T used corresponds to suggested target probabilities for other serviceability limit states (Fenton, Naghibi, and Griffiths 2016), but in practice the required safety level is an issue to study further in the development of reliability-based design codes. The p_0 was set to 0.1, following recommendations by Zuev et al. (2012), and τ was set to 0.1. Based on preliminary runs of the algorithm, k was set to 3. Figure 5 shows the simulation results for the last, accepted threshold, d_alarm = 4.4 mm. Figure 5(a) shows the output space in terms of deformation quantities, while Figure 5(b) shows the same simulations in terms of strength-stress. The grey dots and the contour lines represent the initial crude Monte Carlo simulations (step 2 in the algorithm in Appendix A). The black dots represent the accepted data points after truncation at the proposed threshold value (step 3b). The red dots represent the last subset simulation level, that is, the simulations used to determine the probability of event F*_M (cf. Equation (6) and step 3d). The probability of satisfying the threshold, and therefore also satisfying the p_F,T, is 71%. Without adhering to an established threshold, the calculated p_F would be 0.012 for the analysed design solution in this limit state in rock class IV.
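For readers unfamiliar with subset simulation, the following toy example (a generic Au-Beck-style implementation with p_0 = 0.1, not the paper's Appendix A algorithm) estimates a small failure probability for a limit state whose exact answer is known:

```python
import numpy as np
from math import erf, sqrt

def g_fun(x):
    return 3.0 - x          # linear limit state, beta = 3

def subset_simulation(n=2000, p0=0.1, seed=0):
    """Estimate P(G <= 0) for x ~ N(0, 1) by chaining conditional levels,
    each with probability ~p0, instead of one huge crude MC run."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    p = 1.0
    for _ in range(20):                       # maximum number of levels
        g = g_fun(x)
        order = np.argsort(g)                 # smallest G = closest to failure
        n_seed = int(p0 * n)
        c = g[order[n_seed - 1]]              # intermediate threshold
        if c <= 0.0:                          # failure domain reached
            return p * np.mean(g <= 0.0)
        p *= p0
        seeds = x[order[:n_seed]]
        chains = []                           # modified Metropolis, target N(0,1)
        for s0 in seeds:
            cur = s0
            for _ in range(n // n_seed):
                prop = cur + rng.normal(0.0, 1.0)
                if rng.random() < min(1.0, np.exp(0.5 * (cur**2 - prop**2))):
                    if g_fun(prop) <= c:      # stay inside the current level
                        cur = prop
                chains.append(cur)
        x = np.array(chains[:n])
    return p

p_f = subset_simulation()
p_exact = 0.5 * (1.0 - erf(3.0 / sqrt(2.0)))   # Phi(-3)
```

With p_0 = 0.1, each level "zooms in" by a factor of ten, so a probability of order 10^-3 is reached with only a few thousand model evaluations per level, which is what makes the approach feasible for expensive FE models.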
This illustrates the advantage of designing with planned monitoring in an observational approach: if the monitoring were not accounted for in the structural safety assessment, a more conservative design would be required to satisfy p_F,T = 0.005. Figure 6 illustrates four evaluations of the finite element model (marked in Figure 5), where (a) represents a typical structural behaviour, (b) represents failure behaviour (i.e. violation of the limit state function), (c) represents a simulation with low horizontal in-situ stress that however did not violate the limit state and (d) represents a failure behaviour despite limited plasticised rock mass volume. The difference in structural behaviour between the figures indicates the complexity of the case: there are many possible combinations of input data, and the resulting behaviour in the evaluation of the structural model in each simulation is very difficult to predict in advance.

[Table 2. Simulation constants used in the example: p_F,T = 0.005; p_0 = 0.1; τ = 0.1; k = 3.]
Effects of model errors and other simplifications
Model errors affect the calculated threshold considerably for this application. In the presented example, we introduced a transformation model error (C) to account for the uncertainty related to the fact that we used the displacement measurement in the tunnel crown to capture f_c exceedance anywhere in the lining. A comparison of the two graphs in Figure 5 illustrates the significance of this error clearly: the data points representing displacements below the threshold in Figure 5(a) (black dots) exhibit a considerable scatter in the strength-stress output plot of Figure 5(b). The uncertainty represented by C depends partly on the extent of the monitoring programme. By measuring the shotcrete displacement at the expected location of the largest s, instead of at the tunnel crown, a smaller transformation error would be expected. Another model error is introduced through our simplified approach to assess the occurred confinement loss at the tunnel front at the time of support installation (λ_front in Figure 4). In this calculation example, we derived a triangular distribution for this model error. For simplicity, we did not, however, account for any uncertainty in the estimation of the ratios r_P/r_T and u_0/u_max that were needed to assess the uncertainty of λ_front. A straightforward approach to this issue would be to also assign a model uncertainty to u_0/u_max, for example, by assigning this ratio a uniform distribution between, say, 0.2 and 0.3, before evaluating λ_front, instead of using the deterministic value 0.25.
We also disregarded the potential correlations between λ_front, the rock mass properties, and the in-situ stress conditions; ground conditions that are prone to cause substantial wall deformation (large u_max), because of significant plastic ground behaviour, may potentially be associated with more curved shapes of the normalised ground reaction curve (Figure 4). Such curved shapes may, in turn, be associated with less remaining confinement at the tunnel front (i.e. larger λ_front).
The calculation example assumes that the shotcrete is installed directly at the tunnel front and achieves full strength instantly. In practice, the tunnel support would be installed normally some distance behind the tunnel front and gain strength as the shotcrete cures. The principles of how to account for these aspects in tunnel support design have been discussed for deterministic design by, for example, Chang (1994), Carranza-Torres and Fairhurst (2000) and Oke, Vlachopoulos, and Diederichs (2018), but their application to reliability-based design remains an issue for future studies.
In addition to the aforementioned model errors, there may also be intrinsic errors in the applied FE model. Such model errors have not been quantified and considered in this paper, which imposes a degree of arbitrariness on the calculated threshold. We note, however, that the effect of intrinsic model errors remains a general issue for future research in the practical application of reliability-based analyses of rock engineering structures.
Accuracy of threshold and challenges with large uncertainties
The iterative nature of the applied algorithm (Appendix A) implies that a proposed threshold is accepted if the conditional event of not violating the proposed threshold provides a calculated p_F that is close enough to p_F,T. This tolerance, τ, between p_F and p_F,T is defined by the user. The larger τ is, the faster the algorithm is expected to find an acceptable threshold value, but the larger the potential deviation from the required p_F,T becomes when the threshold is applied. We believe a quite large τ can often be acceptable in practice, especially for serviceability limit states, meaning that the established threshold only needs to provide alarms related to the correct order of magnitude of the required p_F,T.
In case there is only limited information regarding the site conditions, the uncertainty of the involved geotechnical parameters may be considerable. While this does not pose a problem for the algorithm itself, its accuracy might be impaired for cases where a non-linear FE-model is considered. With large uncertainty represented in the probability distributions, the likelihood increases of drawing sample combinations that characterise extremely low-quality rock mass with virtually no strength. For such samples, the chosen structural model may not converge for a non-negligible number of evaluations, that is, decreasing the number of valid data points. This would indicate a need for another approach for discretisation of the real conditions, for example, by using another material model or numerical method. Such considerations would, of course, also be needed in a deterministic sensitivity analysis for the same situation. For practical application, we recommend to monitor the number of non-converged model evaluations to ensure sufficient accuracy of the evaluated threshold.
Practical implementation with the observational method
The definition of the threshold (Equation (1)) implies that a violated threshold by definition means exceeding the p_F,T. For the analysed limit state, the shotcrete in a considered cross section is loaded sequentially as the tunnel excavation progresses with each blasting round. Measuring the occurred inward displacement of the shotcrete (d) after each blasting round, a logarithmic pattern can be expected between d and the normalised distance to the tunnel front, l/r_T (Figure 7). After a few blasting rounds with corresponding displacement measurements at distances [l_1, l_2, ..., l_i] from the tunnel front, a prediction of the final displacement can be made, using a linear regression model with logarithmic transformation on the format

d = a + b·ln(l/r_T)  (14)

where a and b are regression coefficients. Based on initial work by Stille (2009), Bjureland et al. (2017) showed how such measurement data can be analysed and implemented within a reliability-based framework for the observational method: using Equation (14) in a Bayesian framework, the final displacement can be predicted through extrapolation, which will become increasingly accurate as more data are collected. If this prediction indicates that the derived threshold d_alarm is likely to be violated, the responsible decision maker for the tunnel support is alerted to take action, for example, by ensuring that additional support is installed. The use of reliability-based thresholds as a part of the observational method can thereby integrate the shotcrete design into the larger risk-based framework for tunnel design and construction that was proposed by Bjureland et al. (2020).

[Figure 7. Expected logarithmic inward displacement behaviour with increased distance to the tunnel front. Using Bayesian regression analysis on the retrieved data points, a prediction can be made of the potential violation of the threshold.]
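The monitoring-side prediction can be sketched with ordinary least squares on log-transformed distances. The paper uses a Bayesian regression; all measurement values and the extrapolation distance below are illustrative:

```python
import numpy as np

# Fit d = a + b * ln(l / r_T) to the rounds measured so far, then
# extrapolate to a distant section and compare with d_alarm.
r_t = 8.0                                    # equivalent tunnel radius, m
l = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # distance to the front, m
d_meas = np.array([1.1, 2.0, 2.5, 2.9, 3.2]) # measured crown displacement, mm

b, a = np.polyfit(np.log(l / r_t), d_meas, 1)  # slope, intercept
d_final = a + b * np.log(10.0)                 # predict at l / r_T = 10

d_alarm = 4.4                                  # mm, from the example
alert = d_final > d_alarm                      # flag a likely violation
```

A least-squares fit gives a point prediction only; the Bayesian version additionally yields a predictive distribution, so the alert can be stated as a probability of violating d_alarm rather than a yes/no flag.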
In our application example, we considered only one measurement point in the cross section and let the transformation factor C account for the behaviour of the complete shotcreted area. However, the roof and the walls are typically subjected to different structural behaviour, and moreover, the structural behaviour is affected considerably by the elastoplastic behaviour of the rock mass, as indicated in Figure 6. It is, therefore, likely to be necessary in a real-world application to implement more than one point of measurement and to consider multiple limit state functions, such as both tensile and compressive failure. This requires several thresholds to be derived. However, as previously mentioned, having more than one threshold makes Equation (1) underdetermined. How to find the optimal set of threshold levels in x alarm remains an issue for future research.
Conclusion
We have presented an extensive calculation example of how reliability-based thresholds can be established for the inward displacement of a shotcrete lining in a rock tunnel. The example considered a serviceability limit state function, for which a finite element model was needed to evaluate the ground-structure interaction associated with the loss of confinement from the advancing tunnel and the installed support. Because of the considerable computational effort required to evaluate the finite element model probabilistically, an algorithm using subset simulation was employed. The effect of model uncertainty related to the confinement loss and transformation uncertainty related to information from indirect monitoring was investigated and discussed extensively. In particular, we find that there is a need for future research on how to consider the uncertainty in the modelling of confinement loss in 2-D models, as full longitudinal displacement profiles (LDPs) in 3-D may still be unreasonable to employ in probabilistic analyses.
The derived threshold may serve as a key component in designing a tunnel with the observational method. Our analysis in this paper showed how applying such thresholds could facilitate less conservative designs, as the thresholds ensure sufficient safety by being related to a target reliability and accounting for prevailing uncertainties. Thereby, the threshold can become an integrated part of the risk management work performed in the tunnel project.

Appendix A (excerpt)

ii. Run the structural model for each set of samples in b̂_k to find the complete set of proposed samples, x̂_k.
iii. Set, for all N parameter sets of proposed samples,
iv. Order the samples in x_k in increasing order of magnitude of their limit state value G(x) and let c_{k+1} be the p_0-percentile of the ordered samples. Let the N_c first samples be denoted x^(seed)_{k+1} and let the corresponding b^(seed)_{k+1} contain the next seeds of the Markov chains.
v. k = k + 1.
e. Identify the number, N_F, of sample sets for which x_{k−1} ∈ F*_M and
f. Calculate the error between p̂_F and p_F,T as ε_1 = |p̂_F − p_F,T| / p_F,T.
Relativistic proton levels from region AR 12673 (GLE #72) and the heliospheric current sheet as a Sun-Earth magnetic connection
On 2017 September 10, neutron monitors (NMs) located at ground level and high latitudes detected an increase in counting rate associated with solar energetic particle (SEP) emission from an X8.2-class solar flare and its associated CME. This was the second-highest flare of the current solar cycle. The origin was the active region AR 12673, located at the edge of the west solar disk and magnetically poorly connected with Earth. However, there was a peculiar condition: the solar protons accelerated by the CME shocks were injected within a heliospheric current sheet (HCS) region while Earth was crossing this region. We show that HCS and SEP propagation are often closely related. If the source locations of SEPs are within or close to the HCS, the HCS plays the role of a Sun-Earth magnetic connection. SEPs drift along HCS paths, and are also transported by the HCS across a wide range of longitudes. In some cases, and especially when Earth crosses the HCS sector, a fraction of these particles can reach Earth with a harder energetic particle flux, triggering a ground-level enhancement (GLE). The blast on 2017 September 10, which triggered GLE #72, was the second in the current solar cycle. We show that the two GLEs, including all sub-GLEs observed in the current solar cycle, come from solar explosions that happened within an HCS structure; this behavior is also observed in the GLEs of the previous solar cycle. In general, solar explosions from active regions poorly connected with Earth can trigger GLEs through the mechanism described above. In all cases, the SEP drift processes along HCS structures provide efficient particle transport, allowing the observation of these solar transient events.
INTRODUCTION
The ground-level enhancements (GLEs), typically in the MeV-GeV energy range, are sudden increases in cosmic ray intensities registered by neutron monitors (NMs), which are ground-based instruments that detect a variety of secondary particles, mainly neutrons, produced by primary protons penetrating the Earth's atmosphere (Miroshnichenko et al., 2008; Gopalswamy et al., 2010). These enhancements can also be registered by other types of ground-based detectors, such as air shower detectors and muon telescopes (L3 Collaboration, 2008; Wang, 2009; Nitta et al., 2012). In most cases GLEs occur during intense X-class solar flares as well as fast (above ∼1000 km s⁻¹) coronal mass ejections (CMEs) (Gopalswamy et al., 2010; Gopalswamy et al., 2012). However, there are also some cases of GLEs associated with weaker flares and slower CMEs (Cliver, 2006). The GLEs are triggered by solar energetic particles (SEPs), one population of particles in the interplanetary space from the Sun to 1 AU, mainly accelerated by shock waves during CMEs. However, particle acceleration can also occur directly at flare sites (Cane et al., 2002; Li et al., 2013). SEPs are classified according to particle origin or acceleration region (Kallenrode, 1998, 2003; Tsurutani et al., 2009). SEPs have energies from the MeV to the GeV range and occur in events that last from some hours to a few days. SEP events are much more frequent at times of solar maximum than during solar minimum. The occurrence of SEP events is directly related to flares and CMEs (Kallenrode, 2003). Two distinct classes of SEP events are recognized: impulsive (accelerated in flares) and gradual (accelerated at CME-driven shocks) (Cane et al., 1986; Kallenrode, 2003).
A number of individual GLEs have been studied in detail in the literature (Nitta et al., 2012; Debrunner et al., 1997; Miroshnichenko, 2001; Dorman et al., 2005; Muraki et al., 2008; Usoskin et al., 2011; Kurt et al., 2013), reflecting the diversity in their features and original conditions. The exact understanding of the origin and the mechanism of high-energy acceleration of solar particles in large SEP events is one of the main topics of the physics governing GLE events (Miroshnichenko et al., 2008; Reames, 2009; Vashenyuk et al., 2011; Aschwanden et al., 2012). Rather high energies (in the GeV region) may be reached at the early stages of shock evolution (Zank et al., 2000), as well as originate from shocks driven by CMEs propagating through the corona and interplanetary space (Ellison & Ramaty, 1985). Usually, the CMEs provide conditions for seed particles to be re-accelerated.
Since 1950, the observation of solar energetic particles from solar flares has been done with ground-level experiments, such as the NMs (Meyer et al., 1954; Simpson, 2000; Moraal et al., 2000) as well as the solar neutron telescope network (Watanabe et al., 2009; Valdes-Galicia et al., 2009) around the world. These observations have yielded a lot of new information: for instance, the anti-correlation between solar activity and the flux of galactic cosmic rays, the existence of prompt and late emission in flares, and correlations of the cosmic ray intensity with CMEs and other solar disturbances crossing the Earth (Moraal et al., 2000; Chupp et al., 1987).
Nowadays, particles accelerated in the Sun's environment in the MeV energy region can be detected by space-borne instruments. The measurement of high-energy protons in space is achieved by the High Energy Proton and Alpha Detector (HEPAD) (Onsager et al., 1996) on GOES, which provides data on differential fluxes in three channels in the energy region between 350 MeV and 700 MeV, the integral flux above 700 MeV, and the X-ray flux in two wavelengths (http://www.oso.noaa.gov/goes/index.htm). In addition, gamma-rays and hard X-rays from solar flares are also detected by Fermi GBM, in the energy region of up to 300 keV.
Energetic particles from solar flares that can be measured at the Earth's surface are rare events; fewer than 100 have been observed by NMs in the past 70 years (http://www.nasa.gov/mission_pages/sunearth/news/particles-gle.html). Not all of the solar explosions observed by satellites can be measured at the Earth: favorable conditions are necessary, i.e., a Sun-Earth magnetic connection. This condition is in most cases achieved when the solar active region is close to the equator and in the west region, but not far from the central region. In addition, there can be dissipation of the radiation by the interplanetary magnetic field (IMF), and particles can be deflected or captured by the Earth's magnetic field or absorbed by the Earth's atmosphere.
It is important to note that not all ground-based detectors can observe GLEs simultaneously. For instance, in most cases the NMs that observed the GLEs of the last solar cycle 23 were located in or near polar regions, with a geomagnetic rigidity cutoff of ∼1 GV or less (Shea & Smart, 2012). There are exceptions, such as the GLE of January 20, 2005, which was observed by detectors in regions with much higher cutoff rigidities (Bostanjyan et al., 2007). Solar flares and CMEs occur whenever there is a rapid large-scale change in the Sun's magnetic field. Among the 16 GLEs observed in the previous solar cycle 23, 14 were linked with X-class flares, one with an M7.1-class flare, and one with a C2.2-class flare. It was reported that in 15 of these GLEs the associated blast also emitted a CME (Cliver, 2006; Kahler, 2001; Cliver et al., 1983). As already indicated, the detection of energetic particles from solar flares and CMEs at ground level depends on a good magnetic connection between the Sun and the Earth. Most solar flares associated with GLEs are located in the western sector of the Sun (about 58% of GLE-associated solar flares originated from southwest active regions and 36% from northwest active regions (Firoz et al., 2011)), where the IMF is well connected to the Earth (Reames, 1999). At least two more factors may contribute to GLEs: the presence of prior CMEs and the magnetic field connection of the acceleration region to Earth. The time profiles of the observed SEP events depend on the longitude of the original solar active region (helio-longitude). For example, events from western source regions tend to rise to their peak more quickly than those from eastern source regions (Nitta et al., 2012; Reames, 1999; Cane et al., 1988, 2003). The active solar areas are distributed over much broader longitudes than those of impulsive SEP events in flares (Reames, 1999).
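The "good magnetic connection" of western flares follows from Parker-spiral geometry: the spiral footpoint longitude on the Sun depends on the solar wind speed. A minimal sketch of that standard estimate (not part of this paper's analysis; a constant radial wind speed and a sidereal rotation period of 25.4 days are assumed):

```python
import math

def parker_footpoint_longitude(v_sw_km_s, r_au=1.0):
    """Estimate the heliographic longitude (degrees west of the observer's
    central meridian) of the Parker-spiral footpoint connected to a point
    at heliocentric distance r_au, for a constant radial wind speed."""
    omega_sun = 2.0 * math.pi / (25.4 * 86400.0)   # solar rotation rate, rad/s
    r_m = r_au * 1.496e11                           # heliocentric distance, m
    phi_rad = omega_sun * r_m / (v_sw_km_s * 1e3)   # spiral winding angle
    return math.degrees(phi_rad)

# For a typical 400 km/s wind the well-connected footpoint sits near W61,
# consistent with most GLE sources lying in the western sector.
print(round(parker_footpoint_longitude(400.0)))
```

A faster wind winds the spiral less tightly, moving the well-connected longitude closer to the central meridian.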
The first GLE of the current solar cycle occurred on 17 May 2012, when a strong M5.1 solar flare was observed near the west limb of the Sun. This position was well connected by IMF lines linking the Sun to near-Earth space (Li et al., 2013). The flare was accompanied by an O-type CME (http://www.nasa.gov/mission_pages/sunearth/news/News050912-Mflares.html), and a category S2 solar radiation storm was reported (http://www.solarmonitor.org). High-energy solar particles were recorded by near-Earth spacecraft and some ground-based NMs. For the entire period of ground-based observations (since 1942 (Cliver et al., 1982)), it was the 71st GLE (Li et al., 2013) and the first of the current solar cycle 24 (Klein et al., 2012; Kudela, 2013; Gopalswamy et al., 2013). The halo CME was not fully directed toward Earth, and geomagnetic conditions on 17 May 2012 were only slightly disturbed (Papaioannou et al., 2013): the estimated 3 hr planetary index was Kp = 2 and the ring current index Dst = 34 nT.
It is worth noting that GLE71 showed some peculiarities (Li et al., 2013; Gopalswamy et al., 2013; Balabin et al., 2013; Mishev et al., 2014). For instance, it was associated with a moderate flare (M5.1), smaller than that of any cycle 23 GLE, while the CME itself was very fast (Gopalswamy et al., 2013). This means that many aspects of the event still need to be further explored and explained.
In this paper, we show that the signals detected by spacecraft on 10 September correspond to two different phases of the solar flare. The energy release during the impulsive phase of the flare was observed as an increase of gamma rays and hard X-rays, registered by RHESSI and Fermi GBM and peaking at ∼15:47-16:48 UT. The GOES proton detectors and ground-based detectors observed signals corresponding to the gradual (or extended) phase of the flare, peaking at ∼16:07 UT. Solar protons accelerated by shock waves with an average speed of 948 km/s triggered a radiation storm of up to S3 (strong) level on the NOAA storm scale, triggering GLE #72. Because the blast happened at the extreme western edge of the solar disc, i.e., in a non-geoeffective region, the observation of these solar energetic particles at Earth was possible only thanks to the heliospheric current sheet, which played the role of a direct Sun-Earth magnetic connection.
This article is organized as follows. In Section 2 the solar activity in September 2017 from the (so far) largest sunspot group of the current solar cycle, AR 2673, is described. In Section 3 we present an analysis of the origin of the relativistic proton levels and of GLE #72, observed on 10 September. We highlight the peculiar characteristics of the event, such as the location of AR 2673 at the time of the blast and the HCS playing the role of a magnetic connection. The connection between the flare, the CME, and their counterparts observed as hard X-rays (Fermi GBM), soft X-rays (GOES 15), and relativistic particles (GOES protons), including GLE #72 detected by NMs at high latitudes, is presented in Section 4. Finally, in Section 5 we present our summary and conclusions.
ACTIVITY OF SUNSPOT AR 2673 IN SEPTEMBER 2017
The active region AR 2673 became visible on 26 August and, during its rotation toward the Sun's west limb, developed a "beta-gamma-delta" magnetic configuration, meaning it was capable of producing strong eruptions. Indeed, AR 2673 produced 25 M-class and 3 X-class flares, including the two largest of the current solar cycle (cycle 24). Two X-class solar flares erupted from AR 2673 on 6 September 2017. The first was a long-duration X2.2-class flare at 9:33 UT. The blast was associated with a narrow CME ejected in the western region of the solar disc; however, the average shock wave speed was low (419 km/s), and the shocks were injected mainly below the ecliptic plane. This flare was the first X-class flare since 5 May 2015. The second, at 12:02 UT, was the strongest eruption of the current solar cycle, reaching X9.3 class. The blast was associated with a full halo CME (CME #0017 in the CACTus catalog), with an average shock wave speed of 624 km/s; high-speed shock waves with speeds of up to 1950 km/s, however, were injected both above and below the ecliptic plane. The presence of type IV and type II radio emission associated with the blast indicates a strong coronal mass ejection and is associated with solar radiation storms. After a travel time of 35 h, the halo CME arrived at Earth on 7 September at ∼23:00 UT, triggering a strong G3 geomagnetic storm. This was the second strongest geomagnetic storm of the current solar cycle. The Dst geomagnetic index reached -142 nT in the transition from 7 to 8 September; the top panel of Fig. 2 summarizes the situation.
In addition, the strong geomagnetic storm triggered a Forbush decrease (FD). An FD is a transient decrease, followed by a gradual recovery, in the observed galactic cosmic ray intensity. The perturbed geomagnetic field during geomagnetic storms disperses the cosmic rays in the vicinity of the Earth, producing a drop in the counting rate of ground-level detectors. The FD intensity increases as the geomagnetic rigidity cutoff of the detector site decreases; thus, FDs are more intense at detectors located at high latitudes. The FD associated with the G3 geomagnetic storm reached an intensity variation of up to 11% at the South Pole NM. The bottom panel of Fig. 2 summarizes the situation.
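The FD magnitude quoted above is simply the fractional drop of the NM counting rate below a quiet-time baseline. A minimal sketch of that calculation (the counting rates here are hypothetical, not actual South Pole NM data):

```python
def fd_magnitude(counts, baseline):
    """Forbush-decrease magnitude: minimum counting rate during the event
    expressed as a percentage drop below the quiet-time mean."""
    return 100.0 * (baseline - min(counts)) / baseline

# Hypothetical hourly NM counts (arbitrary units) around a storm onset.
quiet_mean = 1000.0
event_counts = [998, 985, 940, 905, 890, 910, 935, 960]
print(f"FD magnitude: {fd_magnitude(event_counts, quiet_mean):.1f}%")  # 11.0%
```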
On 10 September, AR 2673 erupted again when it was at the extreme west limb. It produced the second strongest flare of cycle 24, reaching X8.2 class, associated with a full halo CME (angular width of 360 degrees). The CME was ejected within an HCS region, and at a time when the Earth was crossing that HCS region. The HCS provided a direct Sun-Earth magnetic connection, so the solar energetic particles accelerated by the CME shocks started arriving at Earth, reaching the S3 (strong) condition on the NOAA storm scale and triggering the second Ground Level Enhancement (GLE #72) of the current solar cycle, detected by some NMs, as shown in the bottom panel of Fig. 2. These observations are detailed in the next section. On 10 September 2017, almost out of view from our planet as it rotated around the Sun's western edge, the active region AR 2673 erupted again at 15:35 UT. The blast was an X8.2-class flare, the second largest of the current solar cycle. An image in the extreme ultraviolet was captured by the Solar Dynamics Observatory and is shown in the left panel of Fig. 3. This blast was the most spectacular from AR 2673, not only because it happened at the extreme west of the solar limb, but also because its associated CME (CME #0037) involved a significant release of plasma and magnetic field from the solar corona, saturating the images obtained by LASCO on the SOHO spacecraft, as shown in the right panel of Fig. 3. In general, halo CMEs (angular width of 360 degrees) occur when the active region that originates the CME is close to the central region of the solar disc. That is not the case for CME #0037; there was another factor: the CME was ejected within an HCS region. In the next section we detail this spectacular condition. Figure 4 shows the shock velocities as a function of the principal angle "pa" for CME #0037 on 10 September. The data are from LASCO coronagraph images and were automatically generated by CACTus (Robbrecht et al., 2009).
The CACTus pa parameter correlates with the projected latitude of the CME and is defined as the middle angle of the CME as seen in the white-light images.
ORIGIN OF RELATIVISTIC PROTON LEVELS AND THE GLE #72
The pa parameter also depends on the orientation of the CME relative to the observer (Lagrange point L1). The principal angle is measured counterclockwise from North (in degrees); thus, the projected latitudes are only an estimate of the true direction of propagation. Values of pa close to 90 and 270 degrees represent zero latitude: pa = 90° and pa = 270° mean that the middle angle of the CME coincides with the eastern side and the western side of the ecliptic plane, respectively.
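The mapping from pa to projected latitude described above can be written down directly. An illustrative helper (not part of the CACTus software), using the conventions stated in the text (pa counterclockwise from solar North, 90° at the east limb, 270° at the west limb):

```python
def pa_to_projected_latitude(pa_deg):
    """Convert a principal angle (degrees CCW from solar North) to the
    projected heliographic latitude of the CME's middle angle.
    pa = 0 -> +90 (North), 90 -> 0 (east limb),
    pa = 180 -> -90 (South), 270 -> 0 (west limb)."""
    pa = pa_deg % 360.0
    if pa <= 180.0:
        return 90.0 - pa      # eastern half of the disc
    return pa - 270.0         # western half of the disc

print(pa_to_projected_latitude(90))   # 0.0 (east, ecliptic plane)
print(pa_to_projected_latitude(270))  # 0.0 (west, ecliptic plane)
```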
In addition, from Fig. 4 we can see that the shock wave velocities of CME #0037 reach up to ∼2000 km/s over the whole pa region, including the region around pa = 270°, i.e., the western ecliptic region. Solar protons accelerated by high-speed shock waves and injected in the western region of the ecliptic plane have a good chance of reaching the Earth. On the other hand, an HCS is a transition zone that separates regions of opposite interplanetary magnetic field polarity (Wilcox & Ness, 1995). An increase in the plasma concentration and in the interplanetary magnetic field (IMF) intensity is associated with the HCS compression region. This means that the HCS plays a magnetic focusing role for charged particles: they propagate more efficiently by following the HCS boundary sectors. In some cases, the crossing of the Earth by HCS boundary sectors can also produce a small enhancement in the galactic cosmic ray (GCR) flux observed at ground level, as shown by Thomas et al. (2014).
Charged particles in the heliosphere with energies up to ∼2 GeV undergo drift processes along the HCS (Usoskin et al., 2008). Depending on whether the solar magnetic field polarity is positive (A>0) or negative (A<0), charged particles satisfying the condition qA>0 drift away from the Sun, while those with qA<0 drift toward the Sun. Fig. 5 gives a snapshot of the WSA-Enlil model run around 10 September. The model shows the solar wind plasma density in the ecliptic plane: blue colors show very high plasma density and grey represents low density. The Earth is marked by the red circle, and the Sun is shown by the blue circle. From this figure we can see that on 10 September the Earth was crossing a wide region of high solar plasma density, i.e., crossing the HCS.
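The drift rule quoted from Usoskin et al. (2008) reduces to the sign of the product qA. A toy encoding of that condition (illustrative only; the signs are the physical inputs, not a numerical drift calculation):

```python
def drift_direction(charge_sign, polarity_sign):
    """Sign rule for particle drifts along the HCS:
    qA > 0 -> particles drift away from the Sun (outward),
    qA < 0 -> particles drift toward the Sun (inward)."""
    qa = charge_sign * polarity_sign
    if qa > 0:
        return "outward"
    if qa < 0:
        return "inward"
    return "undefined"

# Protons (q > 0) in the A > 0 polarity state of 10 September 2017
# drift away from the Sun along the sheet, toward 1 AU.
print(drift_direction(+1, +1))  # outward
```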
The crossing of the Earth through the boundary sectors of the HCS is usually not associated with large disturbances in the geomagnetic field, but there is a signature in the solar wind parameters. In the present case, a sector boundary crossing (SBC) occurred on 6 September 2017 (DOY 249), when the magnetic field changed from "away" to "toward" the Sun, and ∼8 days later (in the transition from 13 to 14 September, DOY 256-257) it changed back to "away", as shown in the time profile of the Phi angle in the top panel of Fig. 6. There are also changes (fluctuations) in the solar wind density and speed, as shown in the central and bottom panels of Fig. 6, respectively. Data were taken from ACE.
CONNECTION BETWEEN FLARE, CME, AND GLE
The observation by Skylab, around 40 years ago, of X-ray emission from solar flares at the edge of the solar limb enabled the clear identification of up to three phases in a solar flare (Pallavicini et al., 1977): a precursor phase, an impulsive (prompt) phase, and a gradual (delayed) phase. In most cases, however, only the last two phases, the impulsive and the gradual, are identified, according to the duration of their X-ray emission (Ruffolo, 1997). From the analysis of the timing of the diverse particle populations in large events, it is possible to identify these two types in the same event, and they are known as different phases of a solar flare (Lin et al., 2002, 2003). In some flares, the timing of the electromagnetic emissions and relativistic protons suggests that the first proton peak is related to acceleration during the impulsive phase (Murphy et al., 1987; Klein et al., 2014). Protons and heavier ions are accelerated in the impulsive phase to relativistic energies in small-scale coronal loops. The impulsive phase is characterized by a fast rise and shorter decay times, with a duration of some tens of minutes (Pallavicini et al., 1977). A fraction of these high-energy particles can interact with the nuclei of the different elements in the ambient solar atmosphere; these interactions can produce neutral pions, which decay immediately and generate a broad gamma-ray line with a maximum near 70 MeV (Kurt et al., 2013).

Fig. 5. WSA-Enlil model information on the density of the solar wind for 10 September 2017 at 17:00 UT. Earth is marked as the red circle, the Sun is shown by the blue circle, and the other red and blue circles are the twin STEREO spacecraft in 1 AU orbits around the Sun. The circular image shows a view of the ecliptic plane from above, and the semi-circle a side view of this plane. Credit: NOAA/SWPC WSA-Enlil model.
There is also emission of hard X-rays in this phase, due to bremsstrahlung produced by electrons that have been accelerated to much higher energies than those found in the ambient plasma (Lin et al., 2003). The hard X-ray and gamma-ray emissions are the evidence of the acceleration of electrons and protons (ions) to relativistic energies in this impulsive phase.
The second proton peak begins with the magnetic restructuring in the corona after the CME passage and indicates the gradual phase of the solar flare. It is characterized by soft X-ray emission. Protons and heavier ions are accelerated in this gradual phase by reconnection, and possibly turbulence, in the large-scale coronal loops, as well as by CME shocks, with longer rise and decay times and a duration of some hours to days. Thus, these particles are spread over a broad region in solar longitude, and under some conditions, such as the geoeffectiveness of the active area, the most energetic particles can give rise to a GLE (Chupp et al., 2009), so the same event can be observed by ground-based detectors, at least by those located at high latitudes.
The relativistic particle emission in the large particle event of 10 September 2017 can likewise be interpreted in terms of two different phases: an impulsive (prompt) and a gradual (delayed) particle acceleration.
Signatures of the impulsive phase were seen by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) (Lin et al., 2002) in hard X-ray and gamma-ray emissions up to 20 MeV. The onset time of the RHESSI signal is estimated as 15:50 UT, with a peak at 15:57 UT.
A comparison between the hard X-ray emission during the impulsive phase, as observed by Fermi GBM, and the soft X-ray emission during the gradual phase, as observed by GOES 15, is shown in Fig. 7. From this figure we can see that the hard X-ray emission peaked about 10 minutes before the soft X-ray peak.
In the present case, although the blast was located at the extreme west limb, i.e., in a non-geoeffective position, it occurred while the Earth was crossing a heliospheric current sheet (HCS) region. The HCS drifted the protons away from the Sun along the sheet, i.e., it played the role of a Sun-Earth magnetic connection, because the Earth was crossing the HCS sector at the time of the blast. This condition allowed the SEPs to arrive at the Earth with a flux high enough to trigger a GLE: GLE #72, the second of the current solar cycle.
The particle enhancement at the NMs had an onset about half an hour after the peak of the soft X-ray emission observed at 1 AU by GOES 13, as shown in Fig. 8, which presents the correlation between the GOES X-ray time profiles and the counting rates of three NMs: South Pole (0.1 GV, confidence 5.6%), Oulu (0.8 GV, confidence 3.5%), and Lomnicky (LMKS; 3.8 GV, confidence 0.7%). As expected, NMs at sites with a high magnetic rigidity cutoff, for instance the Athens NM (8.5 GV), did not detect any signal (see Fig. 2).
In most cases, type II and type IV radio emissions are observed in the gradual phase in the high corona (Gopalswamy et al., 2007) and are typically associated with strong coronal mass ejections and solar radiation storms. Indeed, in the event around sunspot AR 2673 on 10 September, there were type II radio emissions with onset at 16:08 UT, indicating that the coronal mass ejection associated with the X8.2-class flare generated a strong S3-level solar radiation storm (a proton flux at 1 AU above 1000 particles per cm² per second at energies above 10 MeV), as observed by the GOES satellite. Fig. 9 summarizes the situation: the top panel shows the GOES proton flux for five energy bands, from 5 MeV to 1000 MeV, and the bottom panel shows the counting rates at the three NMs, South Pole, Oulu, and Lomnicky, respectively. From the top panel we can see a fast increase in the relativistic proton levels; 3 hours after the onset (16:10 UT), the solar radiation storm reached the S3 (strong) level on the NOAA storm radiation scale, which ranges from S1 (minor) to S5 (extreme). In addition, the fact that the solar energetic particles triggered a GLE means that the flux of relativistic particles (protons) reached the GeV energy range, with a flux above the galactic cosmic ray background.
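The NOAA S-scale used above is logarithmic in the >10 MeV proton flux: S1 at 10 pfu and each step a factor of ten higher, so the ∼1000 pfu quoted in the text corresponds to S3. A compact sketch of that thresholding (illustrative; pfu here means particles per cm² per second per steradian, per the NOAA definition):

```python
import math

def noaa_s_level(flux_pfu):
    """Map a >10 MeV proton flux (in pfu) to the NOAA radiation-storm
    level: S1 at 10 pfu, one level per decade, up to S5 at 1e5 pfu."""
    if flux_pfu < 10.0:
        return "below S1"
    return f"S{min(int(math.log10(flux_pfu)), 5)}"

print(noaa_s_level(1.2e3))  # S3 -- the level reached on 10 September 2017
```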
CONCLUSIONS
On 10 September 2017, an X8.2-class solar flare erupted from the active region AR 2673, the second strongest solar flare of the current solar cycle. The blast happened under three peculiar circumstances: the first was the location of AR 2673, at the western edge of the solar disk, without a direct magnetic connection with the Earth; the second was the AR 2673 footprint, within an HCS region; and the third was a temporal coincidence, since the blast happened while the Earth was crossing this HCS region. We claim that the HCS played the role of a magnetic connection between the Sun and the Earth. This assumption is in accordance with numerical calculations (Usoskin et al., 2008) indicating that charged particles satisfying the condition qA>0, which is true for solar protons accelerated by shocks (up to energies of about 2 GeV) in the global magnetic field polarity of 10 September 2017 (A>0), drift away from the Sun following the sheet. Thus, in association with the X8.2-class flare, the second GLE (GLE #72) of the current solar cycle 24 was detected at ground level by NMs located at high latitudes.
From a timing analysis, we found that the signals detected from this blast correspond to different phases of the solar flare. The energy release during the impulsive phase of the flare was observed as an increase of gamma rays and hard X-rays, registered by RHESSI and Fermi GBM and peaking at ∼15:57 UT, while the energy release in the gradual phase was observed as an increase of soft X-rays registered by GOES, peaking at 16:06 UT. The GOES proton flux also corresponds to the energy release in the gradual phase of the flare. Solar protons were accelerated in this phase by shock waves with an average speed of 948 km/s, reaching energies of up to the GeV band and triggering at Earth a radiation storm of up to S3 (strong) level. In addition, the ground-based detectors located at high latitudes observed a GLE (GLE #72), with a confidence of up to 5.6%, in temporal correlation with the GOES proton flux, i.e., also corresponding to the gradual (or extended) phase of the flare.
Finally, some peculiarities of the active region AR 2673 were reported, such as the eruption on 6 September of an X9.3-class flare, the strongest flare of the current cycle. It was associated with a halo CME directed toward Earth, which triggered the second major geomagnetic storm of the current solar cycle on 7 September.
Is Anodal Transcranial Direct Current Stimulation an Effective Ergogenic Technology in Lower Extremity Sensorimotor Control for Healthy Population? A Narrative Review
Anodal transcranial direct current stimulation (a-tDCS) aims to hone motor skills and improve quality of life. However, non-repeatability of experimental results and inconsistency of research conclusions have become common, which may be due to imprecise experimental protocols, great variability of participant characteristics within groups, and irregular quantitative indicators. This study systematically summarised and analysed the effect of a-tDCS on lower extremity sensorimotor control under different experimental conditions. This narrative review was performed following the PRISMA guidelines, searching Web of Science, PubMed, Science Direct, Google Scholar, and Scopus until June 2022. The findings demonstrate that a-tDCS can effectively improve lower extremity sensorimotor control, particularly gait speed and time-on-task. Thus, a-tDCS can be used as an effective ergogenic technology to facilitate physical performance. In-depth and rigorous experimental protocols with larger sample sizes, combined with brain imaging technology to explore the underlying mechanisms, will have a profound impact on the development of tDCS.
Introduction
Sensorimotor control in the lower extremity (i.e., balance, gait, mobility, and postural control) is the fundamental element for everyday activities, often coinciding with non-postural cognitive tasks [1,2]. Such adaptive sensorimotor processes are exceedingly complex, which integrate the somatosensory, visual, and vestibular transmission pathways [3]. In addition to integrating external information, the activation of the brain regions is also key for maintaining the execution of neural signals and strategies targeting cortical excitability that will help the sensorimotor control in the lower extremity [4,5].
Non-invasive brain stimulation techniques have been developed as therapeutic tools to treat neuropsychiatric and neurological disorders [6]. Transcranial direct current stimulation (tDCS) is one such technique; it modulates the excitability of brain regions by sending a low-intensity current through surface scalp electrodes to polarise or depolarise the resting membrane potentials of neurons [7]. The mechanisms and functions of tDCS are shown in Figure 1. As a neuromodulation technique, anodal tDCS (a-tDCS) has positive effects on enhancing synaptic connections [8] and modulating the nervous system, thereby improving the coordination efficiency of musculoskeletal modification during the performance of lower extremity sensorimotor control [9], such as balance and gait in healthy adults. Specifically, early studies have provided insights into the potential ergogenic effect of a-tDCS on a wide range of exercise types based on promising outcomes [7,10,11]. For instance, multi-session a-tDCS can improve semantic associations in schizophrenia patients, which supports its neuromodulatory role in improving cognitive functions [12]. By investigating the changes induced by a-tDCS in cortical plasticity, Pisoni et al. [13] elucidated the positive correlation between the neurophysiological effects of a-tDCS at specific cortical circuits and cognitive enhancement. Still, the effect of a-tDCS reported in previous studies has been inconsistent, which may be due to the high variance in the selection of tDCS variables (i.e., duration, montage, location, etc.). For instance, applying 20 min of a-tDCS over the cerebellum in young adults may increase the excitability of the motoneuron pool, which can result in a continuous neural drive to the motor neurons and improve a dynamic balance task while standing with two feet on a movable platform [14].
However, these enhancements were not found in another study [2], suggesting that the missing ergogenic effects of a-tDCS may be due to the shorter duration (10 min vs. 20 min) and the small sample size. Additionally, in dual-tasking conditions (i.e., standing or walking while performing another task), brain regions are involved in cognitive processes [15,16]. Studies have also demonstrated that a-tDCS contributes to modulating cortical excitability and results in a sustained neural drive to the motor neurons, which may enable better integration amongst the different sets of nuclei necessary for the execution of cognitive-motor tasks [17]. Therefore, more brain regions of interest have been included as stimulation targets (i.e., primary motor cortex (M1), prefrontal cortex (PFC), supplementary motor area (SMA), and temporal cortex (TC)).
Therefore, this narrative review aimed to systematically characterise the effect of a-tDCS on the lower extremity sensorimotor control in healthy individuals, providing constructive knowledge on the optimal protocol design and effects of a-tDCS on lower extremity sensorimotor control to inform future studies.
Search Strategy
This narrative review searched for relevant papers from the earliest records available until June 2022 in the following databases: Web of Science, PubMed, Science Direct, Google Scholar, and Scopus. The following key search terms were used to match the searched English literature with the research purpose: 'transcranial direct current stimulation' or 'tDCS' or 'HD-tDCS' and 'postural control' or 'balance' or 'sensorimotor control' or 'physical performance' or 'gait' or 'time-on-task'. Moreover, the reference lists of the included studies were reviewed to find additional relevant studies that did not appear in the database with our initial electronic search terms.
Eligibility Criteria
Studies that met the following requirements were included: (a) English full-text articles; (b) randomised, single/double-blinded, sham-controlled experimental design; (c) the intervention of a-tDCS was performed in healthy adults; (d) application of bilateral a-tDCS or unilateral a-tDCS in any brain region; (e) perform lower extremity sensorimotor testing with static or/and dynamic postural control. In addition, review, conference, and unpublished articles were excluded.
Overview of the Included Studies
We collected a total of 587 relevant documents from the Web of Science, PubMed, Science Direct, Google Scholar, and Scopus. After rigorous screening, 26 studies were used in the narrative review (static sensorimotor control, 18; dynamic sensorimotor control, 18; static and dynamic sensorimotor control, 11) (Figure 2). As shown in Table 1, among all included studies only one study simultaneously recruited two populations (young and older adults) as participants [14], and three studies compared and investigated two sets of montage placement [9,14,18]. In addition, participants in 13 studies (56.5%) received the two stimulation conditions (a-tDCS and sham stimulation) separately, with an interval of 3 to 7 days or more; 8 of these studies (61.5%) selected 7 days. Only two studies (8.7%) applied a multi-session stimulation program [18,19]. Nine studies (39.1%) used electrode sponges of different sizes for the cathode and anode, and only one study used high-definition tDCS (HD-tDCS) [20].
We collected a total of 587 relevant documents from the Web of Science, PubMed, Science Direct, Google Scholar, and Scopus. After rigorous screening, 26 studies were used in the narrative review (static sensorimotor control, 18; dynamic sensorimotor control, 18; static and dynamic sensorimotor control, 11) (Figure 2). As shown in Table 1, only one study simultaneously recruited two populations (young and older adults) as participants [14] among all included studies, and three studies compared and investigated two sets of montage placement [9,14,18]. In addition, participants in 13 studies (56.5%) received two stimulation methods (a-tDCS and sham stimulation) separately, with an interval of 3 to 7 days or more, and 8 of these studies (61.5%) selected 7 days. Only two studies (8.7%) applied a multi-session stimulation program [18,19]. Nine studies (39.1%) used electrode sponges of different sizes for the cathode and anode, and only one study used high-definition tDCS (HD-tDCS) [20]. All included studies applied a randomised design, of which two studies (7.7%) applied a parallel design, and the others (92.3%) applied a crossover trial design. The studies assessed a total of 680 participants, with 25.81 ± 12.06 (mean ± SD) participants per study (range 5 to 57). In addition, the studies of Ehsani et al. [11] and Hafez et al. [18] had a total of 5 (14.7%) and 4 (10.3%) dropouts, respectively. Regarding gender, the studies included 292 male participants and 359 female participants; one study did not indicate gender [21]. Three studies (11.5%) only recruited male participants, and no study only recruited female participants. Across the studies, 269 participants (40.09%) were under the age of 50, and 402 (59.91%) were over 50 years old. As shown in Figure 3, concerning the stimulus duration of a-tDCS, the majority of studies (76.92%) used 20 min.
The current intensity was primarily 2 mA, with a mean ± SD of 1.61 ± 0.57 mA per study (ranging from 0.5 mA to 2.8 mA) and an electrode size of 26.69 ± 13.67 cm2 (from 1 cm2 to 55.25 cm2). The anode montage was placed over the motor cortex (51.61%), cerebellum area (25.81%), PFC (19.35%), and TC (3.23%).
Note: *, the study included a static test; #, the study included a dynamic test. Abbreviations: M/F = male/female; dlPFC = dorsolateral prefrontal cortex; OBF = orbitofrontal cortex; SMA = supplementary motor area; TC = temporal cortex; M1 = primary motor cortex; CA = cerebellum area; PC = prefrontal cortex; SA = supraorbital area; BM = buccinator muscle. [2,9,11,14,.
Figure 3. Stimulation duration and intensity in included studies [2,9,11,14,.
Effect of A-tDCS on Standing Postural Control
Standing upright is a complex task, which occurs simultaneously with non-postural cognitive tasks. Such 'dual tasking' significantly increases the difficulty of lower extremity sensorimotor control compared with 'single tasking', and it is often used as an important evaluation index for the elderly to prevent falls. Older adults with executive dysfunction are linked to poor dual-tasking capacity, leading to greater risk of falls [40]. Relevant literature indicates the effectiveness of a-tDCS of the dorsolateral prefrontal cortex (dlPFC) on performing two cognitive tasks concurrently. Manor et al. [41] reported that as compared to sham, 20 min of a-tDCS induced significant improvements in dual-task postural sway speed and areas in older adults with functional limitations, but not in single-task standing postural control performance. In addition, they argued that the reduced dual-task costs were due to tDCS improving the capacity of the frontal-executive systems and optimising cognitive-motor resources. In line with the studies on young healthy adults, Zhou et al. [37] also found that the dlPFC was a primary brain region supporting cognitive dual tasks. However, one study partially replicated the study of Zhou et al. [37], and the results were inconsistent. Pineau et al. [31] investigated the postural performance in a simple and dual-task with eyes open and closed via assessing the centre of pressure (COP) parameters immediately after a 20 min a-tDCS session. The results showed that acute a-tDCS cannot effectively improve dual-task performance, and they explained that the discrepancy may be due to the physical activity level of participants. Moreover, the application of slightly larger current intensity (2 mA vs. 1.5 mA) and smaller stimulating electrodes (25 cm 2 vs. 35 cm 2 ) for the latter may not have a decisive effect on the experimental results compared with the former. 
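The 'dual-task cost' discussed above is conventionally computed as the relative change in a performance measure from single- to dual-task conditions. A minimal Python sketch follows; the sway-speed values are invented for illustration and do not come from the cited studies.

```python
# Dual-task cost (DTC): the relative performance change from single- to
# dual-task conditions. For a "higher is worse" measure such as postural
# sway speed, DTC = (dual - single) / single * 100, so a positive DTC
# means performance deteriorated under the added cognitive task.
# All numeric values below are invented for illustration.

def dual_task_cost(single: float, dual: float, higher_is_worse: bool = True) -> float:
    """Return the dual-task cost in percent."""
    if higher_is_worse:
        return (dual - single) / single * 100.0
    return (single - dual) / single * 100.0

# Hypothetical sway speeds (mm/s): sham vs. a-tDCS sessions
sway = {
    "sham":   {"single": 10.0, "dual": 14.0},
    "a-tDCS": {"single": 10.0, "dual": 11.5},
}
for condition, v in sway.items():
    print(condition, round(dual_task_cost(v["single"], v["dual"]), 1))
```

A smaller DTC under stimulation, as in the hypothetical a-tDCS row, is what the reviewed studies describe as a reduced dual-task cost.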
Based on the feature of ceiling effects, the more energetic the participants are, the more difficult it is to reflect the positive effectiveness of a-tDCS. A better understanding of the effect of a-tDCS on standing posture control can be established by investigating the age-related loss of complexity in healthy older adults. Therefore, Zhou et al. [38] quantified the complexity of postural sway of the elderly with a-tDCS over the left PFC in single- and dual-task postural control using multi-scale entropy. Their results indicated that a-tDCS was associated with an increase in prefrontal cortical excitability, which coincided with improved complexity of standing postural sway, specifically within a dual-task condition.
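Multi-scale entropy, as used by Zhou et al. to quantify sway complexity, combines coarse-graining of the signal with sample entropy at each scale. Below is a generic textbook-style sketch of these two building blocks (not the authors' exact pipeline), applied to a synthetic regular and irregular signal.

```python
import math
import random

# Building blocks of multi-scale entropy: sample entropy plus coarse-graining.
# This is an illustrative, generic implementation, not the exact analysis
# pipeline of the cited study.

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B counts pairs of m-length templates
    matching within r*SD (Chebyshev distance) and A counts (m+1)-length matches."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    tol = r * sd

    def matches(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(x, scale):
    """Non-overlapping window averages; multi-scale entropy evaluates
    sample_entropy on coarse_grain(x, s) for each scale s."""
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

# A regular (sinusoidal) sway signal should yield lower entropy than an
# irregular one; age-related "loss of complexity" shows up in this profile.
regular = [math.sin(2 * math.pi * i / 20) for i in range(150)]
rng = random.Random(1)
irregular = [rng.random() for _ in range(150)]
print(round(sample_entropy(regular), 2), round(sample_entropy(irregular), 2))
```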
The effects of a-tDCS on other cortical regions have also been investigated. The cerebellum is a pivotal stimulus target, and as a complex intracranial organ, it has an extensive connection with many areas of the midbrain, brainstem, and cerebral cortex [42]. A large number of studies have confirmed that cerebellar a-tDCS can enhance the links and increase the control function of the cerebellum on the motor cortex, vestibular system, and other brain regions [11,43,44]. Standing postural control includes both static control, which maintains stability on a firm and unchanging support surface, and dynamic control, which maintains balance on a movable platform. Ehsani et al. [11] investigated the effect of cerebellar a-tDCS on static and dynamic postural control in older individuals using a Biodex Balance System, and they revealed that the participants receiving cerebellar a-tDCS showed significantly reduced postural sway in anterior-posterior and medial-lateral directions. Similarly, combined with postural control training, cerebellar a-tDCS stimulation can improve the skill acquisition of postural control in young individuals [27]. As we previously mentioned, tDCS is a form of neuromodulation, which can modulate neural activity. Therefore, a-tDCS of the M1 has gained increasing interest as a neurorehabilitation tool for facilitating the excitability of this region and enhancing standing performance. Xiao et al. [20] reported that the static standing balance performance with eyes closed improved after single-session HD-tDCS by assessing the averaged sway velocity of the centre of gravity in anterior-posterior and medial-lateral directions. However, no significant differences were observed between HD-tDCS and sham stimulation amongst young participants. These results were attributed to the small sample size and ceiling effect. In addition, the results were in agreement with the study of Inukai et al.
[24], confirming that a-tDCS over the cerebellum cannot enhance the standing posture control capacity of young healthy populations compared with sham stimulation. The literature indicates that the equivocal results in standing posture control are due to the stimulation target and age.
Two studies were included for comparison of M1 a-tDCS to determine the effect of cerebellar a-tDCS on standing posture control [9,18]. In the lower extremity sensorimotor control of healthy individuals, a previous study has asserted that the motor cortex plays a smaller role compared with the cerebellum and subcortical structures [45]. Moreover, a recent study has shown that the cerebellum and M1 a-tDCS have significant effects on the standing posture balance of the elderly [9]. However, Hafez et al. [18] found that posture training combined with bilateral cerebellar or M1 a-tDCS was more effective than cerebellar a-tDCS alone or postural training alone in improving the anterior-posterior and medial-lateral stability index of standing postural control under eyes open and closed conditions. An important aspect of the divergence between the two studies was the difference in the experimental protocol.
Taken together, the aforementioned findings reconfirm that a-tDCS over the dlPFC could improve standing posture control performance under dual-task conditions, particularly for the elderly. In any case, further optimisation of the experimental protocol could provide a more stable experimental effect on standing posture control. Therefore, speculation on the a-tDCS mechanism should be treated with caution.
Effect of A-tDCS on Gait Speed and Time-on-Task
Improvements in gait speed after a 20 min session of a-tDCS over the prefrontal cortex under single-task conditions were found compared with sham stimulation, but the differences were not statistically significant [29,37]. Although the aforementioned result is encouraging, evidence shows that most studies have a small sample size. Regarding gait speed in dual-task conditions, four studies proved that a-tDCS over the prefrontal cortex can significantly reduce dual-task costs by assessing the walking tests in healthy elderly and young adults [29,34,37,39]. Another study with the same test protocol did not find a significant functional improvement in walking with dual tasking based on the TUG test of mobility in functionally limited older adults [41]. Factors such as the dose and duration of a-tDCS in participants with different physical conditions should be appropriately adjusted. Previous studies have indicated that anticipatory postural adjustments are generated from the increased excitability of the SMA to promote gait posture stability in healthy adults. Collectively, the improved connectivity in the SMA pathway indicates the decrease in COP sway path length immediately after 15 min a-tDCS within the anticipatory postural adjustment processing network [30]. tDCS sends weak direct currents to deep brain areas, which can drive neuromodulation. A systematic review and meta-analysis consistently demonstrated that the combination of a-tDCS over motor-related areas and repetitive gait training could improve gait rehabilitation in individuals with stroke [46]. Given the prominent role of the M1 leg area in executing lower extremity function, this stimulus area is a potential target for improving sensorimotor control scenarios in adults. In this regard, a-tDCS can significantly facilitate learning capabilities by evaluating task performance and kinematic variables in healthy young participants [25].
The same experimental protocol was applied to the elderly, but no positive effects of a-tDCS were found; thus, the authors hypothesised that inter-individual differences may be an unfavourable factor for this result [26]. Two crossover studies were performed to assess the corticospinal excitability and postural sway of a-tDCS applied over the M1 or cerebellum and to comprehensively understand the effect of a-tDCS on lower extremity sensorimotor control [14,18]. These studies suggest that apart from the different stimulation targets, age group, postural measure, and visual condition (eyes open or closed) can affect the ergogenic effects of a-tDCS. Furthermore, the cerebellum plays an important role in postural control. The most typical symptoms of a cerebellar lesion are decreased balance, abnormal gait, and increased risk of falling (i.e., ataxia) [47]. Therefore, the effect of cerebellar a-tDCS on postural steadiness has received widespread attention in the literature. The rationale behind using cerebellar a-tDCS as a tool in this context is that the increased activity of the cerebellum related to motor function could boost adults' lower extremity sensorimotor control. From this point of view, a number of studies confirmed the positive effects of 20 min cerebellar a-tDCS on postural adaptation in young and older adults using the standing dynamic platform assessment system [11,32]. Contrary to these findings, another study showed that 10 min cerebellar a-tDCS with a high current density (2.8 mA) had no significant effect on improving the acquisition of motor skills in young participants [2]. The authors attributed the loss of cerebellar a-tDCS effectiveness to the small sample size and inappropriate electrode position and size. Despite the many factors that affect the effectiveness of a-tDCS on lower extremity sensorimotor control, the duration of stimulation found in the above-mentioned studies is a pivotal factor that cannot be ignored.
Investigations specific to lower extremity sensorimotor control include studies not only on gait speed indicators but also time-on-task required to perform the postural tests. Based on previous reports, time-on-task is defined as the time from the appearance of a stimulus to the completion of the response. It reflects the coordination and rapid response ability of the human nerve and musculoskeletal system, which ensures humans perform the basic daily activities. In general, a complete time-on-task cycle needs the individual to undergo stimulus identification, then select the appropriate response, and finally finish the instruction. However, ongoing studies have shown that the SMA plays an important role in movement preparation, particularly in the case of complex tasks following visual cues [48]. In addition, existing evidence suggests that a-tDCS over the SMA can significantly reduce time-on-task in dynamic balance tests, which require a more complex planning process [49]. Given the close position between the SMA and leg M1, another study on M1 a-tDCS improving ankle time-on-task in young adults hypothesised that the ergogenic effect was partly attributed to the effect of the SMA [22]. In this context, Saruco et al. [33] found that combining motor imagery practice with a-tDCS applied over the M1 can facilitate short-term motor learning by enhancing the cortical excitability of a postural task required to reach targets located forward. In the assessment of leap task time, Lee et al. [28] also pointed out that a-tDCS over the M1 can effectively improve balance performance and shorten response time in healthy young adults. However, in another study, no significant improvement was reported [23]. The TUG test is widely utilised for evaluating the mobility of dynamic posture control. Applying the stimulation of the M1, a-tDCS has been affirmed in enhancing time-on-task using the TUG test [19,21].
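The three-stage time-on-task cycle just described (stimulus identification, response selection, response execution) can be written as a toy sum. The stage durations below are invented placeholders, not measured values from any study.

```python
# Toy decomposition of a time-on-task cycle into the three sequential
# stages named above. Durations (ms) are invented; a stimulation effect
# would appear as a shortened total.

def time_on_task(identify_ms: int, select_ms: int, execute_ms: int) -> int:
    """Total time-on-task as the sum of its sequential stages (ms)."""
    return identify_ms + select_ms + execute_ms

baseline = time_on_task(180, 220, 150)
post_stim = time_on_task(180, 190, 140)  # hypothetical faster planning/execution
print(baseline, post_stim, baseline - post_stim)
```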
In the gait speed tests of single and dual tasks, small sample size remains a key factor limiting the effect of a-tDCS, and differences across experimental protocols in stimulation targets, stimulation duration, and intensity also contribute to inconsistent results. In addition, a-tDCS generally showed a positive effect on time-on-task regardless of age.
Limitations
This study has several limitations: (a) the searched database is limited; (b) all participants included were healthy adults; (c) the study only reviewed the effects immediately after the a-tDCS session and did not examine the longer-term follow-up effects of tDCS; and (d) the effects of cathode tDCS are not discussed in this study.
Conclusions
An ergogenic effect was observed in dual-task conditions: for example, stimulating the dlPFC with a-tDCS produced an evident improvement in standing task performance among the elderly. Meanwhile, significant enhancements in gait speed and time-on-task were observed when comparing a-tDCS with sham stimulation. In particular, a-tDCS effectively reduced time-on-task for young and older populations during different tests.
Methods to Recruit Hard-to-Reach Groups: Comparing Two Chain Referral Sampling Methods of Recruiting Injecting Drug Users Across Nine Studies in Russia and Estonia
Evidence suggests rapid diffusion of injecting drug use and associated outbreaks of HIV among injecting drug users (IDUs) in the Russian Federation and Eastern Europe. There remains a need for research among non-treatment and community-recruited samples of IDUs to better estimate the dynamics of HIV transmission and to improve treatment and health services access. We compare two sampling methodologies “respondent-driven sampling” (RDS) and chain referral sampling using “indigenous field workers” (IFS) to investigate the relative effectiveness of RDS to reach more marginal and hard-to-reach groups and perhaps to include those with the riskiest behaviour around HIV transmission. We evaluate the relative efficiency of RDS to recruit a lower cost sample in comparison to IFS. We also provide a theoretical comparison of the two approaches. We draw upon nine community-recruited surveys of IDUs undertaken in the Russian Federation and Estonia between 2001 and 2005 that used either IFS or RDS. Sampling effects on the demographic composition and injecting risk behaviours of the samples generated are compared using multivariate analysis. Our findings suggest that RDS does not appear to recruit more marginalised sections of the IDU community nor those engaging in riskier injecting behaviours in comparison with IFS. RDS appears to have practical advantages over IFS in the implementation of fieldwork in terms of greater recruitment efficiency and safety of field workers, but at a greater cost. Further research is needed to assess how the practicalities of implementing RDS in the field compromises the requirements mandated by the theoretical guidelines of RDS for adjusting the sample estimates to obtain estimates of the wider IDU population.
INTRODUCTION
Evidence suggests recent diffusion of injecting drug use and associated HIV infection in the Russian Federation since 1996. 1,2 Approximately 60% of HIV case reports have been associated with injecting drug use, 1,3 with recent estimates indicative of increased sexual HIV transmission. 4 According to UNAIDS classifications, HIV in much of the Russian Federation and former Soviet Union is a concentrated epidemic, with prevalence consistently above 5% in a single risk group (i.e., IDUs) but less than 1% in the general population. 5 Concentrated epidemics require targeted surveillance of the population group most at risk in order to track the spread within that group as well as potential transmission to others.
Surveillance among IDUs is problematic, and there has been much discussion on the merits of different methods to recruit marginalized and hidden groups for these purposes. [6][7][8] We know that surveying drug users in treatment settings misses an important segment of the drug using population. Evidence suggests that behaviours, characteristics and HIV prevalence amongst IDUs in treatment often systematically differ from IDUs not in treatment. [9][10][11][12][13] Many surveillance studies of IDUs conducted in the 1990s relied on nonprobability sampling such as convenience, snowball sampling or chain referral sampling to recruit members of the target group. 12,14 These methods work on the assumption that peers are better able to recruit members of a hidden population than researchers. 15 Typically studies employed 'privileged access interviewers' or 'indigenous field workers' to recruit IDUs from community settings. Indigenous field workers are interviewers who are either current or former drug users or individuals who have experience working with drug users and have privileged access to IDU networks. Over the last 15 years, this has become the established sampling method for recruiting hidden populations of IDUs and sex workers both in the UK and internationally. [16][17][18][19][20][21] A refinement of the chain referral methodology, called respondent-driven sampling (RDS), has recently been developed. 22 RDS is inspired by the insight of "small world theory" that suggests that every person is indirectly associated with every other person through approximately six intermediaries, 23 and therefore that everyone in a defined population could be potentially reached through several waves of recruitment in a chain-referral sample. 24 This implies that there is a probability greater than zero that everyone in that population will be sampled.
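The wave-by-wave growth of a chain-referral sample can be sketched with a toy recruitment model. The seed count, coupon number, and uptake fraction below are invented parameters, not figures from the surveys discussed here.

```python
# Sketch of chain-referral growth: starting from a few seeds, each
# respondent is given a fixed number of referral coupons, and a fraction
# of referred peers actually enrol. Parameters are purely illustrative.

def referral_waves(seeds: int, coupons: int, uptake: float, target: int):
    """Return the list of wave sizes until the cumulative sample reaches target."""
    waves, total = [seeds], seeds
    while total < target:
        nxt = max(1, round(waves[-1] * coupons * uptake))
        nxt = min(nxt, target - total)  # stop recruiting once target is reached
        waves.append(nxt)
        total += nxt
    return waves

print(referral_waves(seeds=5, coupons=3, uptake=0.5, target=200))
```

With an effective reproduction above one, wave sizes grow geometrically, which is why a modest number of seeds can, in principle, reach deep into a hidden population within a few waves.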
The unique selling point of RDS is that the collection of data on participants' social networks allows for adjustment for non-random recruitment. RDS uses social network data to make inferences about the wider target population from which the sample is drawn to provide proportional population estimates of characteristics and behaviours. 24,25 In this paper, we do not attempt to test the statistical superiority of RDS in providing 'population' estimates over other sampling strategies but instead focus on RDS as a recruitment strategy, examining the unadjusted RDS sample characteristics.
This paper compares two sampling methodologies, RDS and chain referral sampling using indigenous field workers (IFS), in terms of cost effectiveness, duration of fieldwork and effects on the demographic composition of the sample. First we offer a theoretical descriptive comparison of the two approaches.
Indigenous Field Worker Sampling
The IFS recruitment method uses a standard chain referral approach. Indigenous field workers undergo training covering the aims of the study, fieldwork protocols, ethics, informed consent, interview skills and safety procedures. Field workers (FWs) identify individuals known to them from IDU networks, recruit them, and then interview them in community settings, separate from the rest of the research team. Eligible participants are given an incentive to take part and are also asked to introduce their peers to the FW. The use of multiple site and network recruitment ensures a wide coverage of the population, providing as representative a sample as possible. There is some evidence that the use of FWs with direct access to IDU social networks facilitates recruitment and reduces masking (undersampling reclusive respondents), volunteer bias (oversampling cooperative respondents) and underreporting of socially undesirable behaviours. 10,26

Respondent-driven Sampling

In RDS, a fixed site or "store front" is established where all interviewing takes place, providing the research team with greater control over the fieldwork. Unlike IFS and other chain referral samples, RDS uses a dual incentive system, a primary incentive for participating in the study and a secondary incentive for recruiting others into the study. 22 Sampling begins with a set of initial subjects who serve as 'seeds' for an expanding chain of referrals, with respondents from each link in the chain or 'wave' referring respondents who form subsequent waves. Rather than being asked to identify their peers to interviewers, respondents inform their peers about the study and allow them to decide independently whether they want to participate or not. This theoretically reduces masking, since recruiters are part of the target group with direct access to other IDUs, and it reduces volunteer bias, since recruitees decide themselves whether to participate.
Information on the relationships between recruiters and recruited and their estimated network size is collected during the interview to allow for the calculation of selection probabilities. 27 This information is used to assess homophily, the extent to which recruiters are likely to recruit individuals similar to themselves, and to weight the sample to compensate or control for differences in network size, homophily and recruitment success. 24
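The weighting idea described above, down-weighting respondents with larger personal networks because they are more likely to be recruited, can be sketched with an inverse-degree estimator in the style of Volz and Heckathorn (RDS-II). All data below are invented for illustration; this paper itself analyses unadjusted sample characteristics.

```python
# Sketch of inverse-degree weighting used by RDS estimators: each
# respondent is weighted by 1/degree (self-reported network size), so
# well-connected, over-recruited respondents count for less.
# Degrees and outcomes below are invented for illustration.

def rds_weighted_prevalence(degrees, outcomes):
    """Degree-weighted prevalence: sum(x_i / d_i) / sum(1 / d_i)."""
    num = sum(x / d for d, x in zip(degrees, outcomes))
    den = sum(1.0 / d for d in degrees)
    return num / den

degrees  = [20, 5, 10, 40, 8]  # self-reported personal network sizes
outcomes = [1, 1, 0, 1, 0]     # e.g. reported a given risk behaviour

naive = sum(outcomes) / len(outcomes)
weighted = rds_weighted_prevalence(degrees, outcomes)
print(round(naive, 3), round(weighted, 3))
```

In this toy example the weighted estimate is pulled below the naive sample proportion because the outcome is concentrated among the better-connected respondents.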
Data Collection
Between 2001 and 2005, we undertook nine community surveys of IDUs in the Russian Federation and Estonia (Table 1). Four studies used IFS to recruit IDUs, and five used RDS. All IDUs were recruited from community settings. Seven of the studies had an epidemiological focus and measured the prevalence of HIV, HCV and associated injecting and sexual risk behaviours in IDUs. 13,28,29 Two of the studies collected data on the social and economic characteristics of IDUs. 30 All studies collected some standardised indicators and defined current IDUs as individuals who injected drugs for non-medical purposes in the last 4 weeks. For each of the IFS studies, IDUs were recruited using a team of 10-12 FWs at each site. Settings included street locations and respondents' homes but excluded drug treatment centres and STI clinics. Volunteers and outreach workers at local harm reduction non-governmental organizations (NGOs) were employed as FWs, as well as two researchers at a local university in each site. In all IFS studies, two experienced supervisors from Moscow and a researcher from the UK provided technical expertise and management for all studies. Measures to ensure data quality and to minimise network bias included limiting the number of interviews per FW, random spot-checks in the field, and follow-up validation interviews with 10% of participants. Primary incentives included HIV prevention materials (including needles/syringes), chocolates and cigarettes.
In each RDS study, recruitment was undertaken by a team of seven to eight FWs at each site. The interview team comprised two FWs recruited from local harm reduction NGOs, two to three trained research staff from a local university, and two supervisors from Moscow. A researcher from the UK was also present at the studies, with the exception of the two socio-economic studies in Volgograd and Barnaul, Russia. In each study, a pre-fieldwork focus group was held with outreach workers from the local harm reduction NGOs to obtain information about the drug scene and to identify seeds to begin recruitment. Respondents received the same primary incentives for participating in the RDS study as in the IFS study and also an additional secondary incentive for each respondent they recruited into the study. 22,24 In all studies FWs recorded their observations on the drug scene, progress of the fieldwork and any difficulties arising from the research in detailed notes. These observations provide a useful additional comparison between the two sampling methods.
Duration and Cost of Fieldwork
We compared the duration of fieldwork for the IFS and RDS methods by calculating the mean number of days of fieldwork for each method and the proportion of the sample recruited on each day as the studies progressed. Means were compared using t-tests.
Costs were estimated for five of the seven surveys conducted in Russia and analysis focused on examining the cost effectiveness of recruiting a given sample for each of the sampling methodologies from a programmatic point of view as opposed to examining societal or health system costs. The IFS studies in Moscow, Barnaul and Volgograd were conducted in 2003 and the RDS studies in Togliatti, Barnaul and Volgograd were conducted in 2004. Costs were calculated as: (1) 'outside' costs including salary, accommodation and travel of field work consultants; (2) local salary costs of FWs and researchers; (3) recruitment costs including the packages of goods valued at 140 roubles and 300 roubles, respectively, for primary and secondary incentives; and (4) other costs including local transport, telephone calls and logistical costs of training FWs. For the RDS study the cost of the fixed site used for interviews is not included as an explicit cost; rather it is subsumed into the local salary costs since local staff contracted to undertake the work were employed from syringe exchange programmes. Costs are presented assuming that there are elements of fixed and variable costs at each sample size and that an extra 20 respondents will require keeping the entire survey team in the field for one extra day.
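The cost structure just described (fixed plus variable components, with every extra 20 respondents adding one field day) can be sketched as a simple function. The fixed and per-day figures are invented placeholders; only the 140-rouble primary incentive and the 20-respondents-per-extra-day assumption come from the text.

```python
import math

# Sketch of the cost structure described above: a fixed setup cost, a
# per-day cost of keeping the team in the field (one extra day per extra
# 20 respondents), and a per-respondent incentive. The fixed and per-day
# figures are invented placeholders.

def survey_cost(n_respondents: int, fixed: int = 100_000, per_day: int = 15_000,
                incentive: int = 140, per_day_capacity: int = 20) -> int:
    """Total illustrative cost in roubles for a sample of n_respondents."""
    days = math.ceil(n_respondents / per_day_capacity)
    return fixed + days * per_day + n_respondents * incentive

print(survey_cost(200))
```

Note the step in the cost curve: the 201st respondent costs far more than the 200th, because it commits the whole team to another field day.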
Demographic and Injecting Risk Behaviours of Sample
Demographic and injecting risk behaviours of IDUs recruited through IFS and RDS were compared in the two sites (Volgograd and Barnaul) where both survey methodologies were used to ensure a cleaner comparison between survey methods. Demographic and injecting characteristics were used as the outcome variables with recruitment method included as an independent variable. In the univariate analysis, chi-squared tests were used to compare outcomes for categorical variables and Bartlett's test for equal variance to compare continuous variables. For the multivariate analysis, logistic regression models were used to explore associations between explanatory variables and a binary outcome, multinomial logit models were used for categorical variables with multiple values and ordinary least squares for continuous variables. The multivariate analysis includes all common independent variables and a categorical variable indicating survey method used. This allows outcomes to be compared controlling for all independent variables and to identify impacts associated with only survey methodology. All statistical analyses were conducted using Stata 7 with significance set at 5% (Stata Corporation, College Station, Texas).
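The chi-squared comparison of a categorical outcome by recruitment method reduces to a 2x2 table test. A minimal sketch follows; the counts are invented for illustration and are not the study's data.

```python
# Pearson chi-squared statistic for a 2x2 table, e.g. an injecting risk
# behaviour (yes/no) cross-tabulated by recruitment method (RDS vs IFS).
# Counts below are invented for illustration.

def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-squared statistic for the table [[a, b], [c, d]]:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 42/200 RDS vs 30/200 IFS respondents reporting
# recent injecting with a used needle/syringe.
stat = chi2_2x2(42, 158, 30, 170)
print(round(stat, 2))  # compare against the chi2(1) 5% critical value, 3.84
```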
The Surveys
A total of 3,771 IDUs were recruited into nine surveys across four cities in the Russian Federation and two cities in Estonia (Table 1). A total of 2,049 (54%) participants were recruited through IFS and 1,722 (46%) through RDS. Only IDUs are included in the analyses we present here.
Duration of Field Work
The mean (standard deviation) duration of fieldwork for IFS surveys was 23.8 (4.1) days and for RDS 20.6 (0.9) days (t = 27.9, p < 0.001). Figure 1 depicts the number of respondents recruited by each successive day of fieldwork by city and recruitment method. The RDS studies appear to follow a pattern of recruitment that we might expect: the number of respondents increases steadily as the number of waves increases and then declines towards the end of the study as completion of the target sample size approaches and respondents are asked to refer fewer contacts to the study. Kohtla Jarve, Estonia, appears to be an exception to this as recruitment peaks more sharply and then abruptly finishes. The recruitment for the IFS studies does not appear to follow any set pattern across the cities. In Moscow the highest number of respondents in any one day occurs at the start of the study. In Barnaul, and to a lesser extent Volgograd, the number of participants recruited per day is more even across the duration of the study.

Table 2 summarizes the characteristics of IDUs by recruitment method from the four surveys conducted in Volgograd and Barnaul. In both cities, RDS participants were younger, more likely to be male, to have attended higher education and to have official residency permits for the city. RDS participants were more likely to report injecting heroin in both sites and less likely to report injecting vint or mak* than IFS participants. Frequency of injecting did not differ by recruitment method in either city. Regarding injecting risk behaviour, there was no difference between recruitment methods in the proportion of IDUs reporting injecting with a used needle/syringe in the last 4 weeks in Volgograd, but in Barnaul a higher proportion of RDS respondents reported this behaviour (21 vs. 15%, p = 0.02). In Volgograd IFS respondents were more likely to report ever having injected with a used needle/syringe than RDS respondents (61 vs. 40%, p < 0.001).
The opposite was found in Barnaul (53 vs. 63%, respectively, p < 0.003). In both cities and with both methods, the main source of new needles/syringes was pharmacies. A higher proportion of IFS respondents reported using needle/syringe exchanges or treatment centres in both cities, and in Barnaul a higher proportion of RDS respondents reported using a source other than needle/syringe exchange (defined as friend, dealer, family or on the street) as their main source of needles/syringes.

Table 3 summarizes the multivariate analysis for the categorical and continuous variables for Barnaul and Volgograd. After controlling for all independent variables, our findings indicate that RDS recruited a population 0.07 years younger in both cities. In both cities RDS participants had been injecting slightly longer than IFS participants. In Barnaul, RDS participants were less likely to report obtaining their new needles/syringes from pharmacies (−5%), but there was no evidence to suggest a difference in Volgograd. In Volgograd RDS participants were less likely to report using needle/syringe exchanges (−3%), but there was no difference in Barnaul. RDS participants in Barnaul were 4% more likely to report using another source for their new needles/syringes (Table 3).
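Group comparisons such as the 61% vs. 40% difference in Volgograd rest on a standard two-sample proportion test. A minimal stdlib sketch of the arithmetic, using hypothetical cell counts (the paper reports percentages only, so the denominators of 400 per arm are invented for illustration):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from |z|
    return z, p_value

# Hypothetical counts matching the reported 61% vs. 40% (denominators invented).
z, p = two_prop_ztest(244, 400, 160, 400)
print(f"z = {z:.2f}, two-sided p = {p:.1e}")
```

With any plausible denominators of this size, a 21-percentage-point gap yields p well below 0.001, consistent with the reported result.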
Multivariate Analysis
In the logistic regression analysis (Table 4), RDS was more likely to result in a higher proportion of male IDUs (Barnaul OR = 2.0, Volgograd OR = 3.8) and participants who had attended higher education (Barnaul OR = 5.2, Volgograd OR = 3.0). In Barnaul RDS participants were more likely to have official residency permits (OR = 4.6), but not in Volgograd. In both cities, RDS participants were more likely than IFS participants to inject heroin rather than mak or vint (Barnaul OR = 2.5 and Volgograd OR = 3.4). In Volgograd RDS participants had almost twice the odds of reporting daily injection (OR = 1.7) compared with IFS participants, and had reduced odds of ever injecting with used needles/syringes (OR = 0.3). In Barnaul, RDS participants were more likely to report injecting with used needles/syringes both in the last 4 weeks and ever (OR = 1.6 and OR = 1.4, respectively).
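The odds ratios in Table 4 come from logistic regression adjusted for covariates; the unadjusted version of such an estimate is simply the cross-product ratio of a 2×2 table. A sketch with invented counts (not the study's data) showing the arithmetic and a Wald 95% confidence interval:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table
    [[a, b], [c, d]]: rows = RDS/IFS, cols = trait present/absent."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Invented counts: 300/400 RDS vs. 240/400 IFS respondents male.
or_male, ci = odds_ratio(300, 100, 240, 160)
print(f"OR = {or_male:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

The adjusted ORs in the paper would come from a fitted regression model rather than this single-table calculation, but the interpretation of the ratio is the same.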
Costs
The total cost of conducting an IFS survey recruiting 400 respondents was estimated to be $14,651 (USD), compared with $16,100 for the RDS survey (Table 5). This translates to $43 per respondent using RDS and $37 using the IFS method. Increasing the sample from 400 to 500 reduced the average cost per respondent by $1 for the RDS method and by $3 for the IFS method. Reducing the sample from 400 to 300 respondents increased the cost per respondent by $2 for the RDS method and $5 for the IFS method. These results are presented in Table 5.
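The sensitivity of average cost to sample size follows from the fixed-plus-variable costing assumption (each extra 20 respondents adds one field day for the whole team). A sketch with a hypothetical split of the budget into fixed and per-day components; the paper reports only totals, so the FIXED and DAILY figures below are invented for illustration:

```python
def total_cost(fixed, daily_team_cost, n, respondents_per_day=20):
    """Total survey cost under the assumption that every 20 respondents
    require one field day for the entire fieldwork team."""
    days = n / respondents_per_day
    return fixed + daily_team_cost * days

# Hypothetical fixed/daily breakdown of a budget of this magnitude.
FIXED, DAILY = 10_000, 230
for n in (300, 400, 500):
    avg = total_cost(FIXED, DAILY, n) / n
    print(f"n = {n}: average cost per respondent = ${avg:.2f}")
```

Because the fixed component is spread over more respondents, the average cost per respondent falls as the sample grows, matching the direction of the changes reported above.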
DISCUSSION
Our findings suggest that RDS does appear to be a faster recruitment method and that there are significant differences in the demographic characteristics of IDUs recruited via RDS in comparison with those recruited via IFS. However, evidence from the two cities is conflicting with regard to whether RDS recruits IDUs who engage in riskier injecting practices.
One of the suggested benefits of RDS is its apparent ability to recruit the hardest-to-reach sections of hidden populations. 22 We found some differences in measures of marginalization and risk behaviours between the two recruitment methods: RDS participants tend to be slightly younger and are less likely to use needle/syringe exchange programs. Some evidence in Russia suggests that IDUs whose primary source of needles/syringes is informal are at greater risk of engaging in high-risk behaviours. 31 However, IFS participants were less likely to have attended higher education or to have official residency permits to live in the city, and more likely to be female. Lack of a residency permit is an indicator of marginalization, as it affects an individual's ability to use health services or obtain employment. 32,33 As no consistent trend emerges from the analysis of the effect of recruitment method on sample characteristics, the choice of method might be made on the basis of methodology and cost.

(Notes to Table 5: costs are presented assuming elements of fixed and variable cost at each sample size, with an extra 20 respondents requiring one extra day of the entire fieldwork team; to protect staff confidentiality, only the total of all salaries and fees paid to project staff is reported.)
Inclusion Criteria and Data Validity
With IFS, the responsibility for selecting the right target group is placed with the FWs, and its success depends on establishing a trusting relationship between the researchers and the fieldwork team. With RDS, issues of trust are less important, as researchers undertake the interviews themselves, but the problem of establishing whether respondents are genuine members of the target group remains. Although measures can be put in place that might reduce the chance of this happening, such as using indigenous field workers to screen participants or recording biometric measurements to avoid the same respondent being interviewed twice, it is very difficult to measure the extent to which fabricated data may enter a survey.
A disadvantage of both methods is that study participants who are not members of the target group may lie about their membership in order to receive a reward. This was the case in two of the IFS sites, where 9 and 14% of questionnaires were subsequently found to be fakes. This was discovered because strict validation processes had been set up and there was a good relationship between the FW supervisor and the indigenous field workers. In the Togliatti RDS study, 15 people were refused entry into the study because they were suspected of not being current injectors. However, these should be considered minimum estimates, as one cannot rule out that additional fabrication occurred and went undetected. Keeping the primary and secondary incentives modest can minimize the chances that participants who are not members of the target population will be recruited.
Determining the best incentive size is difficult and has many implications for the study, especially for RDS studies, where the secondary incentive is crucial to recruitment success. The networks recruited through RDS are largely artificial, created as a result of the study; since their composition depends on the incentive, changing the amount of incentive offered would change the composition of the network. This is illustrated by the case of sex workers in Eastern Europe, who have been found to be harder to recruit through RDS, in part because of the small incentive and their social network properties; this is discussed in more detail in a paper in this issue by Simic et al.
Adjusting the RDS sample to obtain 'population' estimates depends on the ability to recruit a random sample within a subject's social network and on a positive probability of recruiting everyone in that network. The possibility that the network is highly dependent on the incentive raises the question of whether the latter condition holds. This is particularly relevant when the definition of the population of study is fluid or artificially constructed by the research, as with IDUs and sex workers. It should also be noted that the collection of the network information that allows RDS analysis to produce 'population' estimates requires the respondent to recall detailed information on the composition of their network, including its size and each member's relationship with the recruiter. This process carries a large potential for error.
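The network-size adjustment described here is commonly implemented as the RDS-II (Volz-Heckathorn) estimator, which weights each respondent by the inverse of their reported network size so that well-connected (over-sampled) individuals count for less. A stdlib sketch with invented data:

```python
def rds_ii_estimate(outcomes, degrees):
    """RDS-II prevalence estimate: each respondent is weighted by the
    inverse of their self-reported network size (degree)."""
    numerator = sum(y / d for y, d in zip(outcomes, degrees))
    denominator = sum(1 / d for d in degrees)
    return numerator / denominator

# Invented data: 1 = reports the risk behaviour; degrees = network sizes.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
degrees = [20, 5, 10, 40, 5, 8, 25, 4]

naive = sum(outcomes) / len(outcomes)
adjusted = rds_ii_estimate(outcomes, degrees)
print(f"naive = {naive:.2f}, degree-adjusted = {adjusted:.2f}")
```

In this toy example the respondents reporting the behaviour happen to have large networks, so the degree-adjusted estimate falls well below the naive sample proportion, illustrating why recall errors in reported network size feed directly into the 'population' estimate.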
Personal Safety and Capacity Building
There are safety considerations that favour RDS as respondents attend a fixed site for an interview in which a minimum number of staff is always present. In the IFS method, interviewers may find themselves travelling to an area in which they are unfamiliar, and unintentionally put themselves in danger, especially if it becomes known that they are carrying financial rewards or gift packs.
LIMITATIONS
Whilst we have tried to limit confounding in our analysis by comparing RDS and IFS studies conducted in the same cities, the studies were conducted in different years, and the findings may be confounded by time. Time may be important in relation to behaviour, but is likely to be less important in relation to the sociodemographic characteristics of the target group. None of the studies was set up specifically with the aim of comparing sampling methodologies, and this limited the number of characteristics that could be compared between study methodologies. A study set up specifically to compare the methodologies might produce different results and facilitate more detailed comparison of characteristics. Additionally, the starting point for both the IFS and RDS studies in all sites was the local outreach team. This may have led to more similarities in sample composition than would have occurred if seeds had been selected through other methods. However, according to the principles of RDS, the selection of seeds does not ultimately influence the composition of the sample, since after several waves of recruitment the sample should be independent of the non-randomly selected seeds. 24
CONCLUSION
The HIV epidemic is driven by populations engaging in high-risk behaviours mixing with those engaging in lower-risk behaviours. It is important to identify the parameters of risk behaviour in order to model these epidemics and to design appropriate interventions. If we assume that, after adjusting for network sizes and homophily, RDS is more successful than IFS at estimating risk behaviours across a more representative population, then it could lead to more effective modelling and prediction of such epidemics; however, to date there is no evidence to suggest that this is the case. Our findings indicate that, as a recruitment strategy, RDS is no better than IFS at identifying populations with the highest-risk behaviours. It does have practical advantages in terms of the safety of the FW team, with faster recruitment at only a modest additional cost. In the meantime, until the statistical superiority of RDS can be proven, a preferred approach may be to adopt the best aspects of both methodologies, depending on the resources available. A combination could include the use of coupons for recruitment, together with training indigenous field workers to work alongside researchers to undertake interviews, serving to increase their capacity in research skills whilst ensuring that the correct target group is being reached.
Exploring Identity Perception and Bilingual Education Dynamics in Taiwanese University Settings
Introduction
In response to the imperatives of globalization, Taiwan has recognized the pivotal role of English in bolstering international competitiveness. This proactive stance is evidenced by the implementation of the 2030 Bilingual National Policy Development Blueprint, which underscores the necessity of enhancing English proficiency among students (National Development Council, 2018). In addition, substantial investments in the Forward-looking Infrastructure Development Program further emphasize the commitment to fortifying infrastructure conducive to future growth and advancement across sectors (Executive Yuan, 2023). The urgency for students to attain global competitiveness has prompted universities to intensify English instruction, fostering a bilingual teaching environment aimed at elevating Taiwan's position in international higher education. Through initiatives such as promoting English as a medium of instruction (EMI), the Ministry of Education endeavors to strengthen students' English abilities, thereby enhancing the international competitiveness of Taiwan's higher education institutions.
Addressing the global demand for improved English skills, many educational systems worldwide, including Taiwan's, have integrated English into their mandatory curricula (Kirkpatrick, 2016; Ministry of Education, 2018). In Taiwan, English is compulsory from elementary through secondary school, and all incoming freshmen at the university level are required to take a year-long Freshman English course (Chern, 2010). In recent years, universities have extended mandatory English courses beyond the freshman year, encompassing the sophomore, junior, and senior years. While English courses are commonly integrated into universities' general education requirements, the emphasis extends beyond mere linguistic proficiency. The Ministry of Education in Taiwan launched the Program on Bilingual Education for Students in College (referred to as the BEST program) in 2021, aiming to construct a bilingual teaching and learning environment in higher education institutions. This initiative seeks to enhance students' English proficiency and promote EMI to bolster the international competitiveness of higher education.
In the overall BEST program, there is a sub-program, the Generalized Enhancement program, which highlights progress on the EMI support system and the English proficiency resource system for students. This includes evaluating the effectiveness of the English teaching support system and the improvements in students' English proficiency. The mandatory Freshman English course is, of course, a prerequisite for improving all college students' English proficiency to facilitate their transition to EMI courses.
As part of the program, the Language Training and Testing Center (LTTC) is tasked with developing and administering the BEST Test of English Proficiency (BESTEP) (BESTEP, 2024). In Taiwan's BEST program for college students, it is essential to recognize the significance of English for students' future success. This importance extends beyond linguistic outcomes to encompass nonlinguistic transformations, including shifts in identity that influence perceptions of language and culture, as well as personal growth, experiences, cultural understanding, and cognition (Noels, Yashima, & Zhang, 2020).
While extensive efforts to promote bilingual education are commendable, a deeper understanding of identity perception among college students is crucial. Previous studies on college English courses prioritized enhancing linguistic outcomes, including teaching methodologies, enhancement programs, and proposals for standardized testing to evaluate effectiveness (Chern, 2010). However, research on students' identity perception in this context is limited. As the Freshman English course is mandatory for all college students, understanding freshmen's identity perception is essential for effective bilingual education. This understanding provides valuable insights into college students' English learning experiences, enabling educators to tailor pedagogical approaches effectively. Moreover, by comprehending how students perceive their identity in the context of language learning, educators can better support their linguistic and nonlinguistic development, fostering a more holistic approach to bilingual education. After all, education is a long-term endeavor. This study thus aims to address this gap by examining identity perception dynamics, thereby contributing to a deeper understanding of the multifaceted impacts of English language education.
Literature Review
In earlier literature, discussions of identity perception intersect with models of language learning outcomes. Gardner's (1985) socio-educational model, for instance, delineates two sets of outcomes: linguistic and nonlinguistic. Linguistic outcomes pertain to target language proficiency, while nonlinguistic outcomes encompass broader changes in the learner, such as attitudes towards language acquisition. Similarly, Lambert's (1975) model incorporates self-concept as a learning outcome. His distinction between subtractive and additive bilingualism has been influential in understanding learners' identity perception. Subtractive bilingualism involves the replacement of the native language and cultural identity with those of the target language, whereas additive bilingualism occurs when the acquisition of a second language and culture complements rather than replaces the first language and culture. As Baker (1993) aptly described, "when a second language and culture have been acquired with little or no pressure to replace or reduce the first language, an additive form of bilingualism may occur" (p. 95). Consequently, the relationship between language and identity has become a significant area of research in bilingualism and second language acquisition, as evidenced by studies conducted by Hall (2002), Norton (2000), and Schumann (1978).
English learning in the context of Far Eastern Asia differs significantly from the earlier studies mentioned. This distinction is notable in the contrast between learning English as a second language (ESL) and as a foreign language (EFL), where the immediate use of the target language outside the classroom may or may not be prevalent. However, regardless of whether a second or a foreign language is being learned, the acquisition of a new language can lead to changes in a learner's perceptions of their competence, communication styles, and values. In portraying an ideal type of successful second/foreign language learning, Gao (2002) introduced the concept of productive bilingualism, wherein command of the target language and command of the native language positively reinforce each other. Gao (2002) emphasized that a deeper understanding and appreciation of the target culture go hand in hand with a deeper understanding and appreciation of the native culture. Therefore, successful language learning transcends mere linguistic gains and involves a holistic process of cultural understanding and appreciation.
Building upon Gao's (2002) concept of productive bilingualism, empirical evidence was derived from open interviews with 52 adult learners recognized as the best foreign language learners in China, predominantly comprising professors, researchers, and translators. These adult learners, who could provide reflective insights into their language learning experiences, demonstrated bilingual productiveness across cognitive, affective, and aesthetic domains, leading to overall personal growth. Moreover, Gao, Zhao, Cheng and Zhou (2007) argued that the additive concept (1 + 1 = 2), rooted in non-replacement, inadequately explains the overall value added to bilingualism exhibited among the best language learners. Consequently, as an ideal form of bilingualism, productive bilingualism could be symbolized as 1 + 1 > 2, indicating that the whole exceeds the sum of its parts (Gao et al., 2007).
Empirical studies offer compelling evidence of the impact of identity on language learning. For example, Norton (1997, 2000) found that learners who view language learning as an investment in their identity are more likely to commit to ongoing efforts and long-term endeavors. Gao, Cheng, Zhao and Zhou (2005) demonstrated that college majors significantly influence identity construction, with English majors experiencing distinct identity shifts compared to other majors. The process of identity formation involves continuing interaction among personal, social, and cultural factors. Positive identity formation enhances motivation, engagement, and academic performance by fostering a sense of belonging and purpose; conversely, negative identity experiences can lead to disinterest and lower achievement, often due to perceived marginalization or lack of support (Forbes et al., 2021; Norton Peirce, 1995; Stables, 2003). Understanding these dynamics is essential for educators to develop effective pedagogical strategies that support both linguistic and nonlinguistic development, ultimately promoting a more inclusive and motivating learning environment.

Gao et al. (2005) examined the relationship between English learning and identity perception among Chinese college students. Utilizing a quantitative approach, they surveyed 2278 undergraduates from 30 universities, employing a custom questionnaire that operationalized six categories of self-identity change. These categories included subtractive bilingualism, additive bilingualism, productive bilingualism, self-confidence, identity split, and zero change, drawing upon existing literature on bilingual identities (e.g., Clément, Dörnyei, & Noels, 1994; Lambert, 1975). Results revealed that English learning significantly influenced learners' perception of their own competency, with self-confidence emerging as the most notable change. In essence, learners' perception of their competence was the aspect of identity most affected by English learning. Moreover, the study found that learners also experienced productive and additive changes in their values and communication styles. These findings underscore the profound impact of English learning on identity perception within the EFL context in China, distinguishing it from ESL contexts. Furthermore, previous research suggests that achieving productive bilingualism, once thought to be attainable only by exceptional language learners (Gao, 2002), is actually quite common among ordinary college students. This highlights the need for a nuanced understanding of bilingualism and its implications for identity perception among college students.
In their subsequent research, Gao et al. (2007) advocated a comprehensive approach to English language education in EFL contexts, emphasizing the importance of focusing not only on language proficiency but also on the learners themselves. They argued that the issue of self-identity in EFL settings is equally, if not more, crucial compared to ESL contexts. In addition, individual identity perception among learners may be intertwined with broader transformations in national or regional identity, influenced by ongoing processes of modernization or globalization (Ushioda, 2006). As a result, the nonlinguistic outcomes of English learning require significant attention from EFL researchers and educators, given that language learning can potentially induce shifts in learners' identities. Gao et al. (2007) highlighted the educational significance of the relationship between identity perception and language learning motivation. This underscores the need for educators to move beyond teaching language skills and standardized linguistic outcomes and, instead, to address the broader educational implications of language learning.

Recently, two studies by Chang (2022, 2024) offer valuable insights into Taiwan's language policy landscape and its implications for identity perception. Chang (2022) examines Taiwan's envisioned identity as a Mandarin-English bilingual nation within the framework of the 2030 Bilingual Nation policy. The study aimed to critically examine the policy using a two-phase analysis of the Blueprint for Developing Taiwan into a Bilingual Nation by 2030 (National Development Council, 2018). The first phase involves a macro-level analysis of keywords, while the second phase employs a micro-level qualitative content analysis to identify themes and patterns in the government's portrayal of national identities and its vision for the future. The study uncovers dominant discourses shaping Taiwan's top-down imagination and reimagination, emphasizing the primacy of English and its association with global competitiveness. However, it also highlights the need for a more critical and inclusive approach that acknowledges Taiwan's linguistic diversity and cultural heritage.
In contrast, Chang's (2024) study focuses on Taiwanese university students' perceptions of the Bilingual 2030 policy, employing Stance Theory (Du Bois, 2007) to analyze their evaluations and self-positioning. This study examines 43 undergraduate students, mainly majoring in English, from a top-ranking public research university in northern Taiwan, enrolled in sociolinguistics classes taught by the researcher. They were tasked with writing position papers on the Bilingual 2030 policy over a 4-week period. Qualitative content analysis of the papers was conducted in three phases: identifying each student's stance, examining evaluations and self-positioning, and parsing out prevalent discourses. The study reveals students' diverse responses, challenging the top-down narrative of national identity construction and emphasizing the complex linguistic and social realities in Taiwan. By examining students' nuanced understandings and critical reflections on the policy, the study underscores the importance of incorporating diverse stakeholder perspectives in language policy discussions and promoting a more inclusive and participatory approach to policy formulation and implementation.
Together, Chang's (2022, 2024) studies provide a comprehensive overview of Taiwan's language policy landscape and its impact on identity perception, emphasizing the importance of a nuanced and inclusive approach to language policy development. Chang's 2022 study highlights Taiwan's emphasis on English proficiency for global competitiveness in language policy, alongside the call for inclusivity to recognize linguistic diversity. Her 2024 study underscores the importance of integrating diverse perspectives in policy discussions and promoting inclusive policy implementation, as evident in students' critical reflections. However, there remains a need for further exploration of learners' identity perception, particularly through quantitative analysis within the national bilingual program to identify statistical patterns or trends. Specifically, Freshman English is required to facilitate a smooth transition for college students into subsequent years of EMI instruction.
In the dynamic context of English language education, learners bring diverse identities. Consequently, it is essential to further explore the impact of individual differences. Recognizing that traditional, one-size-fits-all approaches to language education are inadequate for a diverse learner population highlights the necessity of addressing learner diversity in English language education (Tran & Duong, 2024). The conceptualization of gender as an individual variable has significantly enriched the understanding of the relationship between gender and language learning in classrooms. Norton and Pavlenko (2004a, 2004b) have extensively explored how gender influences language learning and identity formation, revealing that gender dynamics play a crucial role in shaping learning experiences. These advancements underscore the need to examine gender-specific effects in language learning, providing a foundation for investigating how gender influences identity perception in the context of English language education for Taiwanese college students. Similarly, college majors contribute differently to identity perception. For instance, Gao, Li and Li (2002) found that English majors experience distinct identity construction compared to other majors, underscoring the need to consider academic disciplines when addressing learner diversity. This focus on college majors, much like the consideration of gender, emphasizes the varied factors that influence identity formation in educational settings.
The current study aims to address a research gap in understanding Taiwanese college freshmen's perceptions of their identity during English language learning by drawing from existing theories of bilingualism and striving for a comprehensive combination of these theories tailored to the EFL context of Taiwan. In addition, the study seeks to explore the influence of gender and college major as individual differences on identity perception among college freshmen engaged in English language learning. By investigating the dynamics of identity perception, the study endeavors to provide a comprehensive understanding of the multifaceted impacts of English language education. These insights will guide pedagogical approaches beyond linguistic proficiency outcomes, significantly contributing to our understanding of the broader impacts of English learning in EFL contexts. The research questions guiding this study are as follows: (1) How do Taiwanese college freshmen perceive their identity following exposure to English learning? (2) What impact do gender and college major have on individual differences in identity perception among college freshmen engaged in English learning?
Participants
The study included 360 college freshmen enrolled in a Freshman English course at a university in northern Taiwan, consisting of 189 male and 171 female students. On average, these students had been learning English for 12 years, since elementary school. Their proficiency levels ranged from intermediate to advanced, as assessed by the university's placement test. However, proficiency level was not treated as an independent variable because of uncertainty about the alignment of the placement test with standardized proficiency tests such as TOEIC and TOEFL. Moreover, classes were not divided based on proficiency levels but were instead organized by department, resulting in a mixed-ability student body. Consequently, proficiency levels were excluded from consideration. English majors were intentionally excluded from the study to ensure a homogeneous group for focused examination of the topic in question, as many English majors are exempt from the Freshman English course. In addition, the course aims to enhance the language abilities of all first-year students, particularly non-English majors, preparing them for EMI courses tailored to their specialized fields in subsequent years, which is a core component of the Taiwan Bilingual Program. The participants represented a diverse range of majors from a total of seven departments. To facilitate effective comparison of individual differences among groups, majors were categorized by their respective colleges, as detailed in Table 1.
Instrument
The instrument utilized in this study was a questionnaire adopted from Gao et al. (2005), consisting of 24 items designed to measure participants' perception of self-identity change (see Appendix). The decision to employ this instrument was based on its alignment with existing literature on bilingual identities, extending its applicability to the context of EFL learning among Taiwanese students. Given the unique linguistic and cultural dynamics of language learning in Far Eastern Asian contexts compared to Western settings, and the distinction between EFL and ESL contexts, tailored instruments were deemed necessary. The questionnaire assessed self-identity change through 24 items, utilizing a five-point Likert scale (5 = strongly agree; 4 = agree; 3 = uncertain; 2 = disagree; 1 = strongly disagree). There were a total of six categories representing shifts in self-identity, each comprising four items. These categories are explained as follows: (1) Change in self-confidence involves an alteration in one's perception of personal competence. Examples include feeling confident when excelling in English, doubting abilities during difficulties, and recognizing growth after overcoming challenges (Items 1-4).
(2) Additive change entails the simultaneous existence of two sets of languages, behaviors, and values, each tailored to specific contexts. Instances include easily switching between Chinese and English, having different levels of confidence in each language, and preferring original-language dialogue in movies (Items 5-8).
(3) Subtractive change encompasses the replacement of native language and cultural identity with those of the target language. Examples involve feeling less idiomatic in Chinese, adopting Western behaviors, and rejecting traditional Chinese ideas (Items 9-12).
(4) Productive change involves mutual reinforcement of proficiency in both native and target languages. Examples include better appreciation of subtleties in the native language, increased sensitivity to external changes, and improved communication skills (Items 13-16).
(5) Split change entails identity conflict arising from struggles between languages and cultures. Examples include the subconscious mixing of languages, confusion in behavioral patterns, and conflicts in values and beliefs (Items 17-20).
(6) Zero change represents the absence of self-identity alterations. Examples include remaining unchanged regardless of the language used and viewing oneself as inherently constant (Items 21-24).
According to Gao et al. (2005), zero change was used as a reference point for comparing different categories of self-identity changes, while self-confidence change is regarded as independent of cultural identities. The remaining four categories represent changes in cultural identity. However, in line with Festinger's (1957) theory of cognitive dissonance, conflicting attitudes and behaviors must be resolved. Hence, split change is seen as an intermediate phase, with learners often developing other types of identity changes afterward to alleviate cognitive dissonance.
Data Collection and Analysis
Prior to the main study, a pilot study was conducted to assess the questionnaire's reliability. Out of 231 distributed copies, 200 valid responses were obtained (a return rate of 87%), with a Cronbach's alpha reliability coefficient of 0.70, indicating acceptable internal consistency. This result demonstrates that the questionnaire is reliable and suitable for data collection to address the research questions. In light of the satisfactory outcome of the pilot study, minor adjustments were made to the questionnaire, primarily focused on improving the clarity and comprehensibility of the Chinese translation for implementation in the main study.
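As an aside for readers without SPSS, the internal-consistency coefficient reported above can be reproduced from raw data. The sketch below computes Cronbach's alpha from a respondents-by-items matrix using only NumPy; the data, seed, and function name are our own illustrative assumptions, not taken from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 200 respondents, 24 items on a 1-5 scale,
# generated so that items share a common "trait" component.
rng = np.random.default_rng(0)
trait = rng.normal(3, 0.8, size=(200, 1))
scores = np.clip(np.rint(trait + rng.normal(0, 1, size=(200, 24))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Because the items share a latent component, the resulting alpha is high; with fully independent items it would fall toward zero.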
During the main study, participants were guaranteed anonymity and assured that their answers would not impact their course grades. The distribution of questionnaires was authorized by course instructors, and involvement was voluntary, without any incentives offered. From the 398 questionnaires distributed, 360 valid responses were gathered, yielding a return rate of 90%. These responses exhibited satisfactory internal consistency, with a reliability coefficient of 0.73.
The data analysis, carried out using SPSS, comprised two primary stages. Firstly, descriptive statistics were calculated for the different categories of self-identity change. Following this, a multivariate analysis of variance (MANOVA) was utilized to assess how gender and college major influenced individual differences in identity perception among college freshmen involved in English language learning. MANOVA was selected due to its appropriateness in analyzing the impacts of several independent variables, like gender and major, on two or more dependent variables, represented here by the six categories of self-identity change.
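The core quantity behind a MANOVA significance test is Wilks' lambda, the ratio of within-group to total generalized variance across the dependent variables. The sketch below illustrates that computation for a simplified one-way, two-group case in plain NumPy; it is not the two-factor model the study ran in SPSS, and the group data are invented.

```python
import numpy as np

def wilks_lambda(groups):
    """Wilks' lambda for a one-way MANOVA.

    groups: list of (n_g x p) arrays, one per factor level,
    where p is the number of dependent variables.
    """
    all_obs = np.vstack(groups)
    grand_mean = all_obs.mean(axis=0)
    p = all_obs.shape[1]
    B = np.zeros((p, p))  # between-group SSCP matrix
    W = np.zeros((p, p))  # within-group SSCP matrix
    for g in groups:
        d = g.mean(axis=0) - grand_mean
        B += len(g) * np.outer(d, d)
        centered = g - g.mean(axis=0)
        W += centered.T @ centered
    return np.linalg.det(W) / np.linalg.det(W + B)

# Invented data: two groups, three dependent variables each.
rng = np.random.default_rng(1)
g1 = rng.normal(14, 2, size=(30, 3))
g2 = rng.normal(13, 2, size=(30, 3))
print(round(wilks_lambda([g1, g2]), 3))
```

Values near 1 indicate little separation between groups; smaller values indicate a stronger multivariate group effect, which is what the F statistics reported below are derived from.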
Results and Discussion
This section presents the results of the data analysis and the subsequent discussion, addressing the two research questions: how Taiwanese college freshmen perceive their identity in English learning, and how gender and college major influence individual differences in identity perception among those engaged in English learning. The instrument employs a 5-point Likert scale with scores ranging from 1 to 5, and each category of self-identity change in the questionnaire comprises four items, giving a possible category score of 20. A critical value of 12, the score obtained by answering "uncertain" (3) to all four items, is used to distinguish between changed and unchanged states; it marks the threshold at which participants agree with self-identity changes in each category. The following parts detail the results of the data analysis and the ensuing discussion in response to the two research questions.
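The scoring rule just described, four items per category summed to a total out of 20 and compared against a cut-off of 12, can be sketched as follows. The item-to-category mapping follows the ranges given in the instrument description above; the respondent data and helper names are invented for illustration.

```python
# Item ranges per category, as given in the instrument description (1-indexed).
CATEGORIES = {
    "self-confidence": range(1, 5),
    "additive": range(5, 9),
    "subtractive": range(9, 13),
    "productive": range(13, 17),
    "split": range(17, 21),
    "zero": range(21, 25),
}
CRITICAL_VALUE = 12  # four items all answered "uncertain" (3)

def category_scores(responses):
    """responses: dict mapping item number (1-24) to a Likert score (1-5)."""
    return {name: sum(responses[i] for i in items)
            for name, items in CATEGORIES.items()}

def changed_categories(responses):
    """Categories whose total exceeds the critical value."""
    return [name for name, s in category_scores(responses).items()
            if s > CRITICAL_VALUE]

# Invented respondent: agrees with the self-confidence items, uncertain elsewhere.
resp = {i: 3 for i in range(1, 25)}
for i in range(1, 5):
    resp[i] = 4
print(changed_categories(resp))  # ['self-confidence']
```

A category score of exactly 12 is treated as unchanged, since only scores above the threshold indicate agreement with a change.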
How do Taiwanese College Freshmen Perceive their Identity Following Exposure to English Learning?
Table 2 presents descriptive statistics of how Taiwanese college freshmen perceived their identity across six categories of self-identity change. The most noticeable change for these participants was in self-confidence (mean = 14.73), indicating that many students experienced shifts in their perception of personal competence. The second-highest score (mean = 14.37), under zero change, suggests a lack of exposure to the target culture in the Taiwanese EFL context. However, there were some changes in the cultural aspects of learner identity, with the productive (mean = 12.47) and additive (mean = 12.43) types showing apparent changes. This result suggests that productive bilingualism, as proposed by Gao (2002), also exists among Taiwanese college freshmen, although the mean score is just slightly higher than 12. Therefore, quite a few participants in this study recognized productive bilingualism, agreeing that a deeper understanding of the target culture is linked to that of the native culture. Next, the responses to statements for each self-identity change category will be discussed, reflecting the extent (descriptive statistics indicating the percentages of choices) to which they agree, disagree, or express uncertainty. Under self-confidence changes, it's noteworthy that up to 60% of students question their own competence when facing challenges in learning English. While previous research often emphasized the role of self-confidence in language learning (e.g., Clément et al., 1994), Gao et al. (2005) proposed that self-confidence can be an outcome of English learning. It's not merely a cause for learners to pursue language proficiency but can also result from language learning. In addition, 73.8% of students reported feeling accomplished when their English proficiency surpassed that of others, and 71.3% noted personal progress when overcoming challenges in English learning.
Self-confidence can serve as both a motivator for achieving more in English learning and as a consequence of overcoming difficulties and progressing in English proficiency. This clarifies why up to 64.9% of students agree that English learning significantly impacts their self-confidence (see Table 3). In the context of zero change, the second highest score (mean = 14.37) likely suggests a limited exposure to the target culture within the Taiwanese EFL environment. Notably, a significant majority (77.8%) of students maintain their sense of self irrespective of the language they use, implying a lack of significant shifts in self-identity when communicating in English. This observation aligns with the perspective that, for many students (72.5%), English primarily functions as a tool for communication and does not lead to fundamental transformations in their identities. What's more, 46.1% of students report no perceived changes in themselves after participating in English language learning. However, responses to Item 24 are mixed: 38% agree that discussing personal changes after learning English is meaningless, 31.4% disagree, and 30.6% are uncertain. Therefore, responses to Item 24 do not clearly indicate whether students undergo zero change in self-identity (refer to Table 4). As highlighted by Gao et al.
(2005), zero change serves as a benchmark for assessing various categories of self-identity shifts, signifying the absence of alterations in one's self-perception. Significantly, many students retained a consistent self-image regardless of the language used, likely due to limited exposure to the target culture within the Taiwanese EFL environment. Since zero change was employed as a reference point, the subsequent analysis delves into students' experiences of self-identity changes concerning cultural identity. It's worth noting that language learning encompasses more than just changes in proficiency; it also involves transformations in values, behaviors, communication styles, beliefs, and other non-linguistic outcomes.
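The agree/uncertain/disagree percentages cited throughout this section (and tabulated in Tables 3-8) can be derived from raw item responses with a small helper like the one below. The convention assumed here, counting 4-5 as agreement, 3 as uncertainty, and 1-2 as disagreement, matches the scale definition given in the instrument description; the respondent data are synthetic.

```python
import numpy as np

def response_breakdown(item_scores):
    """Percentage of respondents agreeing (4-5), uncertain (3),
    and disagreeing (1-2) on a single questionnaire item."""
    scores = np.asarray(item_scores)
    n = len(scores)
    return {
        "agree": round(100 * np.sum(scores >= 4) / n, 1),
        "uncertain": round(100 * np.sum(scores == 3) / n, 1),
        "disagree": round(100 * np.sum(scores <= 2) / n, 1),
    }

# Synthetic item: 10 respondents on a 1-5 scale.
item = [5, 4, 4, 3, 3, 2, 5, 4, 1, 3]
print(response_breakdown(item))  # {'agree': 50.0, 'uncertain': 30.0, 'disagree': 20.0}
```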
For productive change, the third highest score (mean = 12.47) emphasizes the mutual reinforcement of proficiency in both native and target languages. Particularly, 49.8% of students reported a better appreciation of nuances in Chinese as their English proficiency improved, while 43.1% became more attuned to external changes after engaging in English learning, reflecting an increased sensitivity to external stimuli. However, changes in empathy or communication skills post-English learning were less apparent (in response to Item 15). Moreover, students' growing appreciation for English literature and arts did not correspond with an increased interest in Chinese literature and arts (in response to Item 16). Therefore, although the mean score of 12.47 indicates that students agree they experience productive change in identity, agreement for Items 13 to 16 is all below 50%, demonstrating notable differences in students' opinions on these items (see Table 5). As previously discussed, Gao (2002) introduced the concept of productive bilingualism as an optimal language learning approach. Furthermore, she pointed out the limitations of the additive change approach, which involves the simultaneous presence of two language sets without replacement, in fully capturing the benefits observed in highly proficient language learners. In this study, the mean (12.47) for productive change slightly exceeds that for additive change, with a mean of 12.43. However, both means surpass the critical value of 12, indicating significant changes.
In the additive change category, 58.9% of students have both English and Chinese names, each used in specific contexts, while 57.1% prefer English dialogue in English movies and Chinese dialogue in Chinese movies. However, only 19% feel capable of seamlessly switching between Chinese and English, and just 15.6% report feeling self-assured when communicating in English and more reserved when using Chinese (Table 6). This is likely because students are learning English in an EFL context, where daily opportunities to use English are limited. Unlike in an ESL context, where there are daily opportunities to switch between two languages, it is less likely for students to easily switch between languages and naturally adapt to the expressive styles of both. Given that the additive change approach originates from ESL learning contexts, Gao (2002) proposed productive bilingualism as an effective language learning method for EFL learners. In this study, the mean score for productive change (mean = 12.47) is higher than that for additive change (mean = 12.43). Therefore, Taiwanese college freshmen perceived more change involving the mutual reinforcement of proficiency in both native and target languages (productive change) than in the simultaneous existence of two sets of language, behaviors and values tailored to specific contexts (additive change). This result also highlights the difference between ESL and EFL learning contexts, as immediate use of the target language in daily life is less common in Taiwan. Although Gao's (2002) study focused on adult learners, including professors, researchers and translators, who showcased bilingual proficiency across cognitive, affective and aesthetic areas and fostered holistic personal development through their language learning experiences, her advocacy for productive bilingualism as an ideal type of bilingualism for EFL learners is also applicable to Taiwanese college students.
Split and subtractive changes, with means of 10.26 and 8.74, respectively, did not surpass the critical value of 12 used to distinguish between changed and unchanged states. As observed in Tables 7 and 8, the majority of students express disagreement with split and subtractive changes.
For split change, Table 7 indicates that most students do not experience significant conflict when mixing Chinese and English in their speech or switching between cultural behaviors. Specifically, 58.1% of students strongly disagree or disagree with feeling weird when their Chinese speech is mixed with English words (Item 17), and 58.4% strongly disagree or disagree with feeling a painful split when switching between English and Chinese behavioral patterns (Item 18). Only 18.1% of students express agreement with Item 20, about being caught between conflicting values and beliefs, which could indicate internal conflict. However, it's worth noting that 41.4% of students express uncertainty about how to bid farewell to foreign friends (Item 19). This uncertainty suggests a lack of clarity regarding appropriate cultural norms for parting ways with friends from other cultures, such as whether to shake hands, hug, or kiss, and may indicate limited exposure or contact with the target culture. Split change could be seen as a transitional stage where students experience internal conflict between contradicting values and beliefs after learning English. To circumvent cognitive dissonance, learners undergoing split change might subsequently cultivate alternative forms of identity shifts (Festinger, 1957; Gao et al., 2005). Regarding subtractive change, Table 8 reveals that a large proportion of students strongly disagree or disagree with statements reflecting a loss of Chinese cultural identity after learning English. For instance, up to 76.3% of students strongly disagree or disagree with finding some Chinese conventions repugnant after learning English (Item 11). Similarly, in Item 12, 73.3% of students strongly disagree or disagree with starting to reject some traditional Chinese ideas after learning English. Furthermore, in Item 9, 68.9% of students strongly disagree or disagree with feeling that their Chinese is becoming less idiomatic as their English proficiency improves. However, it's essential to recognize the substantial percentage of students expressing uncertainty in Item 10 (26.7%), indicating some ambiguity or internal conflict about whether their behaviors have become somewhat Westernized after learning English. These results suggest a strong preservation of Chinese cultural identity among students, despite their engagement in English language learning. Overall, these findings underscore the complex interplay between language learning and cultural identity formation, highlighting the need for educators to support students in navigating these challenges while fostering a positive bilingual experience.
What Impact do Gender and College Major have on Individual Differences in Identity Perception among College Freshmen Engaged in English Learning?
To explore the impact of gender and major on identity perception, a MANOVA test was employed, using the six types of self-identity changes as dependent variables and gender along with college major as independent variables. The results of the MANOVA revealed significant main effects for both gender (F[6, 353] = 6.32, p = .000) and major (F[18, 993] = 2.48, p = .001) on identity changes. Subsequent analysis will probe the main effects of gender and major on these changes.
Figure 1. The Effect of Gender on Self-identity Changes

Female students demonstrated greater sensitivity to both success and frustration in their English learning experiences, manifesting more pronounced changes in self-perception, including feeling confident when successful in English, doubting their abilities during challenges, and recognizing personal growth after overcoming obstacles (i.e., self-confidence change). In addition, they showed a stronger inclination towards the simultaneous existence of two sets of language, behavior and values tailored to specific situations, as evidenced by their agreement on their ability to switch languages effortlessly, varying levels of confidence in each language, and preference for original language dialogues in movies (i.e., additive change). Furthermore, female students exhibited enhanced proficiency in both their native and target languages, indicated by their improved understanding of subtleties, heightened awareness of external changes, and enhanced communication skills (i.e., productive change). These findings align with Gao et al.'s (2005) study, suggesting that female students may be more susceptible to changes in self-confidence and exhibit more permeable ego boundaries, leading to increased adaptability in managing conflicts between different linguistic and cultural frameworks through situational adjustments or productive integration.
College major also had significant main effects on identity perception across the same three categories: self-confidence (F[3, 356] = 5.13, p = .002), additive (F[3, 356] = 6.48, p = .000) and productive (F[3, 356] = 3.68, p = .012) (see Figure 2). Remarkably, liberal arts majors exhibited more pronounced changes than other majors across all three categories. Firstly, significant differences were found between liberal arts majors and business majors (MD = 2.166, p = .000), between liberal arts majors and science majors (MD = 1.418, p = .010), and between liberal arts majors and engineering majors (MD = 1.396, p = .011) regarding self-confidence changes. Liberal arts majors experienced more changes in self-perceived competence, such as feeling confident in success, doubting abilities in challenges, and acknowledging growth after overcoming obstacles. Secondly, liberal arts majors demonstrated more changes in additive change compared to business majors (MD = 1.301, p = .006), science majors (MD = 1.670, p = .000) and engineering majors (MD = 1.952, p = .000). This suggests that liberal arts majors have a stronger tendency toward the coexistence of two language sets, behaviors, and values in specific situations.
Thirdly, liberal arts majors experienced more changes in productive change than business majors (MD = 1.536, p = .002), science majors (MD = 1.386, p = .006) and engineering majors (MD = 1.389, p = .005). This indicates that liberal arts majors demonstrated a greater enhancement of proficiency in both native and target languages compared to other majors.
The liberal arts majors in this study were from the department of Teaching Chinese as a Second Language (TCSL; Table 1), which focuses on preparing Chinese teachers with international mobility, professional language teaching skills, and foreign language proficiency to promote Chinese culture worldwide. As a result of their departmental focus, these students may have exhibited greater sensitivity to both success and frustration in their English learning experiences, leading to more pronounced changes in self-confidence change, a stronger inclination towards the simultaneous existence of two language sets (additive change), and enhanced proficiency in both their native and target languages (productive change) compared to students in other majors.
The finding that liberal arts majors displayed more pronounced changes than other majors across self-confidence, additive and productive changes resonates with Gao et al.'s (2005) study in China, which observed similar patterns among English majors. However, in Gao et al.'s (2005) study, English majors also experienced significant changes in subtractive aspects, indicating a stronger inclination towards Westernization. Gao et al. (2005) explained that English majors may reject or toggle between native cultures, languages, and cultural norms, or integrate them more effectively. In contrast, students from the TCSL department did not undergo significant subtractive changes. This discrepancy may stem from the nature of their profession, which focuses on promoting the Chinese language and culture to non-Chinese speakers. While English majors devote significant time and effort to integrating themselves into the English language and culture, TCSL majors prioritize promoting the Chinese language and culture to an international audience. Therefore, while both TCSL and English majors experience changes in self-confidence and cultural identity through their engagement in English learning, English majors may exhibit more pronounced Westernization.
In the current study, it is interesting to note that the trends of changes in each major appear to be developing similarly. This means that there are no particularly extreme differences within each category. For example, in terms of self-confidence change, the mean scores for each major are relatively high; for subtractive change, none of the mean scores for each major exceed the critical value.
Since significant main effects of major on self-confidence, additive, and productive changes were detected, a post hoc test was conducted to explore other significant differences among these multiple groups. Concerning subtractive change, business majors scored markedly higher than science majors (MD = 1.053, p = .026), indicating that business majors experience more substitution of their native language and cultural identity with the target language than science majors. This might be because business majors are more frequently exposed to international business practices and interactions, which necessitate a greater adaptation to the target language and culture. Similarly, regarding split change, business majors also scored significantly higher than science majors (MD = .885, p = .030), suggesting that business majors experience more identity conflict stemming from language and cultural struggles than science majors. This might be explained by the diverse and often conflicting demands of the global business environment, which requires business majors to navigate multiple cultural and linguistic contexts. Conversely, for zero change, science majors scored substantially higher than liberal arts majors (MD = 1.158, p = .042), indicating that science majors experience a greater absence of self-identity changes than liberal arts majors. The reason might be that science majors' studies are less focused on cultural and linguistic aspects, leading to fewer opportunities for identity changes.
Conclusion
The conclusion of this study reveals significant insights into identity perception dynamics in EFL contexts and their relevance to bilingual education. Taiwanese EFL college freshmen experience notable changes in self-confidence, productive, and additive aspects, indicating enhanced proficiency and a balanced integration of language sets, behaviors and values. Most maintain a stable self-identity across languages, with no significant subtractive or split changes observed. Gender and college major differences influence identity perception, with females showing more self-confidence, additive and productive changes. Liberal arts majors exhibit the most pronounced changes across the self-confidence, additive and productive categories, while business majors show more subtractive and split changes, and science majors demonstrate a greater absence of self-identity changes.
Theoretical and Pedagogical Implications
The findings of this study contribute significantly to the theoretical understanding of identity perception dynamics in EFL contexts. By examining the nuanced changes in self-confidence, productive and additive aspects among Taiwanese EFL learners, the study adds depth to existing literature on bilingualism and identity development. These insights underscore the importance of considering individual differences, such as gender and major, in understanding how identity perception evolves within language learning environments. Moreover, the absence of significant subtractive or split changes suggests stable self-identity maintenance among learners, challenging previous assumptions about identity fluidity in bilingual settings. Theoretical frameworks, such as those proposed by Baker (1993), Lambert (1975) and Gao (2002), provide a lens through which to interpret these findings and highlight the need for further research into the intricate interplay between language learning, cultural identity, and psychological development.
From a pedagogical standpoint, the insights from this study have several implications for English language teaching in Taiwan and similar EFL contexts. Educators can use the findings to design more tailored language learning programs that address the specific needs and challenges faced by different student populations. For instance, acknowledging the greater self-confidence and adaptive capacity observed among female learners can inform teaching strategies that foster a supportive and inclusive learning environment for all students. In addition, recognizing the pronounced changes in self-identity perception among liberal arts majors underscores the importance of interdisciplinary approaches to language education that integrate cultural studies and language proficiency development. Furthermore, the identification of business majors' higher propensity for subtractive and split changes suggests the need for targeted interventions aimed at promoting a more balanced integration of linguistic and cultural identities in professional contexts. Overall, these pedagogical implications emphasize the importance of adopting a holistic approach to language education that considers the multifaceted nature of identity development within EFL settings.
Limitations of the Study and Suggestions for Future Research
While the study provides valuable insights into identity perception dynamics among Taiwanese college freshmen engaged in English language learning, several limitations warrant consideration. Firstly, the sample size and representativeness of the participants may be limited, potentially affecting the generalizability of the findings. Future research could address this by employing larger and more diverse samples to enhance the external validity of the results. Secondly, the cross-sectional design utilized in the study limits the establishment of causal relationships between variables. Longitudinal studies could provide a more comprehensive understanding of how gender and college major influence identity perception over time.
In addition, reliance on self-report measures for assessing identity perception may introduce bias or social desirability effects. Employing objective measures or triangulating findings with multiple methods, including mixed methods for data collection and analysis, could strengthen the validity of the results. Incorporating qualitative methods such as interviews or reflection reports would allow for a more comprehensive and in-depth exploration of students' identity perceptions.
Furthermore, the study may not adequately control for confounding variables such as English proficiency, socioeconomic status, or cultural background, which could impact the interpretation of findings. Future research should consider controlling for these variables to better isolate the effects of gender and college major on identity perception. The inclusion of English proficiency as a variable is particularly crucial, as it could significantly influence identity dynamics.
Moreover, the cultural specificity of the study population limits the generalizability of the findings to other cultural or linguistic contexts. Comparative studies across different cultural settings could provide a more nuanced understanding of identity perception dynamics. Finally, while the theoretical frameworks utilized in this study offer valuable insights, varying interpretations may lead to potential biases in analysis and findings. To mitigate this, future research could benefit from providing explicit and detailed definitions of key concepts and theoretical assumptions. By enhancing clarity and transparency in theoretical application, researchers can strengthen the validity and robustness of their findings, thereby contributing to the advancement of knowledge in the field.
Addressing these limitations could strengthen the methodological rigor and validity of studies in this area, advancing our understanding of identity perception in EFL contexts.
Figure 2. The Effect of Major on Self-identity Changes
Table 1. Distribution of Majors by College among Participants
Table 2. Descriptive Statistics for Self-identity Changes
Table 3. Responses to Statements Reflecting Self-confidence Change
Table 4. Responses to Statements Reflecting Zero Change
Table 5. Responses to Statements Reflecting Productive Change
Table 6. Responses to Statements Reflecting Additive Change
Table 7. Responses to Statements Reflecting Split Change
Table 8. Responses to Statements Reflecting Subtractive Change
Host nutritional status: the neglected virulence factor
The emergence of new infectious diseases and old diseases with new pathogenic properties is a burgeoning worldwide problem. Severe acute respiratory syndrome (SARS) and acquired immune deficiency syndrome (AIDS) are just two of the most widely reported recent emerging infectious diseases. What are the factors that contribute to the rapid evolution of viral species? Various hypotheses have been proposed, all involving opportunities for virus spread (for example, agricultural practices, climate changes, rainforest clearing or air travel). However, the nutritional status of the host, until recently, has not been considered a contributing factor to the emergence of infectious disease. In this review, we show that host nutritional status can influence not only the host response to the pathogen, but can also influence the genetic make-up of the viral genome. This latter finding markedly changes our concept of host–pathogen interactions and creates a new paradigm for the study of such phenomena.
The unexpected and sudden emergence of human immunodeficiency virus (HIV) is the most widespread recent example of the ability of viruses to continue to cause a great deal of morbidity and mortality in human populations. Recently, the outbreak of severe acute respiratory syndrome (SARS) has again demonstrated our continuing vulnerability to newly emergent viruses. It is important to understand the underlying mechanisms involved in the emergence of new viral pathogens with altered pathogenic potential. Understanding how emergence occurs will assist in recognizing conditions of risk for new viral outbreaks and also in developing therapeutic strategies to prevent or limit them. Data from our laboratory [1][2][3] and others [4,5] have demonstrated that one driving force for the emergence of new viral variants is the nutritional status of the host. Using two very different viruses (coxsackievirus and influenza virus) as model systems we have shown that a host deficiency in either selenium (Se) or vitamin E, or an excess of iron, results in a change in the viral genome. In other words, specific, stable and reproducible viral mutations occur in the genome when nutritionally compromised animals are infected with these viruses; these mutations result in increased virulence of both coxsackievirus and influenza virus [1,2]. Once these mutations occur, even hosts with normal nutritional status are susceptible to the newly virulent virus. This work represents a new area of research into the interaction of host nutrition and emerging infectious disease.
Coxsackievirus and Keshan disease: the nutrition-virus nexus
In 1935, a severe outbreak of an endemic cardiomyopathy that afflicted mainly infants, children and women of childbearing age occurred in Keshan County, Heilongjiang Province, China [6]. Within a number of years, Keshan disease (as the condition came to be known) affected thousands of people and it became the top disease priority of the Chinese Ministry of Public Health. Several hypotheses were proposed to explain the cause of the disease, but it was not until 1979 that a connection was established between nutritional Se deficiency and Keshan disease. The amount of evidence that supported this hypothesis was impressive. Epidemiological surveys showed that Se levels in the soils, foods and people residing in highly endemic areas were very low compared with levels in control regions free of the disease [7]. Moreover, Chinese scientists carried out a large intervention trial that demonstrated quite conclusively that supplementation of individuals with nutritional amounts of sodium selenite effectively prevented the disease [8]. Widespread use of Se supplements in the endemic Keshan disease areas led to a drastic decline in the number of cardiomyopathies observed in these areas.
Despite the great success of the 'selenium hypothesis' in explaining multiple features of Keshan disease, it became apparent that nutritional Se deficiency in itself could not account for all the characteristics of the disease. For example, Keshan disease exhibits wide swings in prevalence from one year to another and even from one season to another. Such behavior is more consistent with an infectious disease than with a nutritional deficiency. The Chinese scientists realized this and were able to demonstrate that certain enteroviruses, particularly a coxsackievirus B4 isolated from a Keshan disease victim from Chuxong County in Yunnan Province, were able to induce heart lesions with greater severity in mice fed a diet low in Se than in mice fed the same diet supplemented with Se [9].
More recently, it has been possible to show that enterovirus isolates from patients with heart muscle disease in a Se-deficient area of China were predominantly coxsackievirus group B serotypes in the region in which Keshan disease is endemic. Thus, these viruses might contribute to the pathology of Keshan disease, as coxsackie B viruses are known etiological agents of myocarditis [10].
Coxsackievirus B3 and Se deficiency: animal models

To understand the relationship between host nutritional status and virus infection, we used our well-characterized murine model of coxsackievirus-induced myocarditis. Coxsackievirus B3 (CVB3) infection of mice can cause myocarditis, similar to that found in human populations. However, infection of mice with an avirulent strain of CVB3 (designated CVB3/0) does not lead to myocarditis, although replicating virus can be isolated from the hearts of infected mice. For our model, we divided mice into two groups and fed one group a normal diet and the other group a diet deficient in Se. After four weeks, all mice were infected with the benign strain CVB3/0. As expected, the infected mice fed the Se-sufficient diet did not develop any cardiac inflammation. However, the Se-deficient mice developed moderate to severe myocarditis [11]. To determine if the increase in virulence was due to host factors alone, or a result of alterations in the virus, we isolated virus from the hearts of Se-deficient mice and passed it back into Se-adequate mice. If host factors alone were the cause of the increase in virulence, then the Se-adequate mice infected with virus isolated from Se-deficient mice should not develop disease. However, the infected mice did develop myocarditis, suggesting that the virus itself had been altered [11].
Sequencing of the viral genomic RNA obtained from infected Se-adequate and Se-deficient mice confirmed that a viral genome change had occurred (Table 1). Out of the ten nucleotide positions that were reported to co-vary with cardiovirulence in CVB3 strains [12], six reverted to the virulent genotype in those virions that replicated in Se-deficient mice [1]. No nucleotide changes were found in viral genomes isolated from Se-adequate control mice. The mutations persisted after the now virulent virus was passed into naive Se-adequate mice, producing pathology (Figure 1). Therefore, replication in a Se-deficient host led to specific viral mutations, which changed an avirulent virus into a virulent one. Once these mutations occurred, even Se-adequate mice were susceptible to the newly pathogenic virus.
CVB3 mutations and oxidative stress
One of the functions of Se is that it acts as an antioxidant, primarily through its association with the antioxidant enzyme glutathione peroxidase (GPX). GPX incorporates Se as selenocysteine (a novel 21st amino acid in addition to the 20 commonly recognized ones). When Se is limiting in the diet the activity of GPX declines. Se is also incorporated into more than 20 other proteins, some of which have functions other than antioxidant protection. To determine if a decrease in GPX activity was a crucial step in the Se-associated change in virulence, we infected GPX-1 knockout mice with CVB3/0. These mice, similar to Se-deficient mice, developed myocarditis, whereas infected wild-type mice did not. Sequencing of the viral genome demonstrated mutation to the cardiovirulent genotype at seven nucleotide positions, of which six were identical to the mutations found in the virus isolated from Se-deficient mice [13] (Table 1).
Because vitamin E also acts as an antioxidant, although it works by a very different mechanism to Se, we wanted to determine if a lack of vitamin E would also affect the viral genome. As was found for the Se-deficient mice, mice fed a diet deficient in vitamin E and infected with CVB3/0 developed myocarditis [14]. Sequencing of the virus revealed that the same mutations occurred in the virus isolated from vitamin E-deficient mice as were found for Se-deficient mice. All of the experimental data led to the conclusion that oxidative stress is the common mechanism for the viral genome changes.
CVB3, vitamin E, excess iron and HIV
The redox-active ferrous ion is known to exert a powerful pro-oxidant effect in vivo as a result of its reaction with hydrogen peroxide to produce the extremely reactive hydroxyl free radical. In this way, excess dietary iron can damage a variety of cellular components, including lipids, nucleic acids and proteins [15]. Therefore, it was of interest to determine the effect of dietary iron overload on the ability of CVB3/0 to cause cardiopathology in our mouse model. Mice were fed either a diet containing a normal level of iron (35 parts per million or ppm) or an iron overload diet containing 1050 ppm of iron. At each level of dietary iron, half the mice received the same diet lacking vitamin E. After consuming their assigned diets for four weeks, the mice were infected with CVB3/0 (the amyocarditic strain of CVB3). In those mice that received the vitamin E-supplemented diets, consumption of the high iron diet resulted in elevated viral titers and increased heart damage versus the normal iron controls [16]. Consumption of the high iron diet that lacked vitamin E resulted in further increases in viral titers and heart damage. Therefore, here we have another example of how nutritional manipulation of host oxidative stress status can have an impact on viral pathogenesis, such that an amyocarditic form of the virus was converted into a myocarditic one. It has been reported that the clinical course of some HIV patients might be unfavorably affected by elevated iron status [17]. In pregnant Zimbabwean women, for example, there was a positive association reported between HIV-1 viral load and serum ferritin levels [18]. However, this positive association between HIV progression and iron status is not universally observed [19,20], and therefore the correlation is controversial [21,22]. Needless to say, any damaging effect of iron in HIV infection would have important public health implications because of the general use of iron supplements to prevent or cure anemia. 
Because of the strong combined effect of iron excess and vitamin E deficiency observed during infection with CVB3/0, it might be useful to assess vitamin E nutritional status in HIV patients who are given iron supplements.
Host nutritional status and influenza virus infection
The results observed during coxsackievirus infection suggested that viruses other than CVB3 might be susceptible to host nutritional stresses. To test this hypothesis, Se-deficient and Se-adequate mice were infected with influenza A/Bangkok/1/79, which normally induces only a mild pneumonitis in mice. Mice that were Se-deficient were found to develop severe lung pathology post-infection, whereas the Se-adequate mice developed only mild pathology [23].
Influenza virus contains a single-stranded segmented RNA genome, a lipid bilayer, which is of host derivation, and a matrix protein that lies underneath the lipid layer. The viral genome consists of eight RNA segments containing genes that encode different viral proteins, including the hemagglutinin (HA) and neuraminidase (NA) proteins (required for entry into and exit from the infected host cell, respectively), matrix proteins (M1 and M2), polymerase proteins and nucleoproteins. Viruses recovered from both Se-deficient and Se-adequate mice have been sequenced [2]. Consistent mutations in the M gene were recovered from Se-deficient mice ( Table 2). Three separate isolates from three individual Se-deficient mice all had identical mutations in 29 positions. One of the three isolates had an additional five mutations, with one additional amino acid change. Therefore, similar to what was found for coxsackievirus B3, host deficiency in Se leads to increased viral mutations in the influenza virus genome, resulting in a more virulent phenotype. How do changes in the M protein lead to increased virulence of the influenza virus? The M1 protein has been shown to influence virulence by increasing viral replication due to rapid uncoating from the viral ribonucleoproteins. Therefore, the faster the uncoating occurs, the quicker viral replication can begin [24,25]. Consequently, mutations in the M region of the genome might lead to increased viral replication of the mutant virus. Increased viral titers in turn might lead to increased lung pathology, and hence increased pathogenicity of the virus. In support of this hypothesis, viral titers of the mutant virus were higher in infected mice compared with wild-type virus [23].
Poliovirus and Se in humans
Poliovirus, similar to the coxsackieviruses, is a human enterovirus and a member of the Picornaviridae family. But in contrast to coxsackievirus, poliovirus cannot be studied using the usual mouse models, because rodents do not normally carry the human poliovirus receptor. However, it is possible to generate transgenic mice that express poliovirus receptors, thereby making them suitable for investigating numerous properties of poliovirus, including neurovirulence, attenuation and tissue tropism. Another experimental approach, of course, would be to study poliovirus in human subjects rather than in animal models. Broome et al. [4] supplemented three groups of healthy people (22 members, including 11 males and 11 females in each group) with 0, 50 or 100 µg of Se (as sodium selenite) per day for 15 weeks (for a discussion of what constitutes a nutritionally relevant dose of Se, see Ref. [26]). All subjects were judged to be of relatively low initial Se status as indicated by plasma Se concentrations < 1.2 µmol/L. After six weeks of supplementation, all subjects were given an oral live attenuated poliomyelitis vaccine. Supplementation continued uninterrupted after vaccination for a further nine weeks.
Supplementation with Se increased several indices of Se status in these subjects, including plasma Se concentrations and lymphocyte glutathione peroxidase activities. Supplementation also enhanced certain aspects of the cellular immune response, such as increased interferon (IFN)-gamma production, earlier peak T-cell proliferation, and increased number of T-helper cells. Humoral immune responses were not affected. However, perhaps the most intriguing observation was the fact that individuals receiving Se exhibited a more rapid clearance of the poliovirus. Moreover, poliovirus RT-PCR products isolated from the feces of supplemented subjects had fewer mutations. The Broome study [4] presents for the first time direct evidence for the involvement of Se status in determining viral replication and mutation rates in people. These data confirm in humans what Beck and colleagues [3] have been saying on the basis of their mouse models for several years, namely that Se (and vitamin E) exerts a powerful control over viral replication and mutation rates in vivo, such that a nutritional deficiency of either of these two dietary antioxidants enables RNA viruses to convert to more virulent strains. Additional study of the influence of Se and/or vitamin E on the evolution of viruses in large population groups appears warranted.
Reactive oxygen species (ROS) and reactive nitrogen species (RNS) in viral infection
Previous work has shown that ROS and RNS play a crucial role in the development of influenza-induced pathogenesis in the lung [27][28][29]. Akaike et al. [5] reported increased rates of mutation of an RNA (Sendai) virus that had been exposed to RNS such as nitric oxide (NO) and peroxynitrite (ONOO⁻). Both NO and O₂⁻ have been shown to increase the pathogenesis of an influenza virus infection in laboratory experiments [27,30,31]. Notably, an inducible form of NOS (nitric oxide synthase, or iNOS) is strongly activated by a variety of pathogens, including neurotropic, cardiotropic and pneumotropic viruses (e.g. coxsackievirus or influenza virus), causing an overproduction of NO in infected tissues [28]. Importantly, inhibition or elimination (knockout) of iNOS activity significantly reduces the pathological consequences of various viral infections [28], including pneumonitis caused by influenza virus in mice [27]. The work of these groups with RNS and of our group with Se and vitamin E deficiency strongly suggests that oxidative and/or nitrosative stress in the host tissues significantly contributes to the modification of viral RNA during virus replication. Therefore, a nutritional deficiency of an antioxidant that leads to increased production of ROS and/or RNS is probably responsible for viral mutations.
Host nutrition and viral genome changes: possible mechanism(s)

RNA viruses have adapted to fill all available host niches, from bacteria to plants, fish, birds, reptiles, amphibians and mammals. One method that viruses use to exploit a wide range of hosts is that of genetic diversity. A population of viruses exists as a large number of closely related mutants, rather than a single fixed sequence, and is therefore known as a 'quasispecies'. This variation occurs because of the error-prone replication of RNA viruses, the lack of viral proofreading enzymes and short generation times. During viral replication, the quasispecies will reach equilibrium and a consensus, or dominant, sequence will emerge. It has been suggested that maintaining a diverse quasispecies provides an evolutionary advantage to the virus and enables rapid adaptation to changing host environmental conditions.
Within the quasispecies structure, a variety of subpopulations can coexist; and by adjusting their numbers, the population as a whole can move rapidly through 'sequence space' from one 'fitness peak' to another. Thus, determination of the 'genomic sequence,' even for a carefully cloned population, is really an assessment of the dominant (or consensus) sequence. The dominant genotype might shift gradually, or it could change suddenly if environmental pressures are imposed [32]. Most variant sequences, however, are present as tiny minorities within the overall population; nevertheless, it has been shown that a given sequence that once had a selective advantage might persist through many replicative cycles, unobserved by phenotype or consensus sequencing, and might re-emerge rapidly when it is again favored by selective pressure. This phenomenon has been termed the 'memory' of viral quasispecies [33].
We hypothesize that increased oxidative stress in the host, induced by dietary deficiencies in antioxidants or by increased consumption of pro-oxidant nutrients, might provide a selective environment by which the more virulent genotype (already present in the viral quasispecies) is able to outcompete the original consensus sequence. Consequently, a new genotype becomes dominant, which has a more pathogenic phenotype.
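The competition scenario hypothesized here can be made concrete with a toy simulation. The sketch below is a minimal Wright-Fisher-style model in which a virulent minority genotype either stays rare (equal fitness, the nutritionally adequate host) or sweeps to dominance (a modest fitness advantage, standing in for the oxidatively stressed host). All parameters — fitness values, population size, starting frequency — are purely illustrative assumptions, not measurements from the experiments discussed in the text.

```python
import random

def takeover_fraction(minority_fitness, generations=300, pop_size=10_000,
                      start_freq=0.005):
    """Toy Wright-Fisher competition between two genotypes in a viral
    quasispecies. Returns the final frequency of the minority genotype.
    All numbers are illustrative, not fitted to any experiment."""
    freq = start_freq  # virulent genotype starts as a small minority
    for _ in range(generations):
        w = freq * minority_fitness
        p = w / (w + (1.0 - freq))  # selection: fitness-weighted sampling
        # binomial resampling models genetic drift in a finite population
        freq = sum(random.random() < p for _ in range(pop_size)) / pop_size
        if freq in (0.0, 1.0):
            break
    return freq

random.seed(2)
adequate = takeover_fraction(1.0)   # no selective pressure: stays a minority
deficient = takeover_fraction(1.2)  # stressed host favours the mutant
```

With equal fitness the minority merely drifts around its starting frequency, whereas even a modest advantage lets it replace the consensus within a few dozen generations — mirroring the proposed shift of the dominant genotype under oxidative stress.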
How does the nutritionally induced oxidative stress status of the host contribute to the selection of a new viral quasispecies? One possibility is an altered immune response. Our own work [3,11,23,34], and the work of many others [35][36][37], has demonstrated that host nutritional deficiency leads to impaired immune function. For example, a deficiency in Se can lead to decreased T cell function, impaired neutrophil chemotaxis and decreased antibody production [38]. An impaired immune response might permit a more virulent viral quasispecies, normally kept in check, to escape elimination by the immune response and therefore replace the previously dominant less-virulent genotype.
It is also possible that a shift of the intracellular redox balance toward oxidation permits faster viral replication, consequently increasing the size of the quasispecies population and permitting selection of rare variants. Nencioni et al. [39] reported that lower intracellular concentrations of reduced glutathione permitted influenza virus replication to higher titers in several cell lines, apparently by inhibiting expression of late viral proteins, including HA and M.
A third possibility is that an increase in nutritionally induced oxidative stress could lead to a new viral quasispecies by direct oxidative damage to the viral RNA, thus accelerating the mutation rate. In addition, the oxidative damage to cell membranes and enzymes of the replication complex might also accelerate the viral mutation rate, thus leading to a new dominant viral quasispecies with altered pathogenicity.
To date, the precise mechanisms for selection of new viral variants in a host under nutritionally induced oxidative stress are not known. However, we would propose that several mechanisms are operating together to influence the outcome. Thus, both immune dysfunction and oxidative damage to the viral RNA might be occurring together to drive the selection of a new viral quasispecies. Figure 1 presents a schematic of the hypothesis put forward by our data. The viral quasispecies (in which the consensus or dominant genotype is avirulent) is inoculated into either a nutritionally adequate or nutritionally deficient host. However, within the quasispecies is a small minority population of virus with pathogenic potential. Replication of the viral quasispecies within a nutritionally adequate animal (not oxidatively stressed) results in the dominant consensus genotype remaining dominant and therefore no disease is induced. However, replication of the viral quasispecies within a nutritionally deficient host (oxidatively stressed) leads to a much different outcome. The previous minority genotype is now able to outcompete and replace the previously dominant genotype. This might be due to impaired immune function as a result of the nutritional deficiencies, enabling the minority genotype to escape immune clearance. In addition, oxidative damage to intracellular structures might favor the replication of the minority genotype, again enabling the expression of a new viral variant, which now replaces the previous consensus sequence. Further, the mutation rate might be increased by direct damage to viral RNA, resulting in faster emergence of new genotypes. These mechanisms are not mutually exclusive and might work together.
Concluding remarks
The old nutritional adage 'You are what you eat!' appears to have found novel application in our work relating host diet to viral virulence. By using relatively simple nutritional manipulations we and others were able to increase the oxidative stress in our host animals either by withholding crucial cellular antioxidants from their diets (e.g. selenium or vitamin E) or by feeding with excess amounts of a pro-oxidant nutrient (e.g. iron). All techniques tested to increase oxidative stress in host animals led to the common outcome of increased viral virulence with reproducible genome mutations found in two RNA viruses: coxsackievirus and influenza. The demonstration that this phenomenon occurs within two different viral RNA families suggests that host nutritional deficiencies can have an effect on several different viral infections. These results represent a new paradigm for the interaction between host nutritional status and the emergence of new viral diseases in the human population. Widespread nutritional deficiencies occur in many developing countries, which are frequently the site of emergence of new viral diseases as well as old viral diseases with new pathogenic properties. We suggest that host nutritional status be considered when studying the causes for viral emergence, and that adequate nutrition of the population is an important form of protection against the emergence of new viral pathogens.
On a Twisted Version of Linnik and Selberg's Conjecture on Sums of Kloosterman Sums
We generalise the work of Sarnak-Tsimerman to twisted sums of Kloosterman sums and thus give evidence towards the twisted Linnik-Selberg Conjecture.
Introduction
The study of Kloosterman sums
$$S(m, n; c) = \sum_{\substack{a \bmod c \\ (a,c)=1}} e\!\left(\frac{ma + n\bar{a}}{c}\right),$$
where $e(z) = e^{2\pi i z}$ and $a\bar{a} \equiv 1 \bmod c$, is interesting for a variety of reasons. One of these reasons is their connection to the spectral theory of automorphic forms. In particular the sign changes of $S(m, n; c)$, for $c$ varying in the arithmetic progression $c \equiv 0 \bmod s$, are related to the Selberg conjecture about the smallest positive eigenvalue of the Laplacian on the space $\Gamma_0(s) \backslash \mathbb{H}$. Concretely we have that the smallest positive eigenvalue $\lambda_1^s \ge \frac{1}{4}$ if and only if the following conjecture holds (see [13, Theorem 16.9]).
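For small moduli the sum can be evaluated directly from the definition; the sketch below is a straightforward $O(c)$ implementation in Python (function name and structure are my own, not from the paper):

```python
import cmath
import math

def kloosterman(m, n, c):
    """S(m, n; c) = sum over a mod c with gcd(a, c) = 1 of e((m*a + n*abar)/c),
    where abar is the inverse of a mod c and e(z) = exp(2*pi*i*z).
    Direct O(c) evaluation, fine for small moduli."""
    total = 0j
    for a in range(1, c + 1):
        if math.gcd(a, c) != 1:
            continue
        abar = pow(a, -1, c)  # modular inverse (Python 3.8+)
        total += cmath.exp(2j * math.pi * (m * a + n * abar) / c)
    return total
```

The values are real up to rounding (pairing $a$ with $-a$ conjugates each term); for instance $S(1, 1; 2) = e(2/2) = 1$.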
In this paper however, we are interested in the sharp cut-off variant of the above conjecture. The first non-trivial progress towards this conjecture was made by Kuznetsov [17], who managed to prove
$$\sum_{c \le C} \frac{1}{c} S(m, n; c) \ll_{m,n} C^{\frac{1}{6}} \log(2C)^{\frac{1}{3}}, \tag{1.1}$$
by exploiting the Kuznetsov trace formula (see Proposition 6), which was established in the same paper. The bound (1.1) is still the best known bound to date and the Kuznetsov trace formula has become a very powerful tool in a variety of contexts. In their paper [22] Sarnak-Tsimerman have made the dependence on $m, n$ in (1.1) explicit and moreover achieved a non-trivial bound in the harder 'Selberg' range ($C \le |mn|$). Their result has further been generalised to the arithmetic progressions $c \equiv 0 \bmod s$ by Ganguly-Sengupta [10], and to $c \equiv a \bmod r$ with $(a, r) = 1$ by Blomer-Milićević [1]. Recently Kiral-Young [16] have indicated a simple approach which allows one to incorporate both congruence conditions $c \equiv 0 \bmod s$ and $c \equiv a \bmod r$ simultaneously (assuming $(r, as) = 1$).
Motivated by an application to the efficiency of a certain universal set of quantum gates, Browning-Kumaraswamy-Steiner [3] have proposed the following twisted version of the Linnik-Selberg conjecture.
In this paper we are concerned with establishing some progress towards this conjecture. Before we state our results we shall introduce some simplifying notation: $F \lessapprox G$ means $|F| \le K_\epsilon (Cmns(1 + |\alpha|))^\epsilon G$ for some positive constant $K_\epsilon$, depending on $\epsilon$, and every $\epsilon > 0$.

Theorem 1. Let $C \ge 1$, $\alpha \in \mathbb{R}$, $s \in \mathbb{N}$ and $m, n \in \mathbb{Z}$ with $mn > 0$, $s \ll \min\{(mn)^{\frac{1}{4}}, C^{\frac{1}{2}}\}$, where $Y_t$ is the Bessel function of the second kind of order $t$, $\theta$ is the best known progress towards the Ramanujan-Selberg conjecture, and the summation $\sum_{t_h}$ is over all exceptional eigenfunctions $h$ with eigenvalue $\frac{1}{4} + t_h^2$ of the Laplacian for the manifold $\Gamma_0(s) \backslash \mathbb{H}$, where $\rho_h(n)$ denotes its $n$-th $L^2$-normalised Fourier coefficient.
A few remarks are in order about this theorem. First we should remark that one has $\theta \le \frac{7}{64}$ by the work of Kim-Sarnak [15]. Next we observe the appearance of a main term, which is contrary to [10]. Indeed, the latter has an erroneous treatment of the exceptional spectrum^a. One may further analyse the main term by making use of asymptotics of the Bessel function of the second kind $Y_t(y)$ for $y \to 0$. However the reader familiar with Bessel functions may know that these asymptotics behave quite differently for $t = 0$ and $t > 0$ and therefore it would generate uniformity issues in the parameter $s$. One may also bound the main term altogether. In this case one gets the following corollary.
^a The compact domain to which they apply the mean value theorem of calculus varies and this may not be circumvented, since if the exceptional spectrum is non-empty then the function they consider has a pole at 0.
As far as the restrictions go in Theorem 1, they are not very limiting. Indeed if $s \ge C^{\frac{1}{2}}$, then the Weil bound, which gives the bound $s^{-1+\epsilon} C^{\frac{1}{2}+\epsilon}$, is more than sufficient, and if $(mn)^{\frac{1}{4}} \le s \le C^{\frac{1}{2}}$ then one is automatically in the easier Linnik range and for instance the holomorphic contribution is negligible. One may also consider $mn < 0$, which would lead one to analyse different Bessel transforms, or incorporate the further restriction $c \equiv a \bmod r$ with $(a, r) = 1$. However, for the latter, an analogue to Proposition 9 for the group $\Gamma_0(s) \cap \Gamma_1(r)$ has to be derived. In fact the associated Kloosterman sums for this group admit further cancellation, thus leading to stronger results in terms of the parameter $r$. Investigations of this sort shall be considered by the author in future work.
For $|\alpha| < 1$ one may improve Theorem 1 slightly, thereby recovering the results of [22] and [10]. The main goal in [3] was to show that it is possible to improve Sardari's work on covering exponents for $S^3$ [21] under the assumption that Conjecture 2 holds. It is unfortunate that the derived upper bounds in Theorems 1 and 3 are not strong enough to offer any unconditional improvement. The reason behind this is that in the application one is very deep in the Selberg range, for which the trivial bound is still the best known bound. Discussions on exactly why the Selberg range poses great difficulties can be found in [22].
Finally, we would like to point out a little gem that is hidden inside Theorem 1. This has as a consequence that either there is cancellation in the sign or very often the inner exponential sum is much smaller than $\sqrt{c}$.
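The dichotomy can be illustrated numerically: the classical Weil bound $|S(m,n;c)| \le d(c)\,(m,n,c)^{\frac{1}{2}}\sqrt{c}$ caps each individual term, so any saving in the partial sums $\sum_{c \le C} S(m,n;c)/c$ beyond the trivial estimate must come from sign changes. The sketch below (all function names are my own) checks the Weil bound directly and accumulates the partial sum for a small range:

```python
import cmath
import math

def S(m, n, c):
    # Direct evaluation of the Kloosterman sum; the imaginary part vanishes
    # up to rounding, so we return the real part.
    return sum(cmath.exp(2j * math.pi * (m * a + n * pow(a, -1, c)) / c)
               for a in range(1, c + 1) if math.gcd(a, c) == 1).real

def d(c):
    # Number of divisors of c (naive count, fine for small c).
    return sum(1 for k in range(1, c + 1) if c % k == 0)

m, n = 1, 1
for c in range(1, 151):
    # Weil bound: |S(m, n; c)| <= d(c) * gcd(m, n, c)^(1/2) * c^(1/2).
    weil = d(c) * math.sqrt(math.gcd(m, math.gcd(n, c))) * math.sqrt(c)
    assert abs(S(m, n, c)) <= weil + 1e-6

# The partial sums stay far smaller than the trivial estimate
# sum_{c <= C} d(c)/sqrt(c) would allow, reflecting sign cancellation.
partial = sum(S(m, n, c) / c for c in range(1, 151))
```

Against the trivial bound of order $C^{\frac{1}{2}+\epsilon}$ for this range, the observed partial sum is small, consistent with Kuznetsov's $C^{\frac{1}{6}}$ bound.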
Acknowledgements. I would like to thank my supervisors Andrew Booker and Tim Browning for the detailed read-throughs and comments on earlier versions of this paper as well as Mehmet Kiral, Matt Young and Nick Andersen for discussions on this and related topics. This material is partially based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while the author was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Spring 2017 semester.
Holomorphic and Maass Forms
In this section we set up some notation and recall necessary facts about holomorphic and Maass forms.
Let $\mathbb{H}$ be the upper half-plane and let $\mathrm{SL}_2(\mathbb{R})$ act on it by Möbius transformations. We consider the congruence subgroup $\Gamma_0(s)$. For a given cusp $\mathfrak{a}$ of $\Gamma_0(s)$ we fix a matrix $\sigma_\mathfrak{a} \in \mathrm{SL}_2(\mathbb{R})$, such that $\sigma_\mathfrak{a} \infty = \mathfrak{a}$ and if $\Gamma_\mathfrak{a}$ denotes the stabilizer of $\mathfrak{a}$ then $\sigma_\mathfrak{a}^{-1} \Gamma_\mathfrak{a} \sigma_\mathfrak{a} = \Gamma_\infty$, where $\Gamma_\infty = \{\pm T^n \mid n \in \mathbb{Z}\}$ is the stabilizer at $\infty$ and $T = \left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$. Such a matrix is called a scaling matrix for the cusp $\mathfrak{a}$.
The space of cuspidal Maass forms consists of the real-analytic square integrable eigenfunctions of the Laplacian on the space $L^2(\Gamma_0(s) \backslash \mathbb{H})$ with respect to the inner product (2.1). Such a Maass form $h$ possesses a Fourier expansion involving the Whittaker function $W_{a,b}$, where $z = x + iy$ and $\frac{1}{4} + t_h^2$ is the eigenvalue with respect to the Laplacian. A theory of Hecke operators as well as Atkin-Lehner theory can be developed for this space. In particular for a newform $h$ the eigenvalue with respect to the $n$-th Hecke operator is $\lambda_h(n)$, which furthermore satisfies $\lambda_h(n) \ll_\epsilon n^{\theta+\epsilon}$, where $\theta = \frac{7}{64}$ is admissible by the work of Kim and Sarnak [15].
We shall require a special basis of this space, which has been worked out in [2]^b. For a Maass newform of level $r \mid s$ define the arithmetic functions, where $\chi_0$ is the trivial character modulo $r$, and the associated multiplicative function. Write $d = d_1 d_2$ with $d_1$ square-free, $d_2$ square-full and $(d_1, d_2) = 1$; for $l \mid d$ one defines the corresponding coefficients. An orthonormal basis of Maass forms of level $s$ is then given by (2.3). We furthermore need a bound on the size of the Fourier coefficient of an element of the above basis: one obtains (2.4), where we have made use of (2.2) and $\lambda_h(n) \ll_\epsilon n^{\theta+\epsilon}$. Since $h$ is new of level $r$, but normalised with respect to the inner product of level $s$ (2.1), we further have (2.5) due to Hoffstein and Lockhart [11].
Other Maass forms which are important in our discussion are the Eisenstein series associated to a cusp $\mathfrak{c}$. They are defined for $\mathrm{Re}(\tau) > 1$ and admit a meromorphic extension to the whole complex plane. They also admit a Fourier expansion of the same shape, which at the point $\tau = \frac{1}{2} + it$ we write as
$$E_\mathfrak{c}\left(z, \tfrac{1}{2} + it\right) = \delta_{\mathfrak{c}=\infty}\, y^{\frac{1}{2}+it} + \varphi_\mathfrak{c}(t)\, y^{\frac{1}{2}-it} + \sum_{n \neq 0} \varphi_\mathfrak{c}(n, t)\, W_{0,it}(4\pi|n|y)\, e(nx).$$
^b Corrections can be found at http://www.uni-math.gwdg.de/blomer/corrections.pdf

For holomorphic forms the situation is quite analogous. A holomorphic cusp form of weight $k \in \mathbb{N}$ and level $s$ is a holomorphic function $h : \mathbb{H} \to \mathbb{C}$ that satisfies $j(\gamma, z)^{-k} h(\gamma z) = h(z)$ for all $\gamma \in \Gamma_0(s)$ and is square integrable with respect to the inner product (2.6). They admit a Fourier expansion and there is a theory of Hecke and Atkin-Lehner operators. For $h$ a newform, $\lambda_h(n)$ is the eigenvalue of the $n$-th Hecke operator, which furthermore satisfies the bound $\lambda_h(n) \ll_\epsilon n^{\frac{k-1}{2}+\epsilon}$ due to Deligne [4], [5] and Deligne-Serre [6]. Analogous to the Maass case we have a nice orthonormal basis (2.7) of the space $S_k(s)$ of holomorphic cusp forms of level $s$ and weight $k$. We furthermore need a bound on the size of the Fourier coefficients of an element of the above basis: we have (2.8), where we have made use of the Deligne bound as well as (2.2). We further have the bound (2.9) when $h$ is new of level $r$, but normalised with respect to (2.6); see for example [18, pp. 41-42].
Proof of the Theorem
We shall prove a dyadic version of Theorem 1 (Theorem 5), from which we shall then deduce Theorem 1.
We follow the argument in [22] and [10], and replace the sharp cut off with a smooth cut off and then use Kuznetsov's trace formula. We shall require the following version of the Kuznetsov trace formula.
Proposition 6 (Kuznetsov's trace formula). Let $s \in \mathbb{N}$ and $m, n \in \mathbb{Z}$ be two integers with $mn > 0$. Then for any $C^3$-class function $f$ with compact support in $]0, \infty)$ one has an identity expressing
$$\sum_{c \equiv 0 \bmod s} \frac{1}{c} S(m, n; c)\, f\!\left(\frac{4\pi\sqrt{mn}}{c}\right)$$
in terms of the Maass, Eisenstein and holomorphic spectra. Here $\sum_h$ is a sum over an orthonormal basis of Maass forms with respect to the group $\Gamma_0(s)$ and the Bessel transforms are given in terms of $J_t(y)$, the Bessel function of the first kind of order $t$.
From now on let $f(x) = e^{i\alpha x} g(x)$ with $g$ a smooth real-valued bump function adapted to a parameter $X$, to be chosen at a later point. We now wish to compare the smooth sum (3.1) with the sharp cut-off in Theorem 5. By making use of the Weil bound for the Kloosterman sum we find that their difference is suitably bounded. Now we apply Kuznetsov (see Proposition 6) to the smooth sum (3.1). This leads to an expression whose terms we shall deal with separately. In what follows we shall use many estimates on the Bessel transforms of $f$, which we summarise here, but postpone their proof until Section 4.
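One standard way to build such a smooth cutoff $g$ is from the classical $e^{-1/x}$ construction. The sketch below is illustrative only: the support $[X/2, 3X]$ and plateau $[X, 2X]$ are chosen for concreteness and need not match the paper's exact conditions, which also prescribe the size of the derivatives of $g$.

```python
import math

def psi(x):
    # Smooth on the reals and vanishing to all orders at 0:
    # the standard seed for C^infinity bump constructions.
    return math.exp(-1.0 / x) if x > 0 else 0.0

def step(x):
    # Smooth monotone transition: 0 for x <= 0, 1 for x >= 1.
    return psi(x) / (psi(x) + psi(1.0 - x))

def g(x, X):
    # Smooth bump: supported in [X/2, 3X] and identically 1 on [X, 2X].
    # Illustrative support/plateau, not the paper's exact choice.
    return step((x - X / 2) / (X / 2)) * step((3 * X - x) / X)

# f(x) = exp(1j * alpha * x) * g(x, X) is then a smooth, compactly
# supported test function of the kind fed into the trace formula.
```

The denominator in `step` never vanishes, since `psi(x)` and `psi(1 - x)` cannot both be zero, so the function is well defined on all of the real line.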
Lemma 7. Let f be as in the beginning of Section 3. Then the Bessel transforms of f satisfy the estimates (3.3)-(3.10), where 1_I is the characteristic function of the interval I; the last of these hold for |t| ≥ 1 under an additional restriction on the size of |t|. One should mention that similar estimates have been derived previously by Jutila [14] for a slightly different class of functions and ranges.
3.1. The Continuous Spectrum. The goal of this section is to prove the bound (3.11) on the continuous contribution. For this endeavour we need the following lemma.
Lemma 8. Let s = s⋆s_2 with s⋆ square-free, and let m, n be positive integers. Then the required bound holds.

Proof. This is part of [1, Lemma 1].
Substituting this inequality into (3.11) yields a bound whose integral we split into three parts I_1, I_2, I_3. For I_1 we use (3.3), and for I_2 we use (3.3) again; for I_3 we use (3.9). Each part gives an admissible contribution, which concludes the proof of (3.11).
3.2. The Holomorphic Spectrum. The goal of this section is to prove the following inequality: (3.12) H_s(m, n; f) ≪ 1 + X.
In order to prove this inequality we choose our orthonormal basis as in (2.7), and make use of (2.8), (2.9), and dim S_k(r) ≪ rk. The resulting sum over the weight k we split up into k ≤ 9 and k > 9. The range k ≤ 9 is handled with (3.3); for the sum over even k > 9 we use (3.5), (3.6), (3.7) and (3.8) in the respective subranges. The claim (3.12) now follows.
3.3. The Non-Holomorphic Spectrum. In this section we shall prove two estimates, (3.13) and (3.14), on M_s(m, n; f).
We shall require the following proposition.
Proposition 9. Let A ≥ 1 and n ∈ N. Then we have, for the group Γ_0(s), the following bound.

Let us first prove (3.13). We split the summation over t_h in M_s(m, n; f) into various ranges I_1, ..., I_4, which are treated individually (one of them being a complementary range of the form [·, ∞) \ I_2). The first way to treat the range I_1 is to choose the basis (2.3) and use (2.4) as well as (2.5).
A second way to treat the range I_1 is to apply the Cauchy-Schwarz inequality in conjunction with Proposition 9 and (3.3).
The range I_2 we treat in exactly the same manner, arriving at analogous inequalities.
The range I_3 we further split into dyadic ranges. Again we can estimate the contribution of each dyadic piece, for l > log_2(max{1, X^{1/2}}).
Combining (3.19), (3.20) and (3.21) we find that the contribution stemming from l ≤ log_2(max{1, X^{1/2}}) is admissible, for a sufficiently small δ > 0. For the contribution from I_4 we first note that |t_h| ≤ θ for t_h ∈ I_4 by [15]; we then insert (3.4). Let us now turn our attention to (3.14). This time we split up into the intervals I_1 and I_2. By making use of (3.3) we find that the contribution from I_1 is admissible.
As before we split up I_2 into dyadic ranges I_2(l) = [2^l, 2^{l+1}], l ≥ 0, and use the estimate which follows from (3.9) and (3.10). Thus we find that the contribution from I_2 is admissibly bounded (3.26). Theorem 1 now follows at once by estimating the range of small c (below a threshold involving 1 + |α|) with the bounds above; for the remaining range of c up to C we use Theorem 5. Furthermore note that the relevant quantities are bounded uniformly for t ≤ θ, and hence the claimed bound holds.
This proves Theorem 1. In order to prove Corollary 2 we need to show the corresponding bound when C ≥ √(mn); this follows from two further estimates. Theorem 3 is proved analogously.
Transform estimates
In this section we prove the upper bounds on the transforms of f claimed in Lemma 7. Since the estimates are very different in nature we split them up into multiple lemmata. We generally follow the arguments of [22] and [7], but tweak them to account for the twist we have introduced. First we shall need two preliminary lemmata, which will be used frequently.
Proof. We integrate by parts, from which the first statement is trivially deduced.
Lemma 11. Let G, H ∈ C^1([A, B], C) and assume G has a zero and H′ has at most K zeros. Then we have the corresponding bound, using ‖H′‖_1 ≤ 2(K + 1)‖H‖_∞, which follows by splitting the integral into intervals on which H′ has constant sign.
Lemma 12. Let f be as in the beginning of Section 3 and |α| ≤ 1. Then we have the stated bounds.

Proof. We follow the proof of Lemma 7.1 in [7] and Proposition 5 in [22]. To prove the first statement we use the Bessel representation J_t(x) = (1/2π) ∫_0^{2π} e^{i(x sin ξ − tξ)} dξ, valid for integer order t.
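The representation used here, for integer order t, is J_t(x) = (1/2π)∫_0^{2π} e^{i(x sin ξ − tξ)} dξ. A minimal numerical sketch of this formula (illustrative only; equispaced quadrature converges rapidly for a periodic integrand):

```python
import cmath
import math

def bessel_j(t, x, steps=20000):
    """J_t(x) for integer order t via the integral representation
    (1/2pi) * int_0^{2pi} e^{i(x sin xi - t xi)} dxi,
    evaluated by an equispaced Riemann sum (the integrand is
    2pi-periodic, so this converges quickly)."""
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        xi = k * h
        total += cmath.exp(1j * (x * math.sin(xi) - t * xi))
    # The imaginary part cancels; J_t(x) is real for real x.
    return (total * h / (2 * math.pi)).real
```

For instance, bessel_j(0, 0) returns 1, and bessel_j(0, x) changes sign near the first zero of J_0 at x ≈ 2.4048.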
Lemma 13. Let f be as in the beginning of Section 3 and |α| ≥ 1. Then we have the stated bounds.

Proof. As before we find f(t) ≪ 1, and for X ≥ 1 we obtain the refined bound. We also require some more refined estimates; for this we consider the different regions of the J-Bessel function.
Lemma 14. Let f be as in the beginning of Section 3 and |α| ≤ 1. Then for t ≥ 8 we have the stated bounds, where 1_I is the characteristic function of the interval I.
Proof. We require some uniform estimates on the J-Bessel functions of real order. For small argument we have exponential decay; the left-hand side of this estimate follows from the fact that the first zero of the Bessel function of order t is > t, and the right-hand side follows from [24, pp. 252-255]. We will also make use of Langer's formulas; see [9, pp. 30, 89]. In the first formula one sets w = √(x^2/t^2 − 1) and z = t(w − arctan(w)); the second covers the complementary range. Finally, for the transitional range |x − t| ≤ t^{1/3} we have a uniform bound by [24, pp. 244-247]. The first inequality of the lemma follows directly from (4.2). Note that if X ≤ 1/2, then this covers everything, thus we may assume X ≥ 1/2 from now on. For the range [t/2, t − t^{1/3}] we use the decay estimate ≪ e^{−z}, valid for all z ≥ 0, and find an admissible contribution.
For the range t − t^{1/3} ≤ y ≤ t + t^{1/3} we use (4.5).
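The bound used in this transitional range is of the shape J_t(x) ≪ t^{−1/3} for |x − t| ≤ t^{1/3}. This decay can be observed numerically with the integer-order integral representation (an illustrative sketch, not part of the proof):

```python
import cmath
import math

def bessel_j(t, x, steps=40000):
    """J_t(x) for integer t via the integral representation
    (1/2pi) * int_0^{2pi} e^{i(x sin xi - t xi)} dxi."""
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        xi = k * h
        total += cmath.exp(1j * (x * math.sin(xi) - t * xi))
    return (total * h / (2 * math.pi)).real

# J_t(t) * t^(1/3) stays bounded (it tends to a constant ~ 0.447),
# illustrating the t^(-1/3) bound at the transitional point x = t:
for t in (8, 16, 32):
    val = bessel_j(t, t) * t ** (1 / 3)
    assert 0.35 < val < 0.5
```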
We are left to deal with the range t + t^{1/3} ≤ y. We make a change of variable y → ty, after which we are left to estimate ∫ J_t(ty) e^{iαty} g(ty) dy/y.
We make use of (4.3) and find z ≫ 1 in this range of y. By making use of Langer's formula (4.3) we introduce an error of admissible size. Since z ≫ 1 we are able to make use of the classical estimates (4.7). Inserting (4.7) into (4.6) introduces another error, where now w = √(y^2 − 1) and z = t(w − arctan(w)). We have z ≫ t·min{w^3, w} and thus we are able to estimate the error terms, where we have made use of Lemmata 10 and 11 with F(y) = y(y^2 − 1)^{−7/4} and G(y) = g(ty), respectively F(y) = y^{−5/2} and G(y) = g(ty). This is again sufficient. For the main term we have to consider the integral of e^{it(±ω(y)+αy)} g(ty) (y^2 − 1)^{−1/4}, where ω(y) = √(y^2 − 1) − arctan(√(y^2 − 1)). We would like to integrate t(±ω′(y) + α) e^{it(±ω(y)+αy)} by parts, but for the sign '− sign(α)' and y_0 = (1 − α^2)^{−1/2} we have ω′(y_0) = |α| and we pick up a stationary phase. Let us first assume α is close to 0, such that y_0 < 1 + t^{−2/3}. For |α| ≪ t^{−1/3} or the sign 'sign(α)' we have |±ω′(1 + t^{−2/3}) + α| ≫ t^{−1/3}, and by means of Lemmata 10 and 11 with F(y) = (±ω′(y) + α) e^{it(±ω(y)+αy)}, G(y) = g(ty) and H(y) = [(±ω′(y) + α)(y^2 − 1)^{1/4} y]^{−1} we get a satisfying contribution of t^{−1}. So from now on we can assume α > 0, α ≥ kt^{−1/3} for some small constant k, and the sign being '−'. We treat first the case α < 1, where we make use of a Taylor expansion around y_0. We split up the integral (4.8) into three parts I_1, I_2, I_3 corresponding to the intervals [1 + t^{−2/3}, y_0 − A], [y_0 − A, y_0 + A], [y_0 + A, ∞) respectively. For I_1 and I_3 we again make use of Lemmata 10 and 11 with F(y) = (ω′(y) − α) e^{it(ω(y)−αy)}, G(y) = g(ty) and H(y) = [(ω′(y) − α)(y^2 − 1)^{1/4} y]^{−1}. We have that R′(x) is decreasing and positive, hence R(x) is increasing with a zero at y_0; furthermore R″(x) is increasing and negative.
We conclude by bounding the second factor, and we find that the contribution from I_3 is admissible. We claim that −R(x)(x^2 − 1)^{1/4} first increases and then decreases in [1, y_0]. For this it suffices to prove that its derivative has exactly one zero in that interval and is positive at 1 + ε. Note that since our function is zero at the endpoints, by Rolle's theorem the derivative has at least one zero; moreover the derivative is clearly positive at 1 + ε. Assume now that we have two zeros y_1, y_2 in [1, y_0]; they both satisfy the same quadratic equation, and by Vieta's formulas we obtain a contradiction. With this information we conclude that if α ≥ Kt^{−1/3}, for some large constant K, the contribution from I_1 is admissible. Furthermore we estimate the integral over I_2 trivially; for K large enough this shows that (4.8) is suitably bounded, and hence the contribution from I_3 is as well.

Lemma 16. Let f be as in the beginning of Section 3 and |α| ≤ 1. Then we have the stated bounds.

Proof. We follow the proof of Lemma 7.1 in [7] and Proposition 5 in [22]. To prove the first inequality we use the representation involving ∫_0^∞ cos(x cosh ξ) cos(2tξ) dξ.
By partial integration it suffices to bound the latter integral. For X ≥ 1 it is bounded by an admissible quantity, and for X ≤ 1 it is bounded by ≪_ε 1 + |log(X)|. The first inequality follows immediately.
The final two inequalities require some more work. Note that f(t) is even in t, so we may restrict ourselves to t ≥ 1. We make the substitution x → 2tx in the definition of f(t) and use the uniform asymptotic expansion of the function G_{iν}(νs) from [8, pp. 1009-1010] with n = 0.
Transformation of CMML to AML presenting with acute kidney injury
ABSTRACT Characterized by bone marrow dysplasia and peripheral blood monocytosis, chronic myelomonocytic leukemia (CMML) is one of the most aggressive chronic leukemias and has a propensity for progression to acute myeloid leukemia (AML). Patients with newly diagnosed AML generally present with symptoms related to complications of pancytopenia but can also present with renal insufficiency. We present a 79-year-old male with a past medical history of CMML and chronic kidney disease stage 3 (baseline creatinine 1.8 mg/dL) who presented with one day of inability to urinate and, over 3 months, a 20-lb unintentional weight loss, fatigue, and bone pain. Laboratory evaluation revealed leukocytosis of 88.5 × 10³/µL (normal 4.8–10.8 × 10³/µL) with 24.0% monocytes on differential, creatinine 2.94 mg/dL (baseline 1.7–1.9 mg/dL), uric acid 19.8 mg/dL, potassium 4.0 mmol/L, phosphorus 4.0 mg/dL, calcium 9.2 mg/dL, and albumin 3.2 g/dL. Urinalysis was significant for protein 200 mg/dL, 20/LPF granular casts, and 7/LPF hyaline casts. Bone marrow biopsy revealed 20–30% blasts with monocytic features of differentiation, consistent with acute myeloid leukemia. Computed tomography (CT) of the abdomen and pelvis showed splenomegaly with retroperitoneal and pelvic lymphadenopathy. Kidney failure can complicate the presentation of AML but can be rapidly reversible with treatment. In patients with CMML who have progressive renal insufficiency and hyperuricemia, there should be a high index of suspicion for progression to AML.
Introduction
CMML, a myeloid neoplasm with features of myelodysplastic syndromes (MDS) and myeloproliferative neoplasms (MPN), is characterized by dysplasia in one or more hematopoietic cell lineages, abnormal production and accumulation of monocytic cells, and an elevated risk of transforming into secondary acute myeloid leukemia (AML) [1]. Prognosis is extremely variable in CMML. The rate of leukemic transformation to AML has an approximate incidence of 15%-20% over five years [2]. In most prognostic studies, the percentage of blasts in peripheral blood and bone marrow appear to be the most important factors in determining survival [3]. The prognostic grading system proposed by the World Health Organization (WHO) splits CMML into CMML-0, CMML-1, and CMML-2 based on the blast cell count (see Figure 4) [4].
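The WHO blast-count grading just described can be written out explicitly. The sketch below encodes the commonly cited WHO 2016 cut-offs (PB = peripheral blood, BM = bone marrow); it is purely illustrative and not a clinical tool:

```python
def cmml_subtype(pb_blast_pct, bm_blast_pct, auer_rods=False):
    """Illustrative WHO 2016 blast-count grading of CMML:
    CMML-2 for 5-19% PB blasts, 10-19% BM blasts, or Auer rods;
    CMML-1 for 2-4% PB or 5-9% BM blasts; otherwise CMML-0.
    Blasts >= 20% indicate transformation to AML."""
    if pb_blast_pct >= 20 or bm_blast_pct >= 20:
        return "AML (transformed)"
    if auer_rods or pb_blast_pct >= 5 or bm_blast_pct >= 10:
        return "CMML-2"
    if pb_blast_pct >= 2 or bm_blast_pct >= 5:
        return "CMML-1"
    return "CMML-0"
```

On the values reported in this case (0.6% peripheral blasts, 20–25% marrow blasts), the rule returns "AML (transformed)".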
Patients with AML generally present with symptoms related to complications of pancytopenia, including weakness and easy fatigability, infections, and/or hemorrhagic findings such as gingival bleeding and menorrhagia [5]. More rarely, acute myeloid leukemia can present with acute kidney injury (AKI). Different mechanisms can contribute to AKI in this setting, including hypoperfusion, acute tubular necrosis, kidney infiltration by leukemia, intrarenal leukostasis, tumor lysis syndrome, hyperuricemia, lysozymuria, and obstruction (see Figure 5) [6]. A case report of spontaneous tumor lysis syndrome (sTLS) secondary to the transformation of CMML to AML highlights the importance of recognizing sTLS as a cause of renal failure and electrolyte disturbance before cancer treatment begins [7]. Although our case does not meet the criteria for laboratory or clinical tumor lysis syndrome based on the Cairo-Bishop criteria, ours is the only other reported case of severe hyperuricemia and AKI in the setting of CMML transformation to AML. It is important that internists consider transformation to AML in patients with CMML presenting with AKI and hyperuricemia.
Case description
A 79-year-old male with a past medical history of CMML diagnosed 4 years prior, anemia related to CMML and CKD treated with an erythropoiesis-stimulating agent, hypertension, and chronic kidney disease stage 3 (baseline creatinine 1.8 mg/dL) presented with one day of decreased urination, along with an unintentional 20-pound weight loss and fatigue over the preceding three months.
The patient's blood pressure was 141/84 mmHg, pulse 104 beats per minute, respiratory rate 16 breaths per minute, SpO₂ 96%, and temperature 36.6°C. The physical exam did not reveal any abdominal tenderness to palpation but did reveal splenomegaly. There was no palpable cervical, supraclavicular, axillary, or inguinal lymphadenopathy.
Laboratory evaluation was significant for profound leukocytosis, 88.5 × 10³ cells/mm³ with 24.0% monocytes, compared to his baseline WBC of 4.5–7 × 10³ cells/mm³ over the preceding 4 years. Additional laboratory abnormalities included uric acid 19.8 mg/dL and creatinine 2.94 mg/dL, as well as potassium 4 mmol/L, phosphorus 4 mg/dL, calcium 9.2 mg/dL, and albumin 3.2 g/dL. Urinalysis was significant for protein 200 mg/dL, 20/LPF granular casts, and 7/LPF hyaline casts. A renal ultrasound measured the left kidney as 10.2 cm long, with at least two cysts, the largest 3.5 cm, and without hydronephrosis. The right kidney measured 10.5 cm long, with one cyst at 2 cm, also without hydronephrosis. CT of the chest, abdomen, and pelvis identified splenomegaly with a splenic diameter of 14.6 cm. No renal calculi were appreciated. Also visualized were several borderline subcentimeter retroperitoneal and pelvic lymph nodes.
Peripheral blood smear revealed a myeloid predominance with left shift and a small blast population (0.6%), as well as monocytic phenotypic aberrance (see Figure 1). Subsequently, a bone marrow biopsy was performed, which identified 20-25% CD34+ blasts and morphologic features consistent with AML with monocytic differentiation (see Figures 2 and 3). Flow cytometry showed prominent monocytes which demonstrated loss of expression of HLA-DR and CD14 as well as coexpression of CD56. Next-generation sequencing (NGS) revealed a pathogenic mutation in the NPM1 gene.
With presenting hyperuricemia and acute-on-chronic kidney injury, he received one dose of rasburicase 3 mg IV given concern for early TLS, with concomitant initiation of daily allopurinol. He commenced cytoreduction with hydroxyurea 1000 mg twice daily, and after 48 hours his white blood cell count and uric acid had down-trended to 48.5 × 10³ cells/mm³ and 5.5 mg/dL, respectively. Creatinine also trended down to 1.98 mg/dL.
Due to the patient's performance status and age, hematology offered reduced-intensity therapy consisting of azacitidine with or without venetoclax or best supportive care rather than intensive induction chemotherapy. Ultimately, the patient elected for home hospice services and passed away 10 days later.
Discussion
Renal dysfunction is a common presentation in patients with AML. It is usually the result of combined glomerular and tubular dysfunction and is associated with a poor prognosis. There are many different causes for renal injury, the most common of which are pre-renal or post-renal. Once these more common causes are ruled out, the focus can shift to intra-renal pathology. Pre-renal etiology was excluded, as our patient continued to have adequate blood pressure with mean arterial pressure greater than 65 mmHg in addition to continuous oral intake and adequate hydration. Post-renal etiology was also excluded, as there was an absence of hydronephrosis, renal calculi, or other pathology suggestive of obstruction on both renal ultrasound and CT of the chest, abdomen, and pelvis. Intra-renal causes must then be considered. Three major causes of intra-renal AKI include leukemic infiltration, hyperuricemia, and lysozymuria. One cause of glomerular dysfunction is direct infiltration of the kidneys by blasts, which can cause enlarged kidneys as a sign of leukemic infiltration [8]; this most commonly occurs in AML with monocytic differentiation, as in our patient. This particular AML subtype predisposes patients to granulocytic sarcomas, or chloromas, both terms used to describe an extramedullary tumor occurring in soft tissue or bone with the presence of atypical myeloid or monocytic blast cells [9]. Renal leukemic involvement is extremely rare (about 1%), although there are a few reported cases of renal failure secondary to diffuse bilateral infiltration [10]. In our case, there was no reported nephromegaly on CT imaging.
A second mechanism is an increase in lysozyme production, which is postulated to result from high concentrations of circulating monocytes and granulocytes. Normally, lysozyme is reabsorbed in the proximal convoluted tubule. However, an increased concentration of lysozyme can act as a direct tubular toxin, damaging the proximal tubule cells; this is similar to the tubular disorder in adult Fanconi syndrome [11]. An absolute monocytosis of 1 × 10⁹/L or greater that persists for more than 3 months is needed to meet the diagnosis of CMML. Our patient had a persistent absolute monocytosis of 21.1 × 10⁹/L. With such a significant absolute monocytosis, we postulate that lysozymuria was a contributing etiology of the AKI in our patient. As the gold-standard test is a kidney biopsy, it was decided not to pursue this, as the patient was frail and comfort-directed after the diagnosis of AML. There was no known cause for the patient's underlying chronic kidney disease (CKD) stage 3, with no history of diabetes mellitus or hypertension. Lysozymuria may actually have been the etiology of his CKD, given the significant monocytosis which preceded his CKD by 2 years.
Another postulated mechanism of kidney injury in our patient is hyperuricemia. In states of increased purine breakdown, such as leukemia, the insoluble uric acid load accumulates in the kidneys, leading to intrarenal precipitation. Most commonly, cell lysis and the increase in purine byproducts occur with chemotherapy and radiation. However, spontaneous TLS is not an uncommon event in AML, though it appears much less often described as a presenting feature of CMML transforming to AML, with only one case in the literature [7]. The Cairo-Bishop definition of tumor lysis syndrome consists of laboratory evidence of tumor lysis plus at least one clinical complication, which include creatinine ≥ 1.5 × the upper limit of normal, cardiac arrhythmia, or seizure. Laboratory tumor lysis syndrome requires two or more laboratory changes within three days before or seven days after cytotoxic therapy. These changes from baseline include a 25% increase in uric acid, a 25% increase in potassium, a 25% increase in phosphorus, and a 25% decrease in calcium. As our patient met only one of the laboratory criteria in conjunction with the AKI, he did not meet strict criteria for clinical tumor lysis syndrome.
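The laboratory criteria described above can be sketched as a simple check. Only the relative-change criteria stated in the text are encoded (the full Cairo-Bishop definition also includes absolute thresholds and a timing window), and the baseline values in the usage example are hypothetical, since the report does not give a baseline uric acid:

```python
def meets_lab_tls(baseline, current):
    """Count Cairo-Bishop laboratory changes as described in the text:
    >= 25% rise in uric acid, potassium, or phosphorus, or >= 25% fall
    in calcium; laboratory TLS requires two or more such changes.
    Sketch of the relative-change criteria only."""
    changes = 0
    for analyte in ("uric_acid", "potassium", "phosphorus"):
        if current[analyte] >= 1.25 * baseline[analyte]:
            changes += 1
    if current["calcium"] <= 0.75 * baseline["calcium"]:
        changes += 1
    return changes >= 2

# Hypothetical baseline; presenting labs from the case report.
baseline = {"uric_acid": 6.0, "potassium": 4.0, "phosphorus": 4.0, "calcium": 9.2}
current = {"uric_acid": 19.8, "potassium": 4.0, "phosphorus": 4.0, "calcium": 9.2}
# Only uric acid rose by >= 25%, so laboratory TLS criteria are not met.
```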
Even in the absence of tumor lysis syndrome, hyperuricemia can cause acute urate nephropathy with serum uric acid levels greater than 15 mg/dL [12,13]. It is postulated that uric acid may lead to renal insufficiency through two mechanisms. The first being obstruction of tubule lumens and the second being hindrance of renal venous blood flow [14]. Our patient had uric acid of 19.8 mg/dL supporting the possibility of uric acid nephropathy contributing to his renal pathology.
The etiology of his AKI was likely multifactorial. We postulate that lysozymuria and hyperuricemia both played a role in his progressive renal decline. Further supporting hyperuricemia and lysozymuria as causes of our patient's AKI was his dramatic improvement in kidney function with treatment of his hyperuricemia and leukocytosis. Two days after a single dose of rasburicase and initiation of hydroxyurea 1000 mg twice daily, creatinine decreased from 2.94 mg/dL to 1.98 mg/dL, while uric acid and white blood cells decreased to 5.5 mg/dL and 48.5 × 10³ cells/mm³, respectively. Rasburicase is safe and highly effective for the prophylaxis or treatment of hyperuricemia in patients with leukemia or lymphoma [15]. As lysozymuria is hypothesized to stem from the high concentration of circulating monocytes, cytoreductive therapy with hydroxyurea would be expected to reduce lysozymuria and hence improve renal function.
An important indicator of transformation from CMML to AML can be AKI. CMML has a poor prognosis, and the risk of transformation to AML is directly proportional to age and blast count. For 2 years prior to admission, our patient had a 0% blast count in his peripheral blood, consistent with the CMML-0 subtype, which has a 31-month median survival time [16]. There should be a high index of suspicion for transformation to AML in any patient with CMML who presents with AKI, particularly after excluding pre- and post-renal causes. Furthermore, once pre- and post-renal causes are ruled out, intra-renal causes such as lysozymuria and hyperuricemia, even in the absence of clinical tumor lysis syndrome, should be considered as precipitating factors for AKI.
Evaluation and modifications of media for enumeration of Clostridium perfringens.
The suitability of the Shahidi-Ferguson perfringens, TSC (tryptose-sulfite-cycloserine), and oleandomycin-polymyxin-sulfadiazine perfringens agars for presumptive enumeration of Clostridium perfringens was tested. Of these, the TSC agar was the most satisfactory. The TSC agar method was improved by eliminating the egg yolk and using pour plates. The modified method allowed quantitative recoveries of each of 71 C. perfringens strains tested and is recommended. For confirmation of C. perfringens, the nitrite test in nitrate motility agar was unreliable, particularly after storage of the medium for a few days. In contrast, positive nitrite reactions were obtained consistently when nitrate motility agar was supplemented with glycerol and galactose.
SPS agar selectively inhibits growth or interferes with the formation of black colonies by the sulfite-reducing Enterobacteriaceae and Achromobacteriaceae; it also inhibits growth of most other facultative anaerobes and of the genera Pseudomonas, Bacillus, and Lactobacillus (1, 5). However, low recoveries of C. perfringens in commercial SPS agar have been reported (12; L. F. Harris and J. V. Lawrence, Bacteriol. Proc. 70:6, 1970). Hauschild et al. (7) recovered 12 strains of C. perfringens quantitatively in SPS agar prepared in the laboratory from its ingredients, but in only one out of four commercial lots of SPS. Handford and Cavett (4) and Harmon et al. (5) also obtained low recoveries in laboratory-prepared SPS agar. In this laboratory, we have usually obtained complete recoveries of C. perfringens in SPS agar prepared from its ingredients, but in two preparations the recoveries of some C. perfringens strains were below 1% (D. Dobosch and A. H. W. Hauschild, unpublished data). In one preparation, the cause could be traced to a particular lot of yeast extract. It appears that the selective ingredients of this agar are at a level where a slight adverse change in the medium may result in inhibition of C. perfringens. TSN agar has been used less extensively than SPS agar, but the few reports on the suitability of this medium indicate that it is inhibitory to a number of C. perfringens strains (4, 5; Harris and Lawrence, Bacteriol. Proc. 70:6, 1970).
SFP agar appears to allow quantitative recovery of C. perfringens (4-6, 12). Unfortunately, it does not prevent growth of a large number of facultative anaerobes, some of which are sulfite reducing (6, 12). Its applicability, therefore, seems to be limited to specimens in which C. perfringens is the predominant microorganism, i.e., foods responsible for C. perfringens enteritis or fecal samples from patients recovering from the disease. The use of neomycin-blood agar, commonly used in the United Kingdom (8, 14), is similarly limited to investigations of food-poisoning incidents. Another disadvantage of the SFP agar is its relatively elaborate preparation: it requires addition of fresh egg yolk, surface plating, and pouring of cover agar.
Harmon et al. (6) modified SFP agar by replacing polymyxin B and kanamycin with 0.04% D-cycloserine. This antibiotic had been shown to selectively inhibit growth of essentially all of the common facultative anaerobes (2). In this modified medium (TSC agar), each of 10 C. perfringens strains tested was enumerated quantitatively by Harmon et al. (6).
Presumptive enumeration of C. perfringens is followed by confirmatory tests. The simplest of these involves stab culturing of an adequate number of black colonies into nitrate motility (NM) agar (1). However, the nitrite test as described by Angelotti et al. (1) is unreliable (3, 12). Shahidi and Ferguson (12), therefore, introduced egg yolk into their medium and proposed to enumerate only black colonies with an opaque halo around them and to confirm these in lactose motility (LM) agar. Of the clostridial species that produce sulfide as well as lecithinase, only C. perfringens is nonmotile and lactose positive. In our experience, this method has the following main shortcomings: (i) several C. perfringens strains do not produce a discernible halo after 20 to 24 h of growth in SFP and TSC agars; (ii) due to excess gas formation in LM agar, nonmotility of these isolates is difficult to ascertain.
This work was initiated to evaluate the suitability of the SFP and TSC agars for enumeration of C. perfringens and to determine the conditions required to obtain consistent results in the nitrite motility test. While this work was in progress, Handford and Cavett (4) published a note on the enumeration of C. perfringens in OPSP (oleandomycin-polymyxin-sulfadiazine perfringens) agar. An evaluation of this medium is included in the present paper.
MATERIALS AND METHODS
Cultures. Seventy-one strains of C. perfringens were examined; 51 of these were isolated from food-poisoning incidents, 11 from pathological specimens, 7 from soil and normal feces, and 2 were of unknown origin. Strains were supplied by C. R. Amies, Willowdale, Ontario (six strains); R. J. Avery, Hull, Quebec; and others. The working cultures were preserved in 15% glycerol (10) at -18 C; they were thawed, inoculated into screw-cap test tubes containing 15 ml of cooked meat medium (Difco), and incubated at 37 C for 20 h.
Enumeration procedures. The cultures were diluted in 0.1% peptone (13). When egg yolk-containing media were used, 0.1-ml volumes of diluted culture were spread on the agar surface in standard petri plates. Two plates were used per dilution. When completely dry, the surface was covered with about 10 ml of cover agar. Egg yolk-free media were used in pour plates with 1.0-ml volumes of diluted culture per plate. All plates were incubated anaerobically at 37 C for 20 h.
All plating media were prepared from the same agar base consisting of 1.5% tryptose (Difco), 0.5% Soytone (Difco), 0.5% yeast extract, 0.1% ferric ammonium citrate (British Drug Houses), 0.1% sodium metabisulfite (Na2S2O5; British Drug Houses), and 2% agar. The ingredients were dissolved in distilled water to either 92% of the final volume to allow for subsequent addition of egg yolk suspension, or to the final volume. The pH was adjusted to 7.6 before addition of the agar. The agar base was also obtained commercially (SFP agar base, Difco). Antibiotics and egg yolk suspension were added to the autoclaved medium at 50 C.
SFP agar. Complete SFP agar was prepared by adding to 920 ml of agar base: the contents of one antimicrobial vial P (30,000 U of polymyxin B [Difco] in 10 ml of distilled water); 4.8 ml of the contents of an antimicrobial vial K (25 mg kanamycin [Difco] in 10 ml of distilled water); and 80 ml of egg yolk suspension containing one egg yolk per 20 ml of 0.85% NaCl. The SFP cover agar had the same composition as the complete SFP agar, except that it contained no egg yolk.
Media with D-cycloserine. The second group of plating agars contained varying amounts of D-cycloserine (D-CS; Nutritional Biochemical Corp., Cleveland, Ohio) instead of polymyxin B and kanamycin (6). The medium containing 400 µg of D-CS per ml (0.04%) is identical with the TSC agar of Harmon et al. (6). The antibiotic was added as a 4% filter-sterilized solution in water. The plating procedure was as described for the SFP agar.
The third group of plating agars differed from the second in two aspects: no egg yolk was added, and they were used in pour plates only.
OPSP agar. Details for the preparation of OPSP agar not contained in the note of Handford and Cavett (4) were obtained by personal communication. The basic ingredients, including ferric ammonium citrate and sodium metabisulfite, were the same as in the SFP and TSC agars. The final concentration of sodium sulfadiazine (molecular weight 272; American Cyanamid Co., Pearl River, N.Y.) was 109 mg/liter, which corresponds to the 0.01% sulfadiazine (molecular weight 250) used by Handford and Cavett. Concentrations and origins of the antibiotics were the same as in the work of these authors; the final concentrations of oleandomycin phosphate (Pfizer Co., Montreal) and polymyxin phosphate (aerosporin; Burroughs Wellcome Co., Montreal) were 0.5 mg/liter and 10,000 IU (equivalent to 1.0 mg of polymyxin standard) per liter, respectively. The OPSP agar was used without egg yolk and in pour plates.
The results of all enumerations were expressed as percentages of the counts in the corresponding antibiotic-free control medium. Confirmatory tests. Single colonies were stab-inoculated into LM agar (12), NM agar (1), and NM agar supplemented with 0.5% each of glycerol and galactose (11). The initial comparison followed the procedures of Shahidi and Ferguson (12) and of Harmon et al. (5, 6), but the experiment revealed two considerable shortcomings of both media. (i) Most C. perfringens strains produced large colonies in SFP and TSC agars; counts of over 50 per plate therefore became progressively inaccurate, and 10-fold dilutions were often inadequate. (ii) Of 21 strains, 8 had no discernible opaque halos around the black colonies after the first day of incubation; presumably, such colonies would not be counted in the procedure of Shahidi and Ferguson (12). These drawbacks, as well as the lengthy plating procedure, are all associated with the dependence of the method on the egg yolk reaction, which had been introduced because the nitrite motility test was unreliable. The following experiments were designed to determine the conditions (i) for quantitative enumeration of C. perfringens in a selective medium without egg yolk and (ii) for consistent nitrite reactions in the confirmatory tests.
Enumeration in egg yolk-free agar with D-CS. Table 1 shows the recoveries of 71 strains in egg yolk-free agar with different concentrations of D-CS. The recoveries were essentially quantitative at D-CS concentrations of 200 and 400 µg/ml; the lowest count at 400 µg/ml was 64% of the count in the control medium. Several strains were partially or totally inhibited at D-CS concentrations of 600 and 800 µg/ml, with mean recoveries of 63 and 39%, respectively. About 40% of the strains listed in Table 1 were tested in medium with the commercial agar base; due to supply problems, the remaining strains were tested in medium prepared from the ingredients. This did not appear to affect the results. A detailed table showing the recoveries of each strain is available upon request.
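Expressing each count as a percentage of the antibiotic-free control, as done throughout these experiments, can be sketched as follows (the strain labels and counts here are invented for illustration, not the paper's data):

```python
def recovery_percent(test_count, control_count):
    """Recovery of a strain, as percent of the antibiotic-free control count."""
    return 100.0 * test_count / control_count

# Hypothetical (test, control) counts in CFU/ml for three strains
# at one D-CS concentration:
counts = {"A": (1.9e7, 2.0e7), "B": (6.4e6, 1.0e7), "C": (3.9e6, 1.0e7)}

recoveries = {s: recovery_percent(t, c) for s, (t, c) in counts.items()}
mean_recovery = sum(recoveries.values()) / len(recoveries)  # ~66%
```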
At all concentrations of D-CS (Table 1), the strains produced only black colonies, even at the surface. However, a few strains occasionally produced colonies at the surface with a narrow, white halo around the black center.
The medium with the highest D-CS concentration (400 µg/ml) that did not significantly reduce the recoveries was therefore chosen for the confirmatory nitrite tests, whose results are summarized in Table 2. Of the 142 isolates (duplicate colonies of the 71 strains), only 112 showed a positive nitrite reaction in basic NM medium. Five tubes contained traces of nitrite as evidenced by a faint red color, and 25 tubes contained no detectable nitrite. Most of the negative tubes showed only little growth. All of the isolates grown in supplemented NM medium showed good growth and produced positive nitrite reactions; most of these were very intense, in contrast to the reactions in basic NM medium (Table 2). Two strains each of C. sporogenes and C. bifermentans were used as negative controls; none of these showed a color reaction.
During incubation, the pH of the C. perfringens stab cultures dropped from 7.1 to 6.7-6.9 in NM agar and to 5.6-6.1 in supplemented NM agar. Uninoculated tubes with supplemented NM agar, adjusted to pH 5.5, were therefore incubated as additional controls; they were all negative for nitrite.
All NM media were used within 2 weeks after preparation, and all were de-aerated before stabbing. In a separate experiment, we compared the suitability of freshly prepared, supplemented NM medium with the same medium stored for 5 weeks at 4 C. No difference was found. In contrast, basic NM medium deteriorated rapidly during storage: in the fresh medium, 24 out of 28 isolates showed positive nitrite reactions, and 4 gave trace reactions; in the same medium stored for 3 weeks at 4 C, only two isolates produced nitrite; the remaining 26 were negative for nitrite.
Comparison of surface-plated egg yolk media with egg yolk-free pour media. We compared the recoveries of 19 C. perfringens strains in egg yolk-free pour medium with D-CS (Table 1) and in surface-plated medium with egg yolk and cover agar (6). The SFP medium was included for comparison. In both media containing 400 µg of D-CS per ml and in SFP agar, the recoveries were essentially quantitative (Table 3). At the higher concentrations of D-CS, the recoveries in medium with egg yolk were considerably lower than in egg yolk-free medium. The differences were likely due to exposure of the C. perfringens cells to high oxygen tension in the surface plating procedure. The results demonstrate that the recoveries of C. perfringens in the proposed procedure are equal to or higher than recoveries in the existing procedures.
In this work, we have not tested the selective inhibition of single strains of facultative anaerobes by the EY-free TSC agar. However, applications of this medium for enumeration of C. perfringens in naturally contaminated foods and in fecal specimens (A. H. W. Hauschild and R. Hilsheimer, manuscript in preparation) have shown essentially the same degree of selectivity as that of the egg yolk medium of Harmon et al. (6).
Some shortcomings of the media containing egg yolk have been listed above: (i) the low selectivity of SFP agar; (ii) the relatively elaborate procedures; (iii) the frequent occurrence of C. perfringens colonies without discernible halos (false negatives); and (iv) the large and frequently spreading colonies which make 10-fold dilutions impractical. We have also found that SFP agar allows growth of egg yolk-positive facultative anaerobes from foods. In some cases, these organisms produced completely opaque plates and thus masked the egg yolk reaction of C. perfringens. The lack of selectivity of the SFP agar has been overcome by replacing it with TSC agar (6). The remaining shortcomings of the egg yolk agars may be overcome by using EY-free TSC agar in pour plates and stab-culturing black colonies in supplemented NM agar for confirmation of C. perfringens.

Comparison of D-CS from different sources. Since we did all of our work with D-CS from a single supplier (Nutritional Biochemicals Corp.), its effect on the enumeration of C. perfringens was compared with that of D-CS from another company (Sigma Chemical Co., St. Louis, Mo.). Five C. perfringens strains were tested. Essentially the same results were obtained with D-CS from both suppliers (Table 4).
Comparison of EY-free TSC agar with OPSP agar. Table 5 shows the recoveries of 22 C. perfringens strains in EY-free TSC and OPSP agars. As in preceding experiments (Tables 1 and 3), the recoveries of all strains were essentially quantitative in EY-free TSC agar. Twenty of these strains were also enumerated quantitatively in OPSP agar, but one of them (8247) produced only pin-point colonies that were difficult to count. We have also modified the OPSP agar (9) by replacing its antibiotics with D-CS; the modified medium has not as yet been thoroughly tested.
Advances and potential of regenerative medicine in pediatric nephrology
The endogenous capacity of the kidney to repair is limited, and the generation of new nephrons after injury, needed for adequate recovery of function, remains an unmet need. The discovery of factors that promote the endogenous regenerative capacity of the injured kidney, or the generation of transplantable kidney tissue, represent promising therapeutic strategies. While several encouraging results have been obtained after administration of stem or progenitor cells, stem cell secretome, or extracellular vesicles in experimental kidney injury models, very little clinical data exist from which to draw conclusions about their efficacy. In this review, we provide an overview of cutting-edge knowledge on kidney regeneration, including the pre-clinical methodologies used to elucidate regenerative pathways, and describe the perspectives of regenerative medicine for kidney patients.
Introduction
Chronic kidney disease (CKD) in children poses a high burden for patients and their families and is a global health care problem. CKD is defined as abnormal kidney function that is present for more than 3 months, with implications for health [1]. The childhood incidence of CKD in Europe is estimated to be around 11-12 per million of age-related population (pmarp) for stages 3-5, and its prevalence is around 55-60 pmarp [2,3]. Both the incidence and the prevalence of CKD are higher in males due to the high frequency of congenital abnormalities of the kidney and urinary tract (CAKUT) [2]. Incomplete recovery from acute kidney injury (AKI) can also result in CKD; however, developmental defects and hereditary diseases are the main causes of CKD from birth until the age of 4. Between 5 and 14 years, hereditary diseases, nephrotic syndrome, and systemic diseases most frequently underlie permanent kidney damage. From 15 until 19 years, mainly glomerular diseases are responsible for the onset of CKD in adolescents and young adults [3]. CKD is a progressive disease that cannot be effectively treated. It is associated with numerous comorbidities, an impact on child development, and decreased quality of life, especially in those with kidney failure in need of kidney replacement therapy (KRT) [4]. The median incidence of KRT in children (0-19 years) is around 9 pmarp, and the prevalence is around 65 pmarp worldwide [2]. Kidney transplantation is currently the best solution for patients with kidney failure, but it is associated with a shortage of donor organs and therefore long waiting lists, risk of rejection, and limited lifetime of the donor organ. From the clinical perspective, there is a high need for novel treatments for pediatric kidney patients.
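The pmarp figures quoted above are simply case counts scaled to one million persons of the age-related population. A small sketch (the population size and case counts below are hypothetical, chosen only to land in the quoted ranges):

```python
def pmarp(cases, age_related_population):
    """Rate per million of age-related population (pmarp)."""
    return 1e6 * cases / age_related_population

# Hypothetical country with 12 million inhabitants aged 0-19 years:
incidence = pmarp(138, 12_000_000)   # 138 new CKD stage 3-5 cases/year -> 11.5 pmarp
prevalence = pmarp(690, 12_000_000)  # 690 living patients -> 57.5 pmarp
```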
Regenerative medicine has emerged as an important research field during the past decade, focusing on disease modeling and on improving, renewing, or replacing tissue function. While genetic kidney diseases are a prevalent cause of CKD in children, genetically modified mouse models do not fully replicate human physiology; therefore, stem cell-derived systems emerge as a promising tool for studying human disease mechanisms and drug testing [5]. Additionally, various strategies have already been used to boost the endogenous regenerative capacity of the kidneys, or to create replacement of organs using organoids (Table 1) and 3D bioprinting. However, the kidney is an extremely challenging organ because of its anatomic and cellular complexity. Therefore, before being able to regenerate the kidney, a detailed understanding of kidney development and kidney repair mechanisms is essential, as these processes seem to be interconnected.
How kidney development connects to the mechanism of regeneration
Knowing the complexity of the mature kidney, one can marvel at its humble beginning. The current knowledge of kidney development is based on decades of studies in animal models such as zebrafish, frogs, mice, and rats. In this review, we will focus on human kidney development and its relation with regeneration.
The human kidney originates from a succession of three stages: pronephros, mesonephros, and metanephros. The latter is the final kidney prototype, and it arises through cell-to-cell interactions and signaling pathways between the metanephric mesenchyme (MM) and the ureteric bud (UB), both deriving from the intermediate mesoderm [6]. The interplay of both structures is greatly influenced by the production of nephrogenic factors derived from a restricted kidney stem/progenitor cell (KSPC) population (Table 1) localized at the cap mesenchyme (CM) (Fig. 1). Expression of the transcription factor SIX2 (Sine Oculis Homeobox Homolog 2) is a determinant feature of KSPCs, since SIX2+ KSPCs will give rise to all cell types of the nephron [7].
Branching of the UB tree is accompanied by the release of several factors such as Wnt9b and later Wnt4, and the concomitant downregulation of SIX2 gradually transits KSPCs into epithelial cells through a process of mesenchymal-to-epithelial transition (MET). Consequently, these epithelial cells form the pre-tubular aggregates from which the renal vesicles (RVs) emerge [6]. The RV evolves into a comma-shaped body and an S-shaped body. Endothelial cells migrate into the cleft of the S-shaped body to assist in the formation of the renal corpuscle. The upper part of S-shaped bodies fuses with what has been the UB, to enable the connection to the collecting duct [8]. As a result of epithelial differentiation, the glomerular capsule, the podocytes, the descending and ascending limb, and the distal tubules are formed, all originating from the SIX2+ KSPC. Next, the kidney vasculature emerges in close contact with other structures of the nephron. When ready, the podocytes encapsulate the glomerular capillaries, and the two structures fuse to the glomerular basement membrane. The cells surrounding the nephrons (interstitial cells) originate from another pool of progenitor cells, which are FOXD1+ (Forkhead Box 1), localized at the top of the CM [9]. Stemming from the UB, larger kidney structures emerge: the collecting system and the renal pelvis. In humans, by the 36th week of gestation, the kidneys are functionally mature, and the SIX2+ KSPC population is exhausted; therefore, no new nephrons are formed [10], which limits the regenerative potential of the organ (Fig. 1). Children born prematurely are at increased risk of CKD, likely due to decreased nephron number and exposure to post-natal nephrotoxins. They are also at higher risk of neonatal acute kidney injury (nAKI), which may further decrease the viable nephron number and potentiate progression to CKD [11].

(From Table 1: the secretome is the total set of molecules secreted by cells and consists of bioactive molecules such as cytokines/chemokines and growth factors; extracellular vesicles (EVs) are secreted by cells with a variable cargo composition, originate from the endosome or plasma membrane, and can establish cell-cell communication.)
Kidney regeneration after birth
What do we know about the behavior of kidney cells after injury, and how does this relate to potential therapeutic approaches? Upon AKI, proximal tubular cells rapidly lose their brush border and dedifferentiate into a mesenchymal phenotype. Processes like cell migration, detachment, apoptosis, and necrosis result in denudation of the tubular basement membrane. In the glomerulus, injury can lead to podocyte loss, with complete glomerular collapse when the podocyte cell number falls below 20% of the original amount [12]. The capacity of the kidney to functionally regenerate upon AKI is a major determinant of the outcome, but no specific therapeutic approach has been shown to improve the effectiveness of regeneration so far [13]. Following AKI, a variety of intrinsic repair processes are activated rapidly, but repetitive or prolonged injury may lead to an unwanted maladaptive regenerative process. Maladaptive repair of tubular cells occurs when epithelial cells fail to fully re-differentiate or become growth arrested in the G2 cell cycle phase, becoming an additional source of profibrotic factors, inflammation, and senescence, leading to CKD and eventual kidney loss [14] (Fig. 1). Mechanisms of (limited) kidney regeneration have been an ongoing debate in the stem cell research field during the last decades. Whereas many studies attempted to identify cells with the ability to generate nephrons, growing evidence shows that the regeneration process arises through phenotypic and metabolic plasticity of tubular cells mediated by the microenvironment. Previously, it has been postulated that bone marrow-derived stem cells could translocate into the kidney upon damage, but that has not been substantiated, and the possibility of regeneration driven by extrarenal cells has been disregarded [15,16].
Although it has previously been suggested that a specific population of CD133+CD24+, Vimentin+, and PAX2+ cells would function as stem cells in the kidney, displaying self-renewal properties and the potential to differentiate into epithelial cells to repair damaged tissue [17,18], it is now clear that after kidney injury, the repair process is accomplished by remaining reparative cells within the tubule that dedifferentiate, proliferate, and re-differentiate without any contribution from a preexisting specific progenitor cell population [19,20]. To elucidate this process, cells from mice were genetically labeled during ischemic injury to mark individual damaged tubular cells and to follow subsequent recovery. The number of labeled cells increased significantly upon injury, indicating that tubular epithelium can arise from any surviving tubular cell and not from a fixed progenitor population [21]. The debate goes on to define the exact mechanism of repair and regeneration. Although kidney function is restored to baseline after AKI, patients frequently develop CKD, which demonstrates that re-entering the cell cycle is insufficient to fully regenerate nephrons [22]. This highlights the importance of research and the development of techniques to identify target pathways and the factors that drive effective regeneration to boost this process.
Regeneration versus development
In rodent models, injured tubular cells start to re-express genes and proteins that are active during development, such as PAX2, LHX1, and SOX9, followed by increased proliferation rates, release of several growth factors such as epidermal growth factor (EGF), insulin-like growth factor (IGF), and transforming growth factor-β (TGF-β), and involvement of the canonical Wnt signaling pathway [23,24]. A recent study has demonstrated a similar mechanism in humans. Using intercellular cross-talk analysis, it has been shown that upon AKI, tubular epithelial cells activated the transcription factor SOX9, and these cells released factors such as VEGF, complement, SPP1, and CALCR, which influenced the surrounding cells to facilitate endogenous repair [25]. In the same study, the authors identified S100 calcium-binding protein A9 (S100A9) as a protein that enhances cell proliferation and might be directly related to tissue regeneration [25].
Still, upregulation of genes that are specific for nephrogenesis, such as SIX1, SIX2, CITED1, OSR1, LGR5, GDNF, and RET, has not been reported after injury or during repair (Fig. 1). Therefore, kidney regeneration does not fully recapitulate development.
Recently, it has been shown for the first time that injection of SIX2+ neonatal kidney stem/progenitor cells (nKSPCs) into human deceased donor kidneys induces the de novo expression of SIX2 in proliferating proximal tubular cells. These cells were derived from the urine of neonates born prematurely; therefore, they endogenously express SIX2 [26]. nKSPCs were injected via the kidney artery into human grafts that were not used for transplantation and were perfused for 6 h in normothermic machine perfusion (NMP). Besides SIX2 expression, these kidneys showed upregulation of regenerative markers, such as SOX9 and VEGF, and had significantly lower levels of kidney injury biomarkers and reduced inflammatory cytokines [27]. The reactivation of SIX2, a nephrogenic factor, might be related to the initiation of an endogenous regenerative repair of the kidney tissue and can reflect a possibility to therapeutically re-induce nephrogenesis.
Nevertheless, the mechanisms of regeneration are not fully understood, and several technologies have been developed to study and improve the regenerative potential of the kidney tissue for therapeutic purposes.
Fundamental research pushes forward the field of regenerative medicine
Innovative technologies, including genomics, transcriptomics, proteomics, and metabolomics (conjunctively referred to as "omics"), have generated large data set collections and analyses to improve our understanding of the basic principles and mechanisms of kidney development, repair, and regeneration. Based on knowledge acquired from omics studies, researchers were able to develop protocols to induce pluripotent stem cells (iPSCs) to form nephrons in vitro, the so-called kidney organoids. Single-cell RNA sequencing (scRNA-seq) data was used to optimize kidney cell differentiation and to reduce the rate of non-kidney cell types [28]. Kidney organoids are nowadays one of the most important in vitro models to elucidate kidney development.
Transcriptomic characterization of repair and regeneration
Transcriptomic studies are based on RNA sequencing (RNA-seq) technology and indicate the transcriptional activity of genes. RNA-seq technology has been used for analyzing kidney biopsies and supports diagnosis and prognosis in some kidney diseases. Bulk RNA-seq, scRNA-seq, and single-nuclei RNA-seq (snRNA-seq) are three commonly used RNA-seq techniques. Bulk RNA-seq provides an overview of average gene expression. ScRNA-seq and snRNA-seq can provide information about each cell and show the potential to find molecular differences that are linked only to specific cell types. ScRNA-seq can measure both cytoplasmic and nuclear transcripts, whereas snRNA-seq can only measure nuclear transcripts. Not many RNA-seq studies have been performed to characterize repair and regeneration in AKI or CKD; however, some studies report molecular characterization of the transition from acute to chronic kidney injury following ischemia/reperfusion injury (IRI). An scRNA-seq study defined key differences in adaptive and fibrotic repair, suggesting potentially druggable pathways. The authors found that specific maladaptive/profibrotic proximal tubule (PT) cells expressed proinflammatory, profibrotic cytokines and myeloid cell chemotactic factors (e.g., CXCL2, IL1b, CCL3) after long IRI. Additionally, maladaptive PT cells showed a marked enrichment of ferroptosis and pyroptosis. Pharmacological targeting of pyroptosis/ferroptosis (VX-765, a pyroptosis inhibitor, and liproxstatin, a ferroptosis inhibitor) in vivo induced cells towards adaptive repair and improvement of fibrosis [29]. This supports the potential of RNA-seq for the identification of regenerative therapeutic targets [30]. SnRNA-seq analysis of biopsies of 8 individuals with severe AKI revealed common epithelial cell response patterns, including oxidative stress, hypoxia, interferon response, and epithelial-to-mesenchymal transition [31]. Similarly, scRNA-seq of urine samples demonstrated that urinary cells in adaptive states are potentially derived from the thick ascending limb and show regenerative signatures by expressing PAX2, SOX4, and SOX9, which were predominantly expressed in the presumed progenitor clusters [32]. These outcomes underline the possibility of applying RNA-seq technology in humans, but still leave a gap between research and clinical application.
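A common first step in scRNA-seq analyses like those described above is comparing a marker gene's mean expression between two cell groups as a log2 fold change. A dependency-free sketch (the SOX9 counts and group labels below are hypothetical, not data from the cited studies):

```python
import math

def log2_fold_change(expr_a, expr_b, pseudocount=1.0):
    """log2 fold change of a gene's mean expression between two cell groups.

    A pseudocount avoids division by zero for genes absent in one group.
    """
    mean_a = sum(expr_a) / len(expr_a)
    mean_b = sum(expr_b) / len(expr_b)
    return math.log2((mean_a + pseudocount) / (mean_b + pseudocount))

# Hypothetical normalized SOX9 counts per cell:
adaptive_pt = [5, 7, 6, 9, 8]  # presumed regenerating proximal tubule cells
healthy_pt = [0, 1, 0, 2, 1]   # uninjured proximal tubule cells
lfc = log2_fold_change(adaptive_pt, healthy_pt)  # > 2, i.e., >4-fold up
```

In practice this comparison is paired with a statistical test across many cells; the sketch shows only the effect-size arithmetic.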
Metabolomic characterization of repair and regeneration
Besides transcriptomics, metabolomic studies may reveal the molecular signatures of cells and tissues upon injury and repair. A very promising advance in metabolic pathways has been made by using isotope tracing in dynamic metabolic processes [33] based on matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI). This technology might provide a truly comprehensive understanding of the interplay between biochemical alterations and cell type-specific functions, metabolic fluxes, and dynamic interpretations of cellular states [34]. Using this technology in a mouse model of AKI, the authors concluded that maladaptive repair in tubular cells is characterized by differences in the production of lactate, which could possibly be the result of higher glycolytic activity and, as injury progresses, a concomitant reduction of tricarboxylic acid (TCA) cycle metabolite consumption [34]. Over the past decades, metabolomics has added a promising number of new biomarkers through better pathophysiology knowledge, paving the way for insightful perspectives on the management of different kidney diseases [35]. However, the metabolome varies greatly with age, diet, drug consumption, lifestyle, and, in adults, with gender, making it difficult to compare studies in adults and in children and neonates. More metabolomic studies focused on pediatric patients are required to determine the practical clinical impact of metabolomics in conditions of kidney damage and repair.
Kidney organoids and their potential for clinical implementation
In the field of regenerative medicine, organoid cultures are widely used to model disease, study physiology, and develop clinical applications, like drug screening and personalized medicine approaches, as well as regenerative therapies. Organoids are self-organized 3-dimensional (3D) tissue cultures of stem cells, which have self-renewal and differentiation capacity. They contain multiple organ-specific cell types which spatially organize and can recapitulate organ function (Table 1). Here, we will describe two types of kidney organoid models which have been developed in the past 5 years: iPSC-derived kidney organoids and adult stem cell (ASC)-derived tubuloids (Table 1).
iPSC-derived organoids
iPSC-derived organoids have emerged as advanced in vitro models of kidney development, physiology, and disease. iPSC-derived organoids reflect mainly kidney developmental aspects by mimicking nephrogenesis. When iPSCs are differentiated into kidney organoids, most of the nephron segments are present: both proximal and distal tubule cells, glomerular structures, and loop of Henle as well as collecting duct cells can be found [36]. Recently, novel hybrid protocols have been designed to culture podocyte-like cells in iPSC-derived kidney organoids [37]. Patient mutations can be created by genome-editing technologies like CRISPR/Cas9 in order to model diseases, or patient-derived cells can be reprogrammed to have pluripotent features [38] (Table 2). To date, iPSC-based disease modeling has successfully been used for studying genetic kidney diseases such as cystinosis [39], nephronophthisis (NPH), and nephrotic syndrome [27,40,41], as well as for drug development (Table 2). Nevertheless, there are several limitations of using iPSC-derived kidney organoid models. The organoids most closely resemble human embryonic kidney tissue and not a mature organ [36]. In addition, a vascular system is missing [36]. So far, both endothelial cells (CD31+) and interstitium could be induced in kidney organoid culture [42]; however, this still needs to be further developed into more mature and functional vascularization. Recently, iPSC-derived vascular organoids were shown to be a new cell source of functional and flow-adaptive vascular cells for the creation of a perfused macrovessel model [43], which might inspire future vascularized kidney organoid culture. This new model recapitulates the bi-layer vessel architecture and allows in vitro studies of vascular disease [43]. Unfortunately, after directed differentiation protocols toward kidney organoids, still 10-20% off-target non-renal cells appear in culture, including mainly neuronal-like and muscle-like cell populations. Protocols on how to improve iPSC-derived organoid cultures are under development [44]. For example, iPSC organoid-derived tubuloid cultures showed disappearance of immature and off-target cell populations, accessibility of the apical site, and prolonged expansion capacity [45]. Notably, iPSC-derived organoids generated by knocking out a single gene of interest miss potential genetic or epigenetic modifiers that can be present in individual patients. On the other hand, this provides a model to identify potential modifiers when comparing patient-derived organoids with gene-edited iPSC-derived organoids.
Tubuloids
Kidney organoids can also be generated from ASCs obtained either from kidney biopsy material or from urine, and thus carry the exact genetic and epigenetic information of the patient. These 3D structures are called tubuloids (Table 1), as they mainly represent the tubular epithelium and lack differentiation into glomerular cells. In tubuloid cultures, podocytes and stroma are absent, and like in iPSC-derived organoids, the vasculature is also absent. Tubuloids can be long-term expanded, without the need of genetic modification and without the risk of off-target differentiation [46]. To date, genome-editing protocols in tubuloids have not yet been published. Recently, tubuloids derived from the urine of cystinosis patients were used to develop novel treatment strategies [47] (Table 2), indicating their power as a disease model for translational applications. In this study, an omics-inspired drug screen revealed a novel combination therapy, which has been tested in patient-derived tubuloids. Age- and gender-matched healthy donor-derived tubuloid cultures are used as controls. Biobanks of healthy donor- and patient material-derived tubuloids will facilitate the development of personalized medicine and will create a short line from bench to bedside. Kidney organoids can be cultured non-invasively from the urine of (pediatric) kidney disease patients, as they are long-term expandable and genetically stable cultures. For pediatric nephrology, a urine cell-derived tubuloid biobank [48] will be of interest to study hereditary kidney disease. For children and adolescents, non-invasive ways of collecting primary patient cells are preferred. As an advancement of primary urine cell cultures in 2D [49], the 3D tubuloid cultures will provide an improved cell culture model for fundamental and translational research, including drug development (Fig. 2). For example, drug screening on tumor organoids derived from childhood malignancies showed successful identification of potential therapeutic agents targeting pediatric tumors [50]. Kidney organoid cells can also be cultured in flow chambers [51]. The introduction of flow in perfused systems of tubuloid cells on a chip reflects another way to create a more advanced in vitro kidney model for research applications [51]. ASC-derived organoids were first developed to model the murine intestine [52]. To date, one key application of the intestinal organoids is the development of forskolin swelling assays allowing drug response monitoring in cystic fibrosis patients. In vitro swell responses can be monitored and correlate with the individual's clinical response to therapy [53]. This is an important showcase stressing the value of organoids for clinical applications.
Application of organoids in future transplantable kidney tissue
Kidney organoid cultures are also further investigated in the direction of replacing damaged kidney tissue (Fig. 2). Several attempts have been made to create functional kidney structures by transplanting iPSC-derived organoids in mice. Organoid transplantation under the kidney capsule or subcutaneously resulted in functional glomerular perfusion, connection to vascular networks or vascularization, and improved morphogenesis [54,55]. In addition, iPSC-derived nephron sheets which contain many nephrons could be transplanted into immunodeficient mice, and this scalable protocol was demonstrated to produce kidney tissue with glomerular filtration function [56]. A critical step in future experiments is to establish a robust connection between the transplanted organoids and the host's vasculature, and to overcome graft overgrowth by stromal cells in the long term [57]. To further optimize kidney organoid engraftment, material-driven applications refine organoid culture by implementing improved hydrogel engineering [58].
Soft hydrogels demonstrate better performance of kidney organoids in comparison to stiff hydrogels [58,59]. 3D bioprinting of kidney matrix could facilitate high-throughput culture of highly consistent and reproducible organoid cultures in transplantation-compatible hydrogels in the future. No clinical applications of kidney organoids or tubuloids in pediatric nephrology have been reported so far. Nevertheless, over the long term, kidney organoids might have the potential to advance KRT and, as an outlook, contribute significantly to the bioengineering of kidney grafts and of bioartificial kidney tissue. Implantable bioartificial kidneys, which are attached to the systemic circulation, have not yet been clinically tested, as both technical and biological challenges must be overcome. The development of a transplantable auxiliary kidney would hold great promise for people with kidney disease by (partly) replacing kidney function. In comparison to wearable and portable dialysis machines, which are cell-free, a bioartificial kidney would overcome dialysis shortcomings characterized by poor clinical outcomes and low quality of life. Functional requirements which need to be addressed include membrane characteristics, cell characteristics, and functional aspects like toxin excretion and nutrient and water reabsorption capacity. Finding cellular components that are capable of taking over tubular functions in a bioartificial kidney and which are biocompatible at the same time will substantially advance its development [60].
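One of the functional requirements named above, toxin excretion, is commonly quantified with the standard single-pass clearance formula K = Q_b * (C_in - C_out) / C_in. This formula is general dialysis arithmetic, not taken from the review, and the flow rate and concentrations below are hypothetical:

```python
def clearance_ml_min(blood_flow_ml_min, c_in, c_out):
    """Single-pass solute clearance: K = Q_b * (C_in - C_out) / C_in.

    c_in / c_out are the solute concentrations entering and leaving
    the device (any consistent unit); K has the units of blood flow.
    """
    return blood_flow_ml_min * (c_in - c_out) / c_in

# Hypothetical urea concentrations across a bioartificial kidney unit:
k = clearance_ml_min(100.0, c_in=1.0, c_out=0.7)  # ~30 ml/min
```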
The current clinical applications of regenerative medicine: the status of stem cell transplantation and extracellular vesicle injection
In preclinical models of kidney injury, repair and regeneration have been observed upon different types of (stem) cell transplantations, cell secretome injection, and extracellular vesicle (EV) injection (Fig. 2). However, translation of these methodologies to the clinics still faces several challenges.
Cell therapy-based clinical trials
Mesenchymal stromal cells (MSC) (Table 1) have immunomodulatory properties and might play a role in tissue repair, making them the most widely used cell type for cell therapy of damaged kidneys. A few clinical trials have been performed showing the feasibility and safety of MSC infusion in patients [61][62][63][64]. In an 18-month follow-up study, seven patients with CKD of different etiologies, such as hypertension, nephrotic syndrome, and unknown etiology, were infused with 1-2 million autologous bone marrow MSC/kg for safety and tolerability evaluation. Although none of the patients had adverse events related to the therapy, kidney function (GFR and serum creatinine measurements) did not improve [62].
A similar trial of autologous bone marrow MSC transplantation in 6 ADPKD patients confirmed the feasibility of the therapy, and in these patients, serum creatinine levels were significantly improved [63].
In kidney transplantation, MSC therapy shows promising results with potential to induce immunotolerance. In a phase 1 trial, four living-donor kidney transplant patients were given autologous bone marrow MSC 1 day before or 7 days after kidney transplantation while receiving induction therapy. All patients had stable graft function at 5-7-year follow-up, but the pro-tolerogenic effect of MSC was variable. One of these patients was successfully weaned off immunosuppressive drugs and stayed free from anti-rejection therapy with optimal long-term kidney allograft function [65]. In a larger study, patients received MSC infusion 6 and 7 weeks after kidney transplantation in a randomized, prospective, single-center, open-label trial. Twenty-nine patients received MSC and had early tacrolimus withdrawal, while 28 patients were in the control tacrolimus group. Early tacrolimus withdrawal with MSC therapy was feasible and safe, and there was no increased rate of rejection. Kidney function was preserved in both groups, but the MSC-treated patients were not prevented from developing progressive fibrosis [64].
These studies suggest that MSC therapy in humans is feasible and safe on a short-term basis, and that the immunomodulatory effect is promising. Nevertheless, MSCs of different tissues are devoid of differentiation capacity into mature kidney epithelial cells, and trials have not shown effective tissue repair or regeneration leading to improved kidney function. Therefore, it remains crucial to find a superior source of (stem) cells to develop effective kidney tissue regeneration, and kidney-derived cells might be the ideal candidate. Autologous selected renal cells (SRC), a pool of proximal tubule and glomerular cells and other cell subpopulations, such as interstitial cells, were used in a trial of 22 patients with advanced type 2 diabetes-related CKD (D-CKD). In this study, the cell therapy seemed to improve kidney function and possibly halted type 2 D-CKD progression [66]. In another safety study, 18 patients with CKD of unknown cause (stages 3-5) were followed for 36 months after receiving a single infusion of angiogenic/anti-fibrotic autologous adipose-derived stromal vascular fraction (SVF) cells into their kidneys bilaterally via renal artery catheterization. Both kidney structure and function were shown to be improved [67].
Ex vivo cell therapy
With the increasing need for kidney transplantation in the adult and pediatric populations, the criteria for organ recruitment have been extended as an attempt to reduce the waiting time by increasing the donor organ pool. This implies that kidneys of older donors (≥60 years), or of donors aged 50 to 59 years who have two of the following three features: hypertension, serum creatinine >1.5 mg/dl, or death from cerebrovascular accident [68], can be offered to patients. However, these kidneys are of suboptimal quality, rendering grafts more susceptible to ischemia reperfusion injury (IRI) in comparison with organs derived from living healthy donors [69]. These are the targeted organs for receiving ex vivo cell therapy in order to reduce IRI, reduce the immunogenicity of the kidney graft, and promote regeneration of already injured organs.
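The extended-criteria rule summarized above can be written as a simple decision function. The following sketch encodes the age and risk-factor logic of reference [68]; the function and parameter names are illustrative and not from the cited source:

```python
def is_extended_criteria_donor(age, hypertension, creatinine_mg_dl, cva_death):
    """Extended-criteria donor (ECD) rule as summarized in the text [68].

    A donor is ECD if aged >= 60 years, or aged 50-59 years with at least
    two of: hypertension, serum creatinine > 1.5 mg/dl, death from
    cerebrovascular accident (CVA).
    """
    if age >= 60:
        return True
    if 50 <= age <= 59:
        # count how many of the three risk factors are present
        risk_factors = sum([bool(hypertension), creatinine_mg_dl > 1.5, bool(cva_death)])
        return risk_factors >= 2
    return False
```

For example, a 55-year-old donor with hypertension and a creatinine of 1.6 mg/dl meets the criteria, while the same donor with only hypertension does not.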
Ex vivo cell therapy can be performed during machine perfusion (MP), the current method of choice for preserving kidney allografts obtained from deceased donors. The first study of administration of human MSC into human kidneys during MP suggested tissue regeneration by increased cellular proliferative rates and ATP production [70]. Using preterm neonatal KSPC, we have shown that cell therapy of human deceased donor kidneys during normothermic MP (NMP) induced de novo expression of SIX2 in the donor tubular cells while it also upregulated regenerative genes such as VEGF and SOX9 [26,27]. This suggested that kidney stem cells or developmental factors released by these cells are necessary to induce endogenous regeneration in kidneys. Still, long-term perfusions will be fundamental to prove effective regeneration of the kidney tissue leading to improved graft function.
Secretome and EVs
The described beneficial effects of cell therapy have been mainly attributed to paracrine mechanisms [5]. In preclinical studies, the secretome (Table 1) of different kinds of MSCs has shown regulatory function in cell proliferation, cell migration, cell differentiation, and modulation of the immune system. The secretome consists of bioactive molecules such as cytokines/chemokines and growth factors including granulocyte-colony stimulating factor, leukemia-inhibitory factor, macrophage-colony stimulating factor, PGE2, IL-10, TGFβ, IDO, HO-1, HGF, VEGF, FGF, and IGF-1, which can also modulate kidney regenerative responses [70,71]. MSC-derived secretome was shown to drive kidney regeneration by inducing surviving kidney cells to dedifferentiate and replicate to restore the lost kidney cells in animal models [72,73].
Being part of the secretome, EVs (Table 1) play an important role in inducing kidney regeneration. EVs are nanoscale vesicles encapsulated by cytomembranes, which have an important function in intercellular communication and are widely present in body fluids, including blood, urine, and amniotic fluid. Preclinical research has proven that EVs can improve AKI by inhibiting inflammation, apoptosis, and oxidation and by regulating angiogenesis [73][74][75].
In human donor kidneys, MSC-EVs were infused during hypothermic oxygenated perfusion (HOPE). HOPE + EV kidneys had a lower ischemic damage score and better kidney ultrastructure. They had higher HGF and VEGF levels with a lower apoptosis rate than control kidneys. Moreover, HOPE + EV kidneys had lower lactate release and higher glucose levels than controls, suggesting that the gluconeogenesis system was preserved [76].
Only one clinical trial has been performed using EVs in CKD patients. In this study, cell-free cord-blood MSC-derived EVs were administered to 20 CKD patients in stages III and IV (eGFR 15-60 ml/min/1.73 m²). The 20 patients in the treatment group exhibited improvement of eGFR, serum creatinine level, blood urea, and urinary albumin-creatinine ratio compared with the 20 patients in the placebo group at the end of the 1-year study period [77]. Although biopsies of some of the treatment-group patients did not show significant histologic changes, the expression of Ki67 (a marker of proliferation) in some tubular cells confirmed the ability of MSC-EVs to activate tubular cells [77]. Longer observation and a larger number of patients are required to demonstrate the long-term efficacy and safety of EV injection as a treatment to induce kidney regeneration.
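For reference, the CKD stages cited in these trials correspond to the widely published KDIGO GFR categories (eGFR expressed in ml/min/1.73 m²). A minimal lookup, using those standard thresholds rather than anything specific to the trial above, might read:

```python
def ckd_g_stage(egfr):
    """Map eGFR (ml/min/1.73 m^2) to the standard KDIGO GFR category.

    Thresholds are the general KDIGO definitions, not values from this paper.
    """
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"
```

An inclusion window of eGFR 15-60 thus spans categories G3a through G4.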
Future perspectives
Regenerative medicine is a rapidly evolving field. Here, we describe the current state of knowledge and understanding of kidney development, repair, and regeneration. Furthermore, we provide an overview of methodologies to understand and potentially enhance kidney regeneration. Unfortunately, clinical readiness of applications based on preclinical findings using omics, stem cells, or bioartificial transplants has not yet been achieved. However, the field is rapidly moving forward, and detailed knowledge about kidney development, repair, and regeneration paves the way for novel translational solutions for kidney patients (Fig. 2). Nephron-like structures are being cultured in 3D and hold promise for advanced modeling of kidney (patho)physiology, drug screening, and personalized medicine, as well as for regenerative therapies as part of a bioartificial kidney or transplantable kidney tissue. Available omics technologies are propelling our understanding of the mechanisms of kidney injury and repair, which opens opportunities for finding new druggable targets or interventional gene/cellular therapies to finally improve the outcome of kidney disease.
Currently, there are no FDA-approved stem cell-based therapies for CKD. Nevertheless, clinical trials to test the efficacy and safety of stem cell-based therapies for kidney disease are being conducted. In the future, it will be important to also conduct clinical trials specifically designed for children to improve outcomes and advance knowledge. There is no doubt that clinical trials including children, aiming to ameliorate CKD or find a cure for kidney failure, face many uncertainties. In addition, the ethical aspects of these novel regenerative medicine therapies have to be carefully considered. Regarding the long-term goals to create donor organs and to develop personalized regenerative medicine approaches, there is a long way to go. Nevertheless, every day, we get closer to regenerative solutions for kidney patients.
Key summary points
1. During kidney development, functional nephrons are formed; however, this process cannot be repeated during repair and regeneration in a mature kidney.
2. Advanced basic science approaches and advanced organoid cell culture models enable understanding mechanisms of kidney tissue repair and regeneration.
3. Cell(-based) therapy is currently being tested for regenerative medicine applications. However, these cell therapies are not yet available in the clinic for kidney failure patients.
4. Generation of a bioartificial kidney is on the horizon; however, more knowledge about kidney development, repair, and regeneration is required for future progress.
Table 1
Definitions widely used in regenerative medicine

Regeneration: Generation of new nephrons to replace/renew damaged nephrons, leading to effective functionality.
Repair: Restoration of damaged cells or nephrons, leading to effective functionality.
iPSC: Induced pluripotent stem cells are somatic cells that are reprogrammed by overexpression of 4 key transcription factors (cMyc, Sox2, Klf4, Oct4) into pluripotent stem cells.
ASC: An adult stem cell (or tissue stem cell) remains undifferentiated and can replace cells after cell division (multipotent). Their self-renewal capacity is a key characteristic.
MSC: A mesenchymal stromal cell is an adult stem cell that can differentiate into other cell types (multipotent). Human sources of MSCs, which are of stromal origin, include bone marrow, umbilical cord tissue, adipose tissue, and amniotic fluid.
KSPC: Kidney stem/progenitor cells represent a unique population of stem cells derived from developing kidneys. They self-
Table 2
iPSC-derived organoid or tubuloid models in pediatric genetic kidney diseases. Abbreviations: ARPKD, autosomal recessive polycystic kidney disease; NPH-RC, nephronophthisis-related ciliopathy; CAKUT, congenital anomalies of the kidney and urinary tract.
Maltodextrin as a Drying Adjuvant in the Lyophilization of Tropical Red Fruit Blend
Guava, pitanga and acerola are known for their vitamin content and high levels of bioactive compounds. Thus, combining these fruits yields a blend with high nutraceutical potential and strong, attractive pigmentation. In this study, the influence of different proportions of maltodextrin on the lyophilization of a blend of guava, acerola and pitanga was evaluated considering not only the physicochemical, physical and colorimetric parameters but also the bioactive compounds in the obtained powders. The blend was formulated from the mixture and homogenization of the three pulps in a ratio of 1:1:1 (m/m); maltodextrin was then added to the blend, resulting in four formulations: blend without adjuvant (BL0), and the others containing 10% (BL10), 20% (BL20) and 30% (BL30) maltodextrin. The formulations were lyophilized and disintegrated to obtain powders. The powders were characterized in terms of water content, water activity, pH, total titratable acidity, ash, total and reducing sugars, ascorbic acid, total phenolic content, flavonoids, anthocyanins, carotenoids, lycopene, color parameters, Hausner factor, Carr index, angle of repose, solubility, wettability and porosity. All evaluated powders showed high levels of bioactive compounds, and the increase in maltodextrin concentration promoted positive effects, such as reductions in water content, water activity and porosity and improved flow, cohesiveness and solubility characteristics.
Introduction
Red fruits are commonly linked to certain temperate-climate species; nevertheless, the term also aptly applies to a variety of tropical fruits that share reddish pigments, indicating the presence of bioactive antioxidant compounds. Among these examples are guava, acerola and pitanga, which belong to different genera but share these common characteristics.
Guava (Psidium guajava L.) is appreciated by consumers, and it is rich in dietary fiber and bioactive compounds with antioxidant activity [1], possessing the ability to prevent the incidence of chronic and degenerative diseases [2], including arthritis, arteriosclerosis and cancer [3]. The pitanga (Eugenia uniflora L.) has a high nutritional value, standing out for its amount of vitamins, polyphenols and antioxidants; in addition, it has an exotic flavor and aroma [4] as well as several biological activities, such as anti-inflammatory, antidiabetic, antimalarial and diuretic effects. Acerola (Malpighia emarginata L.) is rich in vitamin C, amino acids and phenolic compounds, mainly flavonoids, anthocyanins and carotenoids. It is widely disseminated in several countries [5,6] and has several biological functions, such as an antihyperglycemic effect, anticancer activity against lung cancer and a protective effect against genotoxicity [7].
Despite the sensory and nutritional advantages of consuming fresh fruit, its commercialization in this state involves difficulties and significant losses due to its short shelf life. In addition, consumers often find fresh fruit impractical, so their demand is met by processed derivatives, leading to greater consumption of juices, jellies and other products.
Each fruit has its own bioactive compounds, such as polyphenols and carotenoids, which have antioxidant properties and may contribute to health.The combination of fruits (acerola, guava and pitanga) in the form of a blend can result in a higher concentration of such compounds, enhancing the benefits for the organism and also providing unique gustatory and aromatic experiences.
Under ideal processing conditions, it is possible to generate dehydrated products with safe water content levels, making them healthy, easy to prepare and consume on a daily basis, with characteristics close to those of fresh products [8]. Among the drying methods, lyophilization is the process used by the food industry to produce the highest quality dry products [9], more effectively promoting the preservation of heat-sensitive nutrients and providing stability of these compounds, since it operates at low temperatures and under high vacuum [10].
It is difficult to produce certain freeze-dried products because they have a low glass transition temperature, requiring the addition of an additive to aid the drying process [9]. Several additives are used; however, maltodextrin is one of the most common, especially for obtaining fruit powders, owing to characteristics such as high solubility in water, the ability to generate solutions with low viscosity, being tasteless, being easily biodegradable and offering a good cost-benefit ratio [11,12]. In addition, it supports the shelf life of the product by providing a protective barrier against moisture absorption and helps to improve the texture and mouthfeel of the reconstituted product.
It is important to note that the effects of adding maltodextrin may vary depending on the proportion used, the specific characteristics of the raw materials, the freeze-drying process and other processing conditions [13]. Therefore, it is essential to carry out specific studies and tests to optimize the formulation according to the desired objectives, ensuring the preservation of bioactive compounds and the desired characteristics of the final product. Thus, the objective of this study was to evaluate the influence of different proportions of maltodextrin in the formulation of a blend of freeze-dried tropical red fruits (guava, pitanga and acerola), characterizing the resulting powders in terms of physicochemical, physical and colorimetric parameters and bioactive compounds.
Physicochemical Characterization of Red Fruit Blend Powders
Table 1 presents the average values with the respective standard deviations of the physicochemical parameters evaluated in the blend powders obtained by lyophilization. The water content of all powdered samples differed significantly from each other, decreasing as the maltodextrin concentration increased. A similar trend occurred for water activity (aw), demonstrating that the added maltodextrin aided the lyophilization process and reduced both the water content and aw. Similar behavior was reported by Barroso et al. [14] in lyophilized mangaba pulp (Hancornia speciosa Gomes) with the addition of 0 to 30% maltodextrin, resulting in water contents ranging from 11.44 to 1.16 g/100 g wb, and by Andrade et al. [15] in lyophilized guava pulp with 14, 21 and 28% maltodextrin, obtaining aw of 0.150, 0.103 and 0.073, respectively.
Rahman [16] stated that products with aw < 0.6 have low availability of water to be used in biochemical reactions, which promotes prolonged stability, as long as they are packaged in a way that prevents absorption of water from the environment. This aw limit reported by the author is in line with the values obtained for all powders in the present study.
Costa et al. [17] reported a water content of 4.37 g/100 g wb in lyophilized Palmer mango pulp with 20% maltodextrin, which is lower than that of the powders in the present study. Similar values were quantified by Almeida et al. [18], evaluating lyophilized jabuticaba peels without any adjuvants, with an aw of 0.320.
The pH of the powders is characteristic of highly acidic products (pH < 4.0), resulting from the fruits used to prepare the blend. Some additives can react chemically when combined, which can also affect the pH; in the case of maltodextrin incorporation, however, there was no effect on this parameter. The samples did not differ significantly, except between BL0 (3.71) and BL10 (3.64), which still had close values. A powdered blend of acerola and pineapple lyophilized without adjuvants was also classified as highly acidic, with a pH of 3.52 [19].
The BL0 powder (control) had the highest total titratable acidity, with a significant reduction in the samples with added maltodextrin relative to the control; however, beyond a concentration of 20% maltodextrin, there was no further significant change in acidity, with BL20 and BL30 showing similar values. The high acidity of the red fruit blend powders is due to the reduction in water content, which concentrates the organic acids present in the samples, as evidenced by Barroso et al. [14]. The tendency of acidity to decrease with increasing maltodextrin concentration was also verified by Andrade et al. [15] in powders of lyophilized guava pulp with maltodextrin (14 to 28%), ranging from 1.72 to 1.09 g citric acid/100 g db.
High ash contents were quantified in the powders, and the incorporation of the adjuvant significantly reduced the mineral content, corroborating what was verified by Ermis, Guner and Yilmazz [20] in lyophilized hazelnut milk with 0, 5, 10 and 15% maltodextrin, obtaining values from 2.12 to 1.16 g/100 g db.
As the concentration of maltodextrin increased, the total sugar (TS) content of the powders increased and the reducing sugar content decreased. Because maltodextrin is a carbohydrate containing a large number of glucose units in its structure, its addition to the fruit blend increases the total amount of carbohydrates in the product. However, freeze-drying can decrease the content of reducing sugars in the final product, since the process involves the removal of water by sublimation, which can lead to the formation of reaction products involving the sugars present. Maciel et al. [21] also observed similar behavior for total sugars in lyophilized cupuaçu pulp powders with 5, 15 and 25% maltodextrin, with contents of 47.72, 55.62 and 62.32 g/100 g db, respectively.
Bioactive Compounds
The levels of bioactive compounds in the powders of the lyophilized formulations are presented in Table 2. Increasing the proportion of maltodextrin resulted in gradual, statistically significant reductions in all parameters. Part of this reduction is a dilution effect of the added material itself, considering that maltodextrin is free of bioactive compounds. It is also possible that the encapsulation promoted by the additive affected the extraction of these compounds and, consequently, their measurement. Although the addition of maltodextrin reduced the contents of most bioactive compounds, they still remain at considerably high levels. For example, in the BL30 sample, 100 g of powder still provides far more ascorbic acid than the daily intake of 45 mg recommended for adults by legislation [22]. The results obtained were higher than those reported for lyophilized Palmer mango pulp with 20% maltodextrin (90.46 mg/100 g db), evaluated by Costa et al. [17], and for lyophilized cubiu (Solanum sessiliflorum Dunal) pulp (11.86 mg/100 g), as analyzed by Oliveira, Silva and Silva [23].
For total phenolic content (TPC), the greatest decline (82.5%) relative to the control sample (BL0) was observed in the BL30 sample, which had the highest concentration of adjuvant. Identical behavior was found in the phenolic extract of wild pomegranate peel, in which the content of phenolic compounds decreased from 96.39 mg GAE/100 g (db) at a 1:1 ratio (extract:maltodextrin) to 49.33 mg GAE/100 g (db) when the additive amount was increased to 1:10 [24]. However, even the BL30 sample presents values higher than those determined in guabijú pulp (Myrcianthes pungens) lyophilized without adjuvants, studied by Detoni et al. [25], which presented levels of 2003.39 and 2179.95 mg GAE/100 g db, and values similar to those found in mixed pulp powders of jambolan and acerola produced by foam-mat drying (50-80 °C) with different additives, ranging from 5144.35 to 6999.34 mg/100 g db [26].
According to Souza et al. [27], all samples evaluated in the present study can be classified as having a high total phenolic content (above 500 mg GAE/100 g). The presence of phenolic compounds in food products is of great importance, as they are natural bioactive molecules that demonstrate antioxidant, antimicrobial, anti-inflammatory and antiproliferative activities, among others [28].
For flavonoids and anthocyanins, the reductions in the BL30 samples in relation to the BL0 control were 59 and 45.89%, respectively. Although there was a considerable reduction, the blend with 30% of the additive still showed a flavonoid content close to that determined in the grape foam powder (BRS Rúbea×IAC 1398-21) lyophilized with only half the addition of maltodextrin (15%), which was reported by the authors as 11.01 mg/100 g [29], indicating the richness in flavonoids of the powders studied in the present work.
Much higher values of anthocyanins were reported for lyophilized myrtle pulp powder with 20% maltodextrin, with levels from 145.56 to 146.12 mg/100 g db before storage [30]. However, despite the reduced content of anthocyanins in the red fruit blend powders, these pigments are derived from flavonols and are responsible for the variation in the red hue of various fruits with acidic pH, with important bioactivities beneficial to health, and are widely used as natural dyes [28].
Among the determined compounds, carotenoids showed the second-largest decline (79.63%) between BL0 and BL30, while lycopene showed a maximum reduction of 68.31%. Even with the reductions, the red fruit blend powders still had higher carotenoid contents than the lyophilized cubiu (Solanum sessiliflorum Dunal) pulp without adjuvants, which had 0.246 mg/100 g [23].
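The percentage declines quoted throughout this section (e.g., 82.5% for TPC, 68.31% for lycopene) follow the usual relative-reduction formula against the additive-free control; a one-line helper illustrates the arithmetic (the function name is illustrative):

```python
def percent_reduction(control, sample):
    """Relative decline of a compound content versus the control (BL0), in percent."""
    return 100.0 * (control - sample) / control
```

For instance, a drop from 100 to 17.5 units corresponds to an 82.5% reduction.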
Color Measurement
The averages and standard deviations referring to the colorimetric parameters of the tropical red fruit blend powders with the addition of maltodextrin obtained through lyophilization are presented in Table 3 and Figure 1, with their images.
All samples showed brightness (L*) lower than 50, on a scale from 0 (black) to 100 (white), with samples BL0 and BL10 considered the darkest and BL20 and BL30 the lightest, as can be seen in Figure 1. The addition of maltodextrin provided a statistical increase in this parameter, indicating a transition to lighter tones, a result of its characteristic brightness, greater than that of the studied pulp samples. The increase in the L* parameter is due to the use of maltodextrin, which tends to whiten the sample, in addition to diluting the pigments, thereby making it appear lighter [21].
The positive a* values are a consequence of the predominance of the red hue over the green one, and the increase in the proportion of the additive led to reductions. The intensity of yellow (+b*), the predominant hue in BL0, was also progressively reduced with the addition of maltodextrin. In their study of a blend composed of freeze-dried acerola and pineapple, Silva et al. [19] found L* of 42.85 and red (+a*) and yellow (+b*) intensities of 36.39 and 25.75, respectively. In their study characterizing flours obtained from fruit residues, Menezes Filho and Castro [31] found L* of 58.67, +a* of 12.54 and +b* of 27.60 for the flour obtained from the rind and pulp of ripe guava.
The overall effect of additive addition on the a* and b* chromaticity components is given by the chroma and hue angle. The chroma (C*) values were also statistically reduced with the increase in the proportion of maltodextrin. When these values are close to zero, they correspond to neutral colors (gray tones), while values close to 60 indicate bright colors [32]. Considering the values found, it appears that the powders tended to shift towards more grayish, neutral colors as the proportion of additive increased.
In the hue angle (h°), there was a predominance of the yellow hue in the BL0 sample (h° > 45°), transitioning to red with the increase in the percentage of maltodextrin. The hue angle assumes values of 0° for red, 90° for yellow, 180° for green and 270° for blue [33].
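The chroma and hue angle discussed above are derived from the CIELAB a* and b* coordinates by the standard colorimetric formulas C* = sqrt(a*² + b*²) and h° = arctan(b*/a*); a minimal sketch:

```python
import math

def chroma_hue(a_star, b_star):
    """Compute CIELAB chroma (C*) and hue angle (h, in degrees, 0-360)."""
    chroma = math.hypot(a_star, b_star)                 # C* = sqrt(a*^2 + b*^2)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue
```

For a red-dominant powder with a* = 30 and b* = 20, for example, this yields h ≈ 33.7°, i.e., a hue between red (0°) and yellow (90°).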
Physical Characterization
Table 4 presents the mean results and standard deviations of the physical parameters of the powders of the lyophilized formulations with different proportions of maltodextrin. The Hausner factor (HF) and the Carr index (CI) are used to evaluate the cohesion and flow properties of powders, with the HF related to the friction between the particles, while the CI indicates their aggregation capacity [34]. Powders with HF lower than 1.2 are classified as having low cohesiveness, those between 1.2 and 1.4 intermediate cohesiveness, and those with HF > 1.4 high cohesiveness. For the flow index, powders with CI values between 15 and 20% have good flow, between 20 and 35% poor flow, between 35 and 45% very poor flow, and above 45% extremely poor flow [35].
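Both indices are computed from the bulk and tapped densities of the powder by the standard definitions HF = ρ_tapped/ρ_bulk and CI = 100·(ρ_tapped − ρ_bulk)/ρ_tapped. A small sketch applying the HF cohesiveness thresholds quoted in the text (the function name is illustrative):

```python
def hausner_carr(bulk_density, tapped_density):
    """Hausner factor, Carr index (%) and the HF cohesiveness class used in the text."""
    hf = tapped_density / bulk_density
    ci = 100.0 * (tapped_density - bulk_density) / tapped_density
    if hf < 1.2:
        cohesiveness = "low"
    elif hf <= 1.4:
        cohesiveness = "intermediate"
    else:
        cohesiveness = "high"
    return hf, ci, cohesiveness
```

For example, a powder with bulk density 0.4 g/ml and tapped density 0.5 g/ml has HF = 1.25 and CI = 20%, placing it in the intermediate cohesiveness range, like most samples reported here.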
The addition of maltodextrin to the blend decreased HF and CI, with the Hausner factor (HF) falling in the intermediate cohesiveness range (between 1.22 and 1.37), except for BL30, which showed low cohesiveness. According to the Carr index (CI), the blend powders showed good fluidity (BL20 and BL30) or poor fluidity (BL0 and BL10). In their study on the effects of adding carrier agents (maltodextrin, gum Arabic and dextrin) to lyophilized red dragon fruit (Hylocereus polyrhizus) pulp powders, Alves et al. [36] observed behavior similar to that of the present study, noting HF values from 1.29 to 1.75 and CI from 22.61 to 43.00%. In their study on the effects of gum Arabic and inulin additives on the lyophilization of Hibiscus acetosella extract, Mar et al. [37] reported a Hausner factor between 1.3 and 1.4.
The angle of repose was smallest in the sample without maltodextrin, differing significantly from the other formulations, which, in turn, did not differ statistically among themselves. Rocha et al. [38] observed the same behavior when analyzing the flow of mango pulp powders containing different concentrations of maltodextrin. These results can be attributed not only to the composition of the samples but also to the way the material was pulverized: the samples were ground after dehydration, such that the sample without additive could have generated particles with less rough surfaces. The angle of repose is used to characterize the flow properties of solids, relating to interparticle friction or resistance to motion between particles. In general, the smaller the angle of repose, the better the fluidity of the powder. Geldart et al. [39] established that powders with angles of repose >50° present flow difficulties, while those with values <30° flow with good fluidity.
The addition of maltodextrin significantly affected the solubility of the blend powders, which increased in direct relation to the additive concentration. This fact is associated with the high solubility of the carrier agent used and with the particle size of the material produced (Figure 1), since the smaller the particle size, the greater the surface area available for hydration. Another aspect that must be considered is the dispersion capacity of the particles, given that less agglomeration favors solubility [40].
In their study on the effects of carrier agents at different concentrations on the physicochemical properties of lyophilized date powder, Seerangurayar et al. [41] reported a similar trend, in which the control powder showed significantly lower solubility than the powders with added maltodextrin (DE 10), whose solubility values ranged from 80 to 81%. Ribeiro et al. [42] found a higher value than that of the present work, i.e., 94%, for the solubility of lyophilized acerola pulp powder with 19% maltodextrin (DE 20). In addition to the high solubility of maltodextrin, the dehydration method applied can generate a more porous product with greater rehydration capacity.
The wettability was higher in the sample without additive, decreasing with the increase in the percentage of maltodextrin incorporated. Similar behavior was described by Andrade et al. [15] for guava (Eugenia stipitata) pulp lyophilized with different concentrations of maltodextrin (14, 21 and 28%). The reduction in the wettability of the samples with maltodextrin can be attributed to the lower wettability of the additive itself relative to the blend, resulting in longer immersion times as the percentage of additive increases. Because the BL0 sample has no additive in its formulation, it retains a higher water content, which in turn favors an increase in particle size. According to Custodio et al. [43], a larger particle size results in a shorter wetting time.
The porosity of the powders with the addition of maltodextrin was reduced in relation to the control, reaching the lowest value in the BL30 sample. The reduction of porosity in stored powders is important for the preservation of their characteristics, since powders with high porosity have many empty spaces, allowing a greater presence of oxygen, which can trigger oxidation reactions [44]. Thus, maltodextrin has a potentially protective effect during storage.
Raw Materials and Processing
Acerola (Malpighia emarginata), guava (Psidium guajava) and pitanga (Eugenia uniflora L.) fruits were collected between January and March 2020 in the municipalities of Petrolina. The fruits were selected at the maturation stage at which the acerola and pitanga had completely red skin and the guava had completely yellow skin, with no visible injuries. They were then washed in running water to eliminate foreign materials and sanitized by immersion in chlorinated water (50 ppm) for 15 min, followed by rinsing in potable water to remove excess sanitizer. They were then pulped in a horizontal pulper (Laboremus®, model DF-200, Campina Grande, Paraíba, Brazil). The three whole pulps obtained (acerola, guava and pitanga) were used to prepare the blend of red fruits at a ratio of 1:1:1 (g/g) (this proportion was determined through preliminary tests), with the pulps weighed individually and homogenized for 2 min in a domestic blender (2000 rpm) (Arno®, Power Mix model, Itapevi, São Paulo, Brazil).
Initially, the four formulations with the blend were placed in plastic ice trays (22.9 × 14.6 × 3.8 cm) and frozen in a freezer (−18 °C) for 48 h. After this step, the materials were taken to the lyophilizer (Liobras®, model BL101, São Carlos, São Paulo, Brazil) and kept in the equipment at −50 °C (<500 µHg) for 72 h.
After the lyophilization of the samples, they were disintegrated with the aid of a mortar and pestle, and then the physicochemical, physical, bioactive compounds and color analyses were performed.
Physicochemical Characterization
The physicochemical analyses of the blend powders were performed in quadruplicate using the methodologies proposed by the AOAC [45]: water content, determined by the standard oven method (QUIMIS®, model Q319V, Diadema, São Paulo, Brazil) at 105 ± 3 °C, up to constant mass; total titratable acidity, determined by the acidimetric method using 0.1 M sodium hydroxide solution, with the results expressed as a percentage of citric acid; ash, determined by calcining the sample in a muffle furnace at 550 ± 5 °C; and the TSS/TTA ratio, estimated as the quotient of the total soluble solids and total titratable acidity values.
Water activity was determined by direct reading of the sample in a dew point hygrometer (Aqualab, model 3TE, Decagon, Washington, DC, USA) at a temperature of 25 °C.
The pH was determined by the potentiometric method [45], with a pH meter (Tecnal, model TEC-2, São Paulo, São Paulo, Brazil), previously calibrated with buffer solutions of pH 4.0 and 7.0.
The total soluble sugar contents (g/100 g) were determined using the methodology of Yemm and Willis [46], and the reducing sugar contents (g/100 g) using the methodology of Miller [47]. Both analyses were performed using a spectrophotometer (Coleman®, model 35 D, Santo André, São Paulo, Brazil).
Bioactive Compounds
The ascorbic acid content (mg ascorbic acid/100 g) was determined based on the protocol by Oliveira, Godoy and Prado [48]; the total phenolic content (TPC) was quantified by the method described by Waterhouse [49]; total flavonoids (mg/100 g) and total anthocyanins (mg/100 g) by the methodology described by Francis [50]; total carotenoids (g/100 g) according to Lichtenthaler [51]; and lycopene according to Nagata and Yamashita [52]. All absorbance readings during the analyses were performed in a spectrophotometer (Coleman®, model 35 D, Santo André, São Paulo, Brazil).
Color Measurement
Colorimetric parameters were evaluated using a portable spectrophotometer (MiniScan, Hunter Lab XE Plus, model 4500 L, Hunter Associates Laboratory, Reston, VA, USA). Color coordinate readings were performed using the CIELAB system: L* (brightness), a* (transition from green to red) and b* (transition from blue to yellow); the hue angle (h*) and the color saturation or chroma (C*) were calculated according to Equations (1) and (2), respectively:

h* = tan⁻¹(b*/a*) (1)

C* = √(a*² + b*²) (2)
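Equations (1) and (2) are the standard CIELAB conversions from the rectangular coordinates a* and b* to polar hue and chroma. A minimal sketch (using `atan2`, which keeps the hue angle well defined in all quadrants, unlike the one-argument arctangent of Eq. (1)):

```python
import math


def hue_chroma(a_star: float, b_star: float) -> tuple[float, float]:
    """Hue angle h* (degrees, 0-360) and chroma C* from CIELAB a* and b*."""
    h = math.degrees(math.atan2(b_star, a_star)) % 360.0  # Eq. (1)
    c = math.hypot(a_star, b_star)                        # Eq. (2)
    return h, c


# A red-leaning powder with a* = 30 and b* = 15 (illustrative values) gives
# h* ~ 26.6 degrees (red-orange region) and C* ~ 33.5.
h, c = hue_chroma(30.0, 15.0)
```

For the red fruit blend powders, h* near 0° corresponds to the red axis and C* grows with color intensity, which is why both parameters are reported alongside L*, a* and b* in Table 3.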
Physical Characterization
Solubility was determined according to the method described by Cano-Chauca et al. [53].
The wettability was determined using the Schubert method [54], expressed by the ratio between the mass (g) and the time required for the sample to disappear from the surface (min).
The Carr index (CI) and the Hausner factor (HF) were determined using the methodology of Santhalakshmy et al. [35], calculated from apparent density (ρap) and compacted density (ρc) data according to Equations (3) and (4):

CI = ((ρc − ρap)/ρc) × 100 (3)

HF = ρc/ρap (4)

For the evaluation of the angle of repose, the methodology described by Aulton [55] was used. The porosity was calculated from the relationship between the apparent density and the absolute density.
The apparent density was determined from the relationship between the mass and the volume occupied in the cylinder, according to Goula and Adamopoulos [56]. The results are expressed in g/cm³. To determine the compacted density, the method described by Tonon [57] was used. The absolute density was determined using a pycnometer, with hexane as the immiscible liquid, at a temperature of 25 °C. With the obtained data, the relationship between the sample mass and the volume of the pycnometer was calculated.
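The density measurements above feed the flow and porosity calculations directly. The sketch below (illustrative values, not the study's data) implements Eqs. (3) and (4), together with the standard apparent/absolute-density porosity relation the text alludes to; that porosity formula is an assumption based on the standard definition, since the source does not spell it out.

```python
def carr_index(rho_ap: float, rho_c: float) -> float:
    """Carr index (%) from apparent (rho_ap) and compacted (rho_c) density, Eq. (3)."""
    return (rho_c - rho_ap) / rho_c * 100.0


def hausner_factor(rho_ap: float, rho_c: float) -> float:
    """Hausner factor from apparent and compacted density, Eq. (4)."""
    return rho_c / rho_ap


def porosity(rho_ap: float, rho_abs: float) -> float:
    """Porosity (%) from apparent and absolute (pycnometric) density.

    Assumed standard relation: eps = (1 - rho_ap / rho_abs) * 100.
    """
    return (1.0 - rho_ap / rho_abs) * 100.0


# Hypothetical densities in g/cm3:
ci = carr_index(0.45, 0.58)       # ~22.4%  -> in the 20-35% CI band
hf = hausner_factor(0.45, 0.58)   # ~1.29   -> intermediate cohesiveness
eps = porosity(0.45, 1.50)        # 70.0%
```

Note that CI and HF are algebraically linked (CI = (1 − 1/HF) × 100), which is why the two indices move together in Table 4.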
Statistical Analysis
All analyses were performed in quadruplicate and the data obtained were statistically evaluated using the Assistat ® software version 7.7, with the experiments carried out in a completely randomized design (CRD) and with the comparison between means by Tukey's test at a significance level of 5% [58].
Conclusions
All powders obtained from the tropical red fruit blend have high levels of bioactive compounds. Increasing the maltodextrin concentration during lyophilization of the blend promotes positive effects on the powders, such as reductions in water content, water activity and porosity, and provides better flow, cohesiveness and solubility characteristics. Among the evaluated formulations, BL20 and BL30, with the highest concentrations of maltodextrin, are likely to have better stability during storage, with better flow characteristics.
Table 1 .
Mean values and standard deviations of the physicochemical parameters of the lyophilized tropical red fruit blend powders with the addition of maltodextrin.
Table 2 .
Mean values and standard deviations of bioactive compounds in lyophilized red fruit blend powders with added maltodextrin.
Table 3 .
Mean values and standard deviations of the colorimetric parameters of lyophilized tropical red fruit blend powders with added maltodextrin.
Table 4 .
Mean values and standard deviations of physical parameters of lyophilized tropical red fruit blend powders with added maltodextrin.
Immune-related adverse events with immune checkpoint inhibitors affecting the skeleton: a seminal case series
Background The use of immune checkpoint inhibitors is increasing in cancer therapy today. It is critical that treatment teams become familiar with the organ systems potentially impacted by immune-related adverse events associated with these drugs. Here, we report adverse skeletal effects of immunotherapy, a phenomenon not previously described. Case presentations In this retrospective case series, clinical, laboratory and imaging data were obtained in patients referred to endocrinology or rheumatology with new fractures (n = 3) or resorptive bone lesions (n = 3) that developed while on agents targeting PD-1, CTLA-4 or both. The average age of patients was 59.3 (SD 8.6), and five were male. Cancer types included melanoma, renal cell carcinoma and non-small cell lung cancer. All fracture patients had vertebral compression, and two of the three had multiple fracture sites involved. Sites of resorptive lesions included the shoulder, hand and clavicle. Biochemically, elevated or high-normal markers of bone resorption were seen in five of the six patients. Erythrocyte sedimentation rate was elevated in three of the four patients where checked. Conclusions This case series represents the first description of potential skeletal adverse effects related to immune checkpoint inhibitors. These findings are important for providers caring for patients who experience musculoskeletal symptoms and may merit additional evaluation.
Background
Immune checkpoint inhibitors (ICIs) are widely considered to be a therapeutic breakthrough for cancer. Antibodies targeting immunoregulatory molecules such as programmed death-1 (PD-1), its ligand PD-L1, and cytotoxic T-lymphocyte associated protein 4 (CTLA-4) are in widespread use for the treatment of lung, gastric, bladder, kidney, urothelial, head and neck, hepatocellular, and mismatch repair deficient/microsatellite instability-high cancers. These agents modulate host immune responses principally by activating cytotoxic T-cells that are responsible for tumor cell destruction [1]. As these therapies continue to demonstrate efficacy in clinical trials and, consequently, garner approval for an increasing number of indications, ICI use is expected to increase in the years to come.
Toxicities associated with ICIs, often referred to as immune-related adverse events (irAEs), have been reported in nearly every organ system. The mechanisms that underlie irAE development are poorly understood, but they are likely due to increased systemic inflammation caused by ICI therapy, resulting in autoimmune responses as well as dysregulation of T-cell self-tolerance [2]. More commonly recognized irAEs include colitis, hepatitis, pneumonitis, thyroiditis, hypophysitis and skin rash [3]. Rheumatologic irAEs have been reported, including inflammatory arthritis, myositis, and polymyalgia rheumatica-like syndromes [4][5][6][7][8]. Absent from the literature to date are descriptions of ICI effects on the skeleton.
The important interaction between the immune system and bone is increasingly appreciated [9,10]. Studies of pro-inflammatory states demonstrate that alterations in T-cell mediated cytokines favor bone resorption [11][12][13][14][15][16]. We therefore hypothesize that immune activation induced by ICIs may adversely impact T-cell-mediated skeletal remodeling, leading to bone erosion and/or diffuse loss. To our knowledge, this report represents the first case series describing skeletal irAEs associated with ICIs. Among six patients treated with ICIs, we observed two distinct skeletal phenotypes: 1) new-onset osteoporosis leading to fracture, and 2) localized bony resorption. Herein, we briefly describe each patient's treatment history, irAE presentation, and clinical outcome.
Patients and methods
Included in this series are patients evaluated and treated at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins Hospital who were referred to the endocrinology or rheumatology services for new skeletal issues (osteoporosis/osteopenia, pathologic fractures, and destructive or resorptive bone lesions) that arose during treatment with one or more ICIs, administered as standard-of-care or as a part of a clinical trial. Patient and tumor features including medical history, tumor histology, cancer therapies, and use of concomitant medications (including bisphosphonates or RANK ligand inhibitors) were collected. Risk factors for bone loss were gathered from clinical assessment and review of the electronic medical record including: focal bone radiation, family history of osteoporosis, tobacco or alcohol abuse, renal disease and prolonged corticosteroid use.
Laboratory data obtained as part of clinical care included markers of bone resorption and formation, inflammatory markers, serum calcium and phosphorus, parathyroid hormone, and 25-hydroxy-vitamin D. Radiologic imaging data were obtained as clinically indicated. Where available, pathologic data from bone biopsies were reviewed. Patients with preexisting pathologic fracture(s), metabolic bone disease, osteoporosis, inflammatory arthritis or other autoimmune diseases were excluded.
Results
Six patients with skeletal irAEs were identified: three with new osteoporotic fractures and three with focal bone resorptive lesions. Patient features are summarized in Table 1. Patients were 51-75 years old at the time of the skeletal event, and five were male. Cancer diagnoses included metastatic melanoma (n = 4), renal cell carcinoma (n = 1), and non-small cell lung cancer (n = 1). Four patients were treated with anti-PD-1 monotherapy, while two patients received ipilimumab (anti-CTLA-4) plus nivolumab combination ICIs. Only one of the six patients experienced an additional irAE not related to the musculoskeletal system. All patients with fractures (n = 3) experienced the skeletal event within the vertebral column; one had additional rib and pelvic fractures. No patient with fractures had osteoporosis by the dual-energy X-ray absorptiometry (DXA) definition (i.e., all T-scores > −2.5). All three patients with destructive or resorptive bone lesions had concomitant inflammatory arthritis in separate joints that developed while receiving ICI therapy. No patient in this series developed both skeletal phenomena.
Spontaneous fractures / new onset osteoporosis
Three patients without a prior diagnosis of osteoporosis were identified with new vertebral compression fractures (Fig. 1a). In the first patient, the fractures prompted discontinuation of pembrolizumab after 18 months of therapy. The patient's biochemical workup was unremarkable. His degree of active bone resorption, as measured by C-telopeptide (CTX) levels (Table 2), was elevated despite three weeks of alendronate use prior to the appointment. Bone density at the hip (the lumbar spine was excluded in the setting of fracture) demonstrated osteopenia only. Histomorphometry from a transiliac bone biopsy (Fig. 1b) revealed bone resorption (increased eroded surface and osteoclast surface) and bone loss (reduced trabecular and cortical parameters). Given the patient's continued bone loss on an oral bisphosphonate, he received one infusion of an intravenous bisphosphonate (zoledronic acid), underwent multiple kyphoplasty procedures, and permanently discontinued pembrolizumab. At present, his melanoma continues to be in complete remission 35 months after commencement of pembrolizumab, and after therapy has been held for 20 months. The patient continues to receive yearly IV bisphosphonate in the form of zoledronic acid.
Patient 2
Patient 2 is a 52-year old male who was originally diagnosed in 2011 with a localized BRAF V600E-melanoma of the left flank, and was treated with wide local excision (Breslow thickness: 2.8 mm) and adjuvant interferon alpha. Unfortunately he developed recurrent disease in 2014 with new lung metastases, and was treated with high-dose interleukin-2 (IL-2). His disease progressed through this therapy, with the development of new osseous metastases in the axial and appendicular skeleton.
He was subsequently treated with nivolumab in combination with IL-21 on a prospective clinical trial for 8 cycles of combination therapy, followed by nivolumab monotherapy. He went on to have a near complete response to ICI therapy by RECIST 1.1, with his known osseous metastases in the ribs, pelvis, femur, humerus and vertebral bodies L3/L4 showing sclerotic change consistent with treatment response. No skeletal radiation was administered. Given his near complete response, ICI therapy was discontinued. Seven months following the cessation of therapy, the patient developed new brain metastases, pulmonary metastases, and a paraspinal metastasis at S3. The patient was treated with stereotactic radiosurgery (SRS) of the paraspinal mass and brain and was initiated on second-line dabrafenib and trametinib. After 8 months, there was an interval increase in the size of the S3 paraspinal mass, and nivolumab was re-challenged. The patient went on to receive 9 months of additional ICI therapy, at which time the first vertebral fracture, not associated with a metastatic lesion, was detected. The patient's cancer was deemed stable at all known sites of disease at that time. Specifically, on surveillance CT imaging, compression deformities of T2-5 were identified, with new compression fractures noted at T6-12 and L1 at the time of the clinic visit and vertebral fracture assessment. There was only one sclerotic lesion in the thoracic spine (T7) identified as a metastatic focus of disease; the remaining compression fractures developed in the absence of skeletal metastases. The patient's biochemical evaluation was unremarkable.
Bone density testing showed only osteopenia at the femoral neck. For treatment, he received denosumab injections every 6 months. At that time, he commenced third-line ipilimumab/nivolumab combination therapy. While the patient did not suffer additional fractures, his melanoma progressed, and he passed away 7 years after the initial diagnosis.
Patient 3
Patient 3 is a 58-year old male diagnosed with BRAF-negative melanoma of the left ear in 2014 (stage 0), with progression to stage IV metastatic disease of the lung in 2016. The patient received first-line therapy with single-agent pembrolizumab for 10 months with an excellent response, at which time a restaging CT indicated abnormalities of the thoracic and lumbar vertebral bodies. He carried no prior history of fracture, and no spinal metastases were identified. A comprehensive review of his outside imaging revealed an age-indeterminate T12 compression fracture sustained prior to ICI, with an adjacent T11 compression deformity appearing after approximately 10 months of pembrolizumab therapy. Increased prominence of biconcave deformities of the vertebral bodies was also noted during therapy, indicating osteopenia (Fig. 1b) [17]. Given the patient's response to therapy, pembrolizumab was discontinued after 12 months, though he was referred to the Metabolic Bone Center for continued skeletal evaluation and management. At the time of evaluation, his laboratory testing showed calcium and vitamin D deficiency. Markers of bone formation and resorption were considered normal for the patient's sex and age and not suggestive of a high bone-loss state. Bone density testing revealed only low bone density at the hip, but no frank osteoporosis. Following optimization of calcium and vitamin D status through diet and supplementation, the patient returned to clinic with updated laboratory testing. His biochemical profile indicated improved calcium and vitamin D indices as well as stable markers of bone formation and resorption. Repeat bone density testing also revealed no significant change in bone density at the hip or spine. An extensive discussion was had with the patient regarding the risks and benefits of antiresorptive medications (oral/parenteral bisphosphonate vs. denosumab) in patients with vertebral fracture.
He has elected to defer management beyond calcium, vitamin D and lifestyle optimization, given that he is no longer taking pembrolizumab and his skeletal condition has been stable 1 year after ICI cessation. He will return to the Metabolic Bone Center yearly for ongoing surveillance with biochemical testing and DXA imaging. In the event of ongoing bone loss or new fracture, we will revisit initiation of antiresorptive therapy. Oncologically, the patient has stable disease and has experienced neither a complete response nor progressive disease (non-CR, non-PD).
Resorptive bone lesions
Three patients had new destructive or resorptive-appearing bony lesions that were not consistent with metastases. Two patients (patients 4 and 5) had their lesions discovered due to pain and/or swelling in the affected area prompting subsequent imaging, while patient 6 had the lesions discovered incidentally on PET scan. All three patients had concomitant inflammatory arthritis affecting separate joints, with signs of systemic inflammation. Rheumatoid factor, anti-cyclic citrullinated peptide antibodies, and anti-nuclear antibodies were negative in patients 4 and 5; patient 6 did not have autoantibody testing. All patients had increased bone resorption markers (CTX, Table 2).

Patient 4

Patient 4 is a 59-year old male diagnosed with stage IV melanoma involving the liver only. He was treated with the first-line ipilimumab and nivolumab combination and experienced two irAEs (hypophysitis after 2 months of ICI therapy, and pneumonitis after 3 months, with a second pneumonitis episode 5 months after ICI start). Eight months after ICI start, the patient developed progressive symptoms of shoulder discomfort and impaired mobility. Imaging showed a destructive lesion with surrounding bone marrow edema affecting the humeral head and the glenoid (Fig. 2a). He had an extensive evaluation of his destructive shoulder lesion for potential infection or metastasis. Two separate bone biopsies showed only a mixed inflammatory infiltrate; he was started on a corticosteroid taper by his oncologist. Upon evaluation by rheumatology, his inflammatory markers were elevated, and he had synovitis in the small joints of the hands and wrist, consistent with inflammatory arthritis. Based on his inflammatory arthritis, bone biopsies showing sterile inflammation, and elevated inflammatory markers, he was started on therapy with adalimumab, a TNF-inhibitor. No new bony lesions developed after discontinuation of immunotherapy, and his arthritis and shoulder pain improved with adalimumab therapy.
His melanoma remains in remission after 16 months of TNF-inhibitor therapy.
Patient 5
Patient 5 is a 60-year old female who was diagnosed with stage IV clear-cell renal cell carcinoma with metastases to the lungs, brain, and bones (vertebrae, forearm). The patient was treated with first-line nivolumab and received whole-brain radiation therapy, with stable disease by RECIST 1.1 after 6 doses of therapy. After 18 months of nivolumab therapy, she developed new-onset right wrist swelling and stiffness. Symptoms of pain and stiffness were not severe, but a radiograph of the right wrist and hand showed resorption of two entire carpal bones and changes typical of inflammatory arthritis, namely periarticular osteopenia of the metacarpophalangeal and proximal interphalangeal joints (Fig. 2b). She was briefly started on a clinical trial of nivolumab and an anti-LAG3 agent, but this was discontinued due to disease progression. Her first evaluation at our center was after starting this clinical trial. At that point, she was found to have inflammatory arthritis involving the knees and the metacarpophalangeal and proximal interphalangeal joints. Per the patient's recall, the inflammatory arthritis symptoms in joints other than the wrists started 2 months after the initial wrist swelling. Prednisone 10 mg daily was started with improvement in joint swelling, but the patient developed worsening brain metastases and entered hospice care. Further evaluation and management were not pursued in light of her progressive disease, and the decision was made to transition to palliative care.
Patient 6
Patient 6 is a 51-year old male treated with neoadjuvant ipilimumab and nivolumab for stage II lung adenocarcinoma as part of a trial for early-stage NSCLC. A pre-operative PET-CT demonstrated a new FDG-avid lesion in the distal right clavicle. An MRI of this area showed bone marrow edema, joint effusion and subchondral cysts in the distal clavicle without a discrete skeletal lesion. One month later, he developed swelling and pain in his metacarpophalangeal and proximal interphalangeal joints consistent with inflammatory arthritis. He concurrently developed progressive immobility in the left elbow over a period of 6 months, leading to a fixed flexion deformity. CT imaging with IV contrast showed no metastasis, but significant joint space narrowing, sclerosis, osteophyte formation and subchondral cysts in the ulnohumeral and radiocapitellar joints. Ultrasound examination of the same joint showed Doppler-positive synovitis surrounding the areas of bony hypertrophy. Unfortunately, at the time of surgery he was found to have pleural metastases. The patient was subsequently treated with systemic chemotherapy for recurrent NSCLC.

Fig. 2 a Patient 4, MRI left shoulder with erosive changes of the glenohumeral articulation (arrow). b Patient 5, X-ray of the right hand with hamate and capitate resorption (arrow).
Discussion
This series describes two different skeletal phenomena observed in patients undergoing anti-PD-1 or anti-PD-1/CTLA-4 therapy for malignancy: new-onset fractures and resorptive bone lesions. At first, the processes may seem discrete; however, both conditions speak to the potential influence that immune activation has on bone metabolism.
To date, there is little within the literature to explain how ICIs influence bone metabolism. Investigations into pro-inflammatory disease states confirm a direct relationship between the immune system and bone metabolism. From studies of rheumatoid arthritis (RA), postmenopausal osteoporosis, periodontal disease and immune reconstitution with anti-retroviral therapy for the treatment of HIV, it is recognized that activated T-cells are intimately involved in skeletal remodeling [11,18,19]. In pro-inflammatory states such as RA that are associated with bone loss, levels of cytokines including IFN-γ, TNF-α and IL-6 are increased, favoring osteoclast formation and maturation over osteoblastogenesis. In these inflammatory diseases, T-cells also produce Receptor Activator of Nuclear Factor-κB Ligand (RANK-L) and its physiological inhibitor, osteoprotegerin (OPG), the balance of which regulates osteoclastogenesis [20,21].
Following T-cell activation, RANK-L expression is up-regulated leading to a greater ratio of RANK-L/OPG, promoting bone loss [22,23].
Like the aforementioned conditions, ICI therapy promotes a pro-inflammatory state. In the setting of ICIs, activated T-cells secrete cytokines that assist in tumor cell destruction [24]. Cytokines implicated in promoting an anti-tumor effect include TNF-α, IL-1, -4, -6, IL-17 and IFN-γ. These same factors have been implicated in unfavorable skeletal remodeling states, as discussed above, where bone-resorbing osteoclasts overwhelm bone-building osteoblasts. The events described in this series represent the possible adverse effects of the pro-inflammatory environment and T-cell activation on bone metabolism due to the use of ICIs. The relevance of TNF-α, IL-6, and RANK-L in similar disease states suggests the potential utility of targeting these molecules in therapy for bone disease due to ICIs; approved drugs exist that target TNF-α, the IL-6 receptor, and RANK-L. Additionally, bisphosphonate therapy, using anti-osteoclastic drugs widely employed in cases of systemic osteoporosis or bone loss attributable to other disease states, could also be considered. Ultimately, additional research is needed to identify the mechanism of ICI-mediated bone loss and skeletal resorption; the pathophysiology of this disease process will then drive pharmacologic intervention.
Immune checkpoint inhibitor-induced inflammatory arthritis has been described in several studies and shares some features with RA and/or spondyloarthritis (SpA) [6,25]. Erosive disease in affected joints can occur in RA or SpA, as can generalized osteoporosis [26]. There may be parallels between the pro-inflammatory state of traditional forms of inflammatory arthritis like RA and the skeletal irAEs of immunotherapy. That inhibitors of TNF, IL-1, IL-6, and IL-17 have all been shown to decrease the progression of bony lesions in RA and SpA further implicates these effector pathways in a pro-inflammatory state favoring osteoclast activation. One proposed etiology of the resorptive lesions described in this report is localized inflammation, like a sterile osteitis, leading to osteoclast activation and bone resorption.
While our study describes thought-provoking findings, it is limited by its retrospective nature and small sample size. Included patients were only those referred to physicians in endocrinology or rheumatology for skeletal events identified through symptomatic presentation or opportunistic imaging. Patients with asymptomatic bone loss, occult and progressive fractures, or mild lesions may not have prompted endocrinology or rheumatology referral. The included patients had different demographic profiles, tumor types and stages of disease, and were receiving different therapeutic ICI regimens at the time of their skeletal event, presenting challenges in drawing clear associations between these factors and the development of skeletal irAEs. Because laboratory and imaging studies were obtained for the purposes of clinical care rather than a research protocol, not all patients had the same evaluation. For example, in cases of fracture detected on CT surveillance imaging for routine oncologic management, DXA evaluation followed for a general assessment of BMD; vertebral fracture assessment was obtained, where appropriate, to best characterize compromised vertebral bodies. In cases of focal skeletal resorption with localized bone and joint symptoms, MRI was the imaging modality of choice for diagnostic assessment. We anticipate that as additional cases of skeletal AEs arise in the setting of ICI therapy, such findings will drive specific imaging and biochemical work-up protocols tailored to the presenting symptoms and the degree of suspicion for fracture or resorptive lesion.
Conclusion
Despite the limited numbers, our observations support the identification of a new class of skeletal-related irAEs. Future areas of study for these newly appreciated clinical phenomena will also include assessment of risk factors for development of skeletal irAEs. Such risk factors might include pre-existing osteoporosis/osteopenia, fragility fractures, concomitant inflammatory arthritis or other autoimmune sequelae. Genetic or environmental factors and their role in the skeletal sequelae of immunotherapy may also bear consideration. In addition, distinction between generalized bone loss leading to fracture and focal bone resorption will need to be undertaken. As our clinical experience with ICIs expands and irAE prevalence increases, additional bony manifestations may be identified, though additional research is required.
Availability of data and materials All data and materials used in this case report have been included in this manuscript.
Authors' contributions KM and LC drafted the manuscript and figures. KM, LC, PF, EL, WS, GG and JN provided clinical care for the patients. All authors were involved with manuscript editing and revision, and all authors approved the final version.
Ethics approval and consent to participate Not applicable.
Consent for publication
Clinical, imaging and laboratory data were obtained under an IRB-approved protocol at Johns Hopkins (IRB00144013).
Minimum acceptable diet and its associated factors among children aged 6–23 months in Lalibela, northeast Ethiopia: a community-based cross-sectional study
The first 2 years of life are a critical window of opportunity for ensuring optimal child growth and development. In Ethiopia, the reported magnitude of the minimum acceptable diet ranges from 7 to 74⋅6 %, revealing wide variation and inconsistent data on its prevalence. Therefore, the present study aimed to assess the minimum acceptable diet and its associated factors among children aged 6–23 months in Lalibela town administration, northeast Ethiopia. A community-based cross-sectional study was conducted in Lalibela town administration, northeast Ethiopia, among 387 mothers/caregivers with children aged 6–23 months from May 1 to 30, 2022. The data were entered with Epidata version 3.1 and analysed with SPSS version 25.0. A multivariable binary logistic regression model was fitted to identify factors associated with minimum acceptable diet. The degrees of association were assessed using adjusted odds ratios with 95 % confidence intervals and a P-value of less than 0⋅05. The magnitude of minimum acceptable diet in the study area was 16⋅7 % (95 % confidence interval: 12⋅8–20⋅6 %). Sex of the child, receiving infant and young child feeding counselling at antenatal care, infant feeding practice-related knowledge and childhood illness were found to be independent predictors of minimum acceptable diet. Health facilities should strengthen infant feeding counselling on the recommended minimum acceptable diet, starting from antenatal care visits during pregnancy.
Background
Minimum acceptable diet (MAD) is the proportion of children aged 6-23 months who had at least minimum meal frequency and minimum diversified diet during the previous day (1,2) . Infants and young children (IYC) are vulnerable to malnutrition because of their high nutritional requirements for growth and development and they are particularly vulnerable during the transition period when complementary feeding begins, at 6 months (3) . Despite some improvements in selected nutrition indicators, progress is insufficient to meet the 2025 global nutrition targets (4) .
Adequate nutrition during infancy and early childhood is fundamental to the development of each child's full human potential. It is well recognised that the period from birth to 2 years of age is a 'critical window' for the promotion of optimal growth, health and behavioural development (5) .
More than two-thirds of malnutrition-related child deaths are associated with inappropriate feeding practices during the first 2 years of life (17) . Globally, ensuring optimal complementary feeding can avert a substantial proportion of childhood deaths (18) . An undernourished child has a nine times higher risk of mortality compared with an optimally nourished child. Almost half (45 %) of all child deaths result from the effects of malnutrition, with the highest figure in Africa (19) .
Previous studies conducted elsewhere on factors associated with inappropriate complementary feeding practices of children aged 6-23 months show higher maternal and paternal education, better household wealth, exposure to media, adequate antenatal and post-natal contacts, child's sex and age, institutional delivery, low parity, maternal occupation, urban residence, knowledge and frequency of complementary feeding and receiving feeding advice in immunisation as determinant factors for appropriate complementary feeding (20)(21)(22)(23)(24)(25)(26)(27)(28) .
The Ethiopian government also set targets to improve the nutritional status of children and to end child malnutrition by 2030 through implementing different programmes and strategies such as the National Nutrition Program (NNP) (29) , Health Sector Transformation Plan (HSTP) (30) , Health Extension Program (HEP) (31) , Sustainable Undernutrition Reduction in Ethiopia (SURE) (32) and Seqota declaration (33) . Despite efforts done by the Ethiopian Government and other stakeholders, only 7 % of children aged 6-23 months have met the MAD (34) .
Even though studies have been conducted on the determinants of optimal complementary feeding practices in Ethiopia, insufficient information has been documented on MAD practice and its associated factors independently, and most of the studies were not representative, especially of rural communities (1,(14)(15)(16) . Moreover, to the researchers' knowledge, no documented data were available specifically for the study area. Therefore, the present study aimed to assess MAD practice and its associated factors among children aged 6-23 months in Lalibela, northeast Ethiopia.
Study area, design and period
A community-based cross-sectional study was conducted in Lalibela town administration, northeast Ethiopia from May 1 to 30, 2022. Lalibela is found in the North Wollo zone, 300 km from Bahir Dar town, the capital city of the region, and 701 km from Addis Ababa. The town has a total population of 42 975 people, of whom 21 057 are males and 21 918 are females, with an estimated 1528 children aged 6-23 months. There are 5 Kebeles in the town based on the Lalibela town health office 2021 report. The town has one general hospital, one health centre and six health posts.
Lalibela and its surroundings, called Lasta, remain in use by the Ethiopian Orthodox Christian Church to this day, and the town remains an important place of pilgrimage for Ethiopian Orthodox worshipers and a home to clergy, which increasingly brings together religious adherents. The study area experiences household food insecurity, and coping strategies are temporary and not systematically linked to the ever-increasing climate change and other related hazards, leaving households more vulnerable to food insecurity. To this day, the area is also a site of long-term development interventions through the Government of Ethiopia-led Productive Safety Net Program (PSNP), which aims to reduce chronic food insecurity (35) .
Source population and study population
All mothers/caregivers with children aged 6-23 months in Lalibela town administration were the source population. Mothers/caregivers with children aged 6-23 months who lived in the randomly selected Kebeles were the study population. Individual mothers/caregivers with children aged 6-23 months who lived in the randomly selected Kebeles and participated in the data collection interview were the study units. Mothers/caregivers with children aged 6-23 months on therapeutic feeding were excluded from the study.
Sample size determination
The sample size was calculated for the first objective using the single population proportion formula, considering the proportion of MAD in Mareka District, southern Ethiopia, where 35⋅5 % of children aged 6-23 months met the recommended MAD (15) , a 95 % confidence level (Z = 1⋅96) and a 0⋅05 margin of error; the sample size became 352. After adding a 10 % non-response rate, the final sample size was 387.
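The calculation above can be reproduced directly; all inputs (p = 0⋅355 from the Mareka District study, d = 0⋅05, Z = 1⋅96, 10 % non-response) come from the text:

```python
import math

# Single population proportion formula: n = Z^2 * p * (1 - p) / d^2
p = 0.355   # MAD proportion from the Mareka District study
d = 0.05    # margin of error
z = 1.96    # Z-score for a 95 % confidence level

n = math.ceil(z**2 * p * (1 - p) / d**2)   # 352
n_final = round(n * 1.10)                  # add 10 % non-response -> 387
```

Rounding the 10 % inflation (352 × 1⋅1 = 387⋅2) down reproduces the reported final sample of 387.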
Sampling technique
There are 5 Kebeles in the Lalibela city administration, out of which 2 Kebeles (with an estimated 720 study participants) were randomly selected by the lottery method. To give each mother-child pair an equal chance of selection, a proportional allocation technique was employed across the selected Kebeles. Finally, a systematic sampling technique was applied, selecting every other lactating mother as a study participant. If there was more than one mother-child pair in a household unit, the mother with the youngest child was selected. In the case of twins, the lottery method was used to select the study participant.
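The two-stage scheme described above can be sketched as follows. The Kebele names and sizes below are hypothetical; only the total of roughly 720 eligible pairs, the final sample of 387 and the every-other-mother rule come from the text:

```python
import random

def proportional_allocation(kebele_sizes, total_n):
    """Allocate the total sample across Kebeles in proportion to their size."""
    total = sum(kebele_sizes.values())
    return {k: round(total_n * s / total) for k, s in kebele_sizes.items()}

def systematic_sample(frame, k=2, start=None):
    """Select every k-th mother-child pair from the sampling frame
    (k = 2 here, i.e. every other eligible mother), with a random start."""
    if start is None:
        start = random.randrange(k)
    return frame[start::k]

# Hypothetical split of the ~720 eligible pairs across the 2 selected Kebeles
allocation = proportional_allocation({"Kebele A": 400, "Kebele B": 320}, 387)
# {'Kebele A': 215, 'Kebele B': 172}
```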
Study variables
Dependent variables: Minimum acceptable diet.
Independent variables: Socio-demographic and economic characteristics, obstetrics and health service utilisation and maternal knowledge.
Operational definition
• Complementary feeding is described as the introduction of safe and nutritionally balanced solid, semi-solid or soft foods in addition to breast milk for children aged 6-23 months (18) .
• Minimum acceptable diet: Proportion of children aged 6-23 months who had at least the minimum meal frequency and the minimum diversified diet during the previous day (1,2) .
• Minimum meal frequency: Proportion of breast-fed and non-breast-fed children aged 6-23 months who received soft, solid and semi-solid foods (also including milk feeds for non-breast-fed children) the minimum number of times in the last 24 h: at least two times for breast-fed infants aged 6-8 months; at least three times for breast-fed infants and young children aged 9-23 months; and at least four times for non-breast-fed infants and young children aged 6-23 months (1,36) . The food groups used for this indicator are breast milk; grains, roots and tubers; legumes and nuts; dairy products (milk, yogurt); flesh foods (meat, fish, poultry and liver/organ meats); eggs; vitamin A-rich fruits and vegetables; and other fruits and vegetables. Any amount from these groups, regardless of quality and quantity, was considered sufficient to count.
• Minimum dietary diversity: Proportion of children aged 6-23 months who received five or more food groups out of the eight food groups in the last 24 h (2) .
• Maternal knowledge on IYCF practice: Mothers who scored above the mean on ten knowledge questions related to infant and child feeding practice were categorised as knowledgeable, and those who scored below the mean as not knowledgeable (14) .
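These definitions compose mechanically, so a child's MAD status can be derived from three facts recorded for the previous 24 h. A minimal sketch (the function name and inputs are illustrative, not taken from the study's analysis code):

```python
def meets_mad(age_months, breastfed, feeds_24h, food_groups_24h):
    """Minimum acceptable diet = minimum meal frequency AND minimum
    dietary diversity, per the operational definitions above."""
    if not 6 <= age_months <= 23:
        raise ValueError("indicator is defined for children aged 6-23 months")
    if breastfed:
        min_feeds = 2 if age_months <= 8 else 3   # 6-8 months vs 9-23 months
    else:
        min_feeds = 4                              # non-breast-fed, 6-23 months
    mmf = feeds_24h >= min_feeds                   # minimum meal frequency
    mdd = food_groups_24h >= 5                     # 5+ of the 8 food groups
    return mmf and mdd

# A breast-fed 7-month-old fed twice from five food groups meets the MAD:
meets_mad(7, breastfed=True, feeds_24h=2, food_groups_24h=5)   # True
```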
Data collection tools and procedure
Data were collected through structured questionnaires using face-to-face interviews, covering the socio-demographic characteristics of mothers, history of antenatal care (ANC) visits and other variables used to assess child-feeding practice. The questionnaire was developed by adapting items from different literature (20,(22)(23)(24)(25)(26)(27)(28)37,(38)(39)(40)(41) . The data collectors were two nurses and two midwives who were trained for 2 days on the purpose of the study and the procedures to be followed for data collection.
Data quality control
The structured questionnaire was translated into Amharic and back-translated into English by different persons to check its consistency. Two days of rigorous and extensive training were given to data collectors and supervisors on methods of obtaining consent, study objectives, contents of the questionnaire and interviewing technique prior to the pre-test. A pre-test was conducted on 5 % of the total sample size among lactating women living in an area other than the study site. Overall, data collection was monitored daily, and the questionnaires were checked for completeness and consistency at the end of each data collection day.
Data processing and analysis
Data consistency and completeness were checked throughout the data collection. Data were entered using Epidata version 3.1 and exported to SPSS version 25.0 for analysis. Descriptive statistics were used to summarise respondent characteristics and presented in narrative form and in graphs, charts and tables. For numerical variables such as age, number of children and number of ANC visits, the mean and standard deviation were calculated for normally distributed data.
Principal component analysis (PCA) was conducted to identify variables that explained high variability among household wealth responses, which were ranked into tertiles. Bartlett's test of sphericity was checked, with P < 0⋅05 taken as significant. Sampling adequacy for principal component analysis was checked with the Kaiser-Meyer-Olkin (KMO) measure, accepted if it was >0⋅5. Varimax rotation was employed during factor extraction to minimise cross-loading of items on many factors.
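The core of the wealth-index construction can be illustrated with plain NumPy. This is a simplified sketch: it extracts only the first principal component and, unlike the SPSS workflow described, omits the KMO/Bartlett checks and varimax rotation; note also that the sign of the component, and hence the poor-to-rich ordering of the tertiles, may need flipping:

```python
import numpy as np

def wealth_tertiles(assets):
    """Score households on the first principal component of standardised
    asset indicators (rows = households, columns = asset items) and rank
    the scores into tertiles (labels 0, 1, 2)."""
    X = np.asarray(assets, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)       # assumes every item varies
    _, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pc1 = X @ eigvecs[:, -1]                        # scores on the first PC
    cuts = np.quantile(pc1, [1 / 3, 2 / 3])
    return np.digitize(pc1, cuts)
```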
Binary logistic regression analysis was done to check the association between dependent and independent variables. All variables that had a significant association with P-value <0⋅25 in the bivariable analysis were selected as a candidate for multivariable logistic regression. The multivariable binary logistic regression model was fitted to identify factors affecting the MAD. Adjusted odds ratio with 95 % confidence interval and P-value less than 0⋅05 were considered statistically significant.
The results of the multivariable logistic regression were obtained with the backward method after checking model fitness with the Hosmer and Lemeshow test, which yielded a P-value of 0⋅196; as this is greater than 0⋅05, the model was considered a good fit to the data. Multicollinearity was also checked using the variance inflation factor (VIF): the maximum VIF of 1⋅43 was well below the threshold of ten, indicating no threat of multicollinearity.
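The VIF check is simple enough to reproduce from first principles: each predictor is regressed on all the others, and VIF_j = 1/(1 − R²_j). A sketch for a numeric design matrix (illustrative only; the study used SPSS):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a numeric design
    matrix: regress column j on the remaining columns (plus intercept)
    and return ss_tot / ss_res, i.e. 1 / (1 - R^2)."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = np.sum((y - A @ beta) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        factors.append(ss_tot / ss_res)
    return factors
```

A maximum VIF near 1 (as the reported 1⋅43 is) indicates predictors that are close to mutually independent; values above 10 are the conventional collinearity alarm.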
Ethical clearance
This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects/patients were approved by the ethical committee of the Zemen Postgraduate College, Department of Public Health with a reference number of ZPC/000703/ 2014. Written permission letter was also obtained from the Lalibela town administrative and health office. The participants enrolled in the study were informed about the study objectives, expected outcomes, benefits and the risks associated with it. Finally, verbal and written informed consent was taken and formally recorded from the participants before the interview. Furthermore, the confidentiality of participant's information was assured and information recorded anonymously. Those who are practicing inappropriate complementary feeding were advised to correct their complementary feeding practice.
Socio-demographic and economic characteristics
A total of 359 participants (93 % response rate) were interviewed in the study period. The mean age of the infants was 13⋅31 ± 2⋅07 (SD) months and 181 (50⋅4 %) were females. More than half (54⋅3 %) of the mothers were aged 25-34 years and most (85⋅5 %) were married. More than one-third (39⋅6 %) of mothers and 37⋅9 % of fathers had attained a college degree. More than two-thirds (69⋅4 %) of the respondents had a family size of ≤4 and 33⋅4 % had a poor wealth index (Table 1).
Obstetrics and health service utilisation
With respect to obstetrics and maternal service utilisation, the majority (76⋅6 %) of the mothers had given birth ≤2 times, 93⋅9 % attended ANC during the pregnancy of the index child and 59⋅6 % received IYCF counselling during the ANC visits. Most mothers (90 %) attended PNC, and 62⋅4 % had received IYCF counselling during their PNC visits. Almost all (92⋅2 %) delivered in a health facility, and the majority (59⋅4 %) had no media access. With respect to comorbidity, 52⋅4 % reported a history of maternal or childhood illness ( Table 2).
Factors associated with MAD
Based on the bivariable logistic regression analysis results (P-values <0⋅25), the variables selected for inclusion in the multiple regression model were child age, sex, maternal education level, getting IYCF counselling at ANC, place of delivery, IYCF practice counselling at PNC, infant feeding practice-related knowledge and childhood illness ( Table 3).
Discussion
To reduce malnutrition in a developing country like Ethiopia, adequate and safe infant and young child feeding practice is crucial. The present study demonstrated that the magnitude of MAD was 16⋅7 %. Sex of the child, receiving IYCF counselling at ANC, infant feeding practice-related knowledge and childhood illness were found to be independent predictors of MAD.
This magnitude of MAD was lower compared with studies in Mareka District, southern Ethiopia (35⋅5 %) (15) , Addis Ababa (74⋅6 %) (16) and other countries such as Ghana, Uganda and Kenya, in which 29⋅9, 23⋅9 and 48⋅5 % of children received the recommended MAD, respectively (6)(7)(8) , as well as the 2020 global nutrition report figure (18⋅9 %). The variation might be because the present study was conducted in the dry season, locally called 'winter', in which the availability of most fruits and vegetables might be low compared with other seasons, especially 'summer'. Moreover, the country-level differences might be due to differences in study design, sample size, study period and socio-demographic characteristics.
In contrast, the result was higher compared with the EDHS report of 2016, in which only 7 % of children aged 6-23 months received a MAD (41) , Dembecha, northwest Ethiopia (8⋅6 %) (9) , the rural community of Goncha District, Amhara Region, Ethiopia (12⋅6 %) (1) , the multilevel analysis of the EDHS 2016 for Ethiopia (6⋅1 %) and other countries such as Malawi (8⋅36 %), Nigeria (7⋅3 %) and the Philippines (6⋅7 %) (9)(10)(11)(12)(13) . The difference might be because the EDHS was conducted on a culturally diverse population, which may underrate child feeding practices, while this study was conducted on an almost culturally homogeneous population with similar feeding practices. Furthermore, the higher percentage of feeding practice compared with other countries might be due to variations in study design and data collection period.
The odds of achieving MAD among male children were two times higher than among females. The finding is supported by studies done in Addis Ababa (16) , West Guji Zone, Oromia, Ethiopia (42) and Sodo Zuria District (43) . This might be due to cultural and/or traditional perceptions in Ethiopia that mostly give higher priority to male babies than to females. This implies that female children should receive the same attention to their dietary intake requirements as males.
Children of mothers/caregivers who received IYCF counselling at ANC were four times more likely to achieve the MAD compared with those who did not receive counselling at ANC. The result is consistent with other studies done in Semera Logia, northern Ethiopia (44) and Gondar, northwest Ethiopia (45) . This might be because counselling and health education provide better health knowledge, attitude and practice towards the timely introduction of complementary feeding. This implies that the ANC check-up is an appropriate time to provide essential messages about proper infant feeding practices.
Mothers who had good knowledge of complementary feeding practices had three times higher odds of feeding the MAD than those with poor knowledge. A supporting systematic review was found in East Africa (46) . This might be because improving caregivers' knowledge of general nutrition has a positive impact on IYCF and child health (21) .
The achievement of MAD among children who had been sick was six times higher than among their counterparts. Supporting studies were found in Addis Ababa (47) and Sodo town, southern Ethiopia (48) . This might be due to the exposure of parents to health facilities, where they receive health education and counselling, leading them to take better care of their children.
Because responses relied on maternal recall, there might be recall bias, especially for pregnancy-related responses such as ANC visit frequency and receiving IYCF counselling at ANC. In addition, a self-reported study might not give the exact figure of minimum dietary diversity practice (social desirability bias).
Conclusion
The magnitude of MAD among children in the study area was unacceptably low: only about one in six children met the recommended minimum criteria. Sex of the child, receiving IYCF counselling at ANC, infant feeding practice-related knowledge and childhood illness were the factors associated with MAD. Health administrators should strengthen infant feeding counselling, specifically about the benefits of appropriate complementary feeding and the recommended MAD, to improve IYCF in the study area. It is also important to encourage mothers to attend ANC and PNC check-ups to receive appropriate complementary feeding counselling.
Improving access to cancer clinical research in Brazil: recent advances and new opportunities. Expert opinions from the 4th CURA meeting, São Paulo, 2023
Clinical research is the cornerstone of improvements in cancer care. However, it has been conducted predominantly in high-income countries, with few clinical trials available in Brazil and other low- and middle-income countries (LMIC). Of note, less than one-third of registered clinical trials addressing some of the most commonly diagnosed cancers (breast, lung and cervical) recruited patients from LMIC in recent years. The Institute Project CURA promoted the fourth CURA meeting, discussing barriers to cancer clinical research and proposing potential solutions. The meeting was held in São Paulo, Brazil, in June 2023 with representatives from different sectors: the Brazilian Health Regulatory Agency (Anvisa), the National Commission of Ethics in Research (CONEP), non-governmental organisations such as the Latin American Cooperative Oncology Group, the Brazilian Society of Clinical Oncology (SBOC), contract research organisations, pharmaceutical companies and investigators. A total of 16 experts pointed out achievements such as shortening the time of regulatory processes involving Anvisa and CONEP, development of staff training programs, maintenance of the National Program of Oncological Attention (PRONON) and the foundation of qualified centres in the North and Northeast Brazilian regions. Participants also highlighted the need to be more competitive in the field, which requires optimising ongoing policies and implementing new strategies such as decentralisation of clinical research centres, public awareness campaigns, community-centered approaches, collaborations and partnerships, expansion of physician-directed policies and exploration of the role of the steering committee. Active and consistent reporting of these initiatives might help propagate ongoing advances, increasing Brazilian participation in clinical cancer research.
Engagement of all players is crucial to maintaining continuous progress, with further improvements in critical points including regulatory timelines and increases in qualified human resources, which, aligned with new educational initiatives focused on physicians and the general population, will expand access to cancer clinical trials in Brazil.
The worldwide imbalance between cancer burden and cancer clinical trial distribution: how is Brazil positioned?
Clinical and epidemiological research has been the cornerstone of improvements in cancer care [1], and it has been predominantly conducted in high-income countries (HIC), with few studies conducted in Latin America (LATAM) and other low- and middle-income countries (LMIC) [2,3]. Of note, less than one-third of registered clinical trials addressing some of the most commonly diagnosed cancers (breast, lung and cervical) recruited patients from LMIC in 2010-2017 [4]. This imbalance in cancer clinical trial distribution aligns poorly with the global burden of cancer and widens disparities in cancer care by concentrating cancer knowledge generation, application and infrastructure within HICs [5]. Also, the American Society of Clinical Oncology (ASCO) and the Association of Community Cancer Centers recently recommended that a commitment across research stakeholders is necessary to increase equity, diversity and inclusion; clinical trials are an integral component of high-quality cancer care, so every person with cancer should have the opportunity to participate [6].
Brazil is the LATAM country with the highest participation in cancer clinical research; however, this participation is far lower than expected. Brazil is positioned worldwide as the seventh pharmaceutical market and sixth in population [7], with ethnic diversity [8] and a high number of cancer cases [9]. All of these features should bring more opportunities for participation in cancer clinical research. Promoting greater access to clinical trials is vital to optimising cancer care [10], modifying clinical practice, bringing more treatment opportunities, increasing qualified human resources and, ultimately, generating academic research initiatives that can bring secondary gains by conducting trials that address locally relevant questions [11]. Therefore, diagnosing the different barriers to cancer clinical trial access in Brazil is crucial.
One important issue in the current scenario is the long approval process, which is partially explained by the relatively recent participation of LATAM countries in global clinical trials. Since 1996, when the Brazilian resolution CNS 196/96 came into force [12], the conceptual and structural grounds of ethics regulation in Brazil have been consistently implemented. In parallel, the Brazilian Health Regulatory Agency (Anvisa) has obtained international recognition for its work, becoming an International Conference of Harmonisation member in 2016 [13] and a manager member in 2021. Since their creation, the National Commission of Ethics in Research (CONEP) and Anvisa have worked to improve regulatory approvals in Brazil, but timelines remain longer than those in HIC, requiring continued efforts.
Other barriers that hamper greater Brazilian participation in cancer clinical research are the paucity of available clinical trials, the centralisation of research centres with adequate infrastructure in the capitals and big cities, low engagement of physicians, lack of qualified human resources and scarce clinical research awareness among patients and the general population. Of note, modifying several points in this landscape requires efforts by society, government, policymakers, non-governmental institutions, investigators and pharmaceutical companies.
The CURA Project Institute, a nonprofit organisation established in 2016, is one of the few institutions in LATAM whose objectives are to raise attention among the general population to the importance of clinical research. CURA also promotes scientific events aiming to foster the Brazilian regulatory environment, encourage health professionals to become researchers and create a philanthropic culture in favour of academic research for the control and cure of cancer. Since 2021, the Institute has promoted the 'CURA meetings', which have brought the opportunity to gather players from regulatory bodies, care delivery, pharmaceutical companies and medical societies to discuss the current scenario and suggest potential solutions. The fourth CURA meeting was held in São Paulo in June 2023, bringing together several experts in the field. During this meeting, three speakers presented the up-to-date situation of cancer clinical research in Brazil, followed by a discussion with experts from different sectors, highlighting the main barriers, pointing out the achievements of recent years and suggesting strategies to face the present challenges. The information provided by the experts was used to build the narrative below.
Recent advances in cancer clinical research scenario in Brazil
Currently, Brazil ranks 20th worldwide in cancer clinical research [7], and the meeting participants agreed that the country is moving forward concerning opportunities, participation and positive aspects in the national scenario (Figure 1). Of note, this meeting represented a milestone in cancer clinical research in Brazil, as it brought together players from different sectors, including Anvisa and CONEP, pharmaceutical companies, non-governmental organisations representing patients, the Brazilian Society of Clinical Oncology (SBOC), the Latin America Cooperative Oncology Group (LACOG) and investigators, allowing a rich discussion focused on the barriers in each specific sector.
The timeline of changes in the clinical research environment in Brazil points to determining factors that have altered the flow of patients from an assistance-only pathway to an assistance-plus-clinical-research pathway. These factors include advances in regulatory processes, the participation of cooperative oncology groups, the expansion of high-complexity centres, progress in research education and government investments in the area.
Regulatory approvals
The last decade's adjustments to the regulatory processes involving Anvisa and CONEP have allowed Brazil to become more competitive. Anvisa has conducted its actions within a strategic plan aiming to align regulatory processes with best international practices, achieving greater predictability and shortening the timelines. These actions include incorporating collaborative practices into the regulatory process through the Collegiate Board Resolution (Resolução da Diretoria Colegiada, RDC) 741, published on 10 August 2022 [14], which established general rules. Reliance is a practice endorsed by the World Health Organisation that permits one national regulatory authority to consider previous evaluations by regulatory authorities from other countries (Autoridades Regulatórias Estrangeiras Equivalentes) in order to substantiate its own decisions [15]. This procedure has optimised internal practices, allowing evaluation times to be shortened. The clinical research department within Anvisa elaborated two specific RDCs, 573 in 2021 (573/2021) and 601 in 2022 (601/2022) [16,17], both documenting reliance applied to clinical research. For complex or exceptional clinical trials, represented by the national development of products, biologic products and phase I and II trials, RDC 573/2021 established a maximum period of 120 days for Anvisa to respond. For clinical trials not classified as exceptions, RDC 601/2022 does not establish a maximum period for Anvisa's response but specifies simplified and optimised rules according to reliance, which has allowed shorter approval times.
Improvements have also been noticed regarding ethical approvals involving the Institutional Review Board (IRB), in Portuguese Comitê de Ética em Pesquisa (CEP), and CONEP, known as the CEP-CONEP system, which is nowadays based on a triple protocol analysis involving the nominated coordinating IRB, CONEP and each participating site's local IRB [18]. CONEP stipulates the need for approval by the coordinating and local IRBs, as well as CONEP itself, as a strategy to ensure the rights of the participants. However, CONEP agrees that to expedite these approval processes, periodic training programs for IRBs must be adopted [19], and, in addition, the accreditation of new IRBs is mandatory [20]. Although the current approval times are long, even compared with other LATAM countries, they are shorter than they were 2 years ago [21,22].
Cooperative oncology groups
The creation of academic cancer research groups is a recognised strategy to increase participation in clinical research [23]. LACOG is a multicentre collaborative cancer group launched in 2009, with most members from Brazil but also from other LATAM countries, and a coordinating office located in Porto Alegre, Brazil [24]. LACOG has shown substantial growth in recent years, notably a rise in scientific publications of around 95% in the last decade. LACOG has assisted investigators in study concept, protocol development and management, monitoring, data management, pharmacovigilance, statistical analysis and the publication of results. LACOG has also developed its own research projects, which are sponsored by pharmaceutical companies and diverse grants, including governmental ones. Presently, LACOG manages tumour groups (breast, gastrointestinal, genitourinary, geriatric, gynaecological, head and neck, lung, neuro, radiation, sarcoma and digital health) responsible for educational and retrospective, translational or clinical research initiatives [25].
Reinforcing the role of staff training, LACOG has qualified investigators, nurses and study coordinators for 14 years, since its creation [25]. These professionals are distributed across cancer research centres throughout Brazil.
High-quality centres
Brazil has many cancer research sites with adequate infrastructure and personnel [22] which have undergone Food and Drug Administration (FDA) inspections and pharmaceutical company audits with no major findings [26]. However, they are usually associated with universities, institutes or academic groups in cities such as São Paulo, Rio de Janeiro and Porto Alegre. For instance, the Pontifícia Universidade Católica do Rio Grande do Sul research centre in Porto Alegre has conducted cancer clinical research for 20 years [27], with more than 300 studies and 2,500 included patients [28], denoting that Brazil already has a long history with meaningful achievements. The COVID-19 pandemic also demonstrated the local potential in clinical research development. The world health crisis shed light on Brazil's role in the vaccine research and clinical trials enterprise. The widespread contagion, a deep bench of active scientists and a manufacturing infrastructure made Brazil an important player in the race to find a vaccine [29, 30].
The number of sponsored and non-sponsored trials increased substantially in response to the need for solutions to control the pandemic. In line with urgent requirements, regulatory agencies readjusted processes to speed up protocol analysis. So, regardless of all the research limitations, Brazil has trained investigators who can conduct and propose clinical research, with qualified centres to support the initiatives. It was a demonstration that Brazilian investigators and centres are able to develop high-quality research that can prosper with the correct incentives.
Educational-oncologist-centred programs
One barrier that inhibits greater Brazilian participation in clinical trials is the low engagement of physicians [31], highlighting the need for educational, oncologist-centred programs. The SBOC has around 3,000 members. According to the Medical Demography 2023 report [32], Brazil has 4,730 physicians registered at the Federal Council of Medicine working in some capacity in the field of clinical oncology (including not only medical oncologists by training). Therefore, SBOC has a representative number of members in the field and provides an important space for training in different topics, including clinical research in oncology. Among its training actions are the creation of the Society Clinical Research Committee [33], which plays an active role in preparing the scientific program for the research section at the SBOC Annual Congress, and the development of a training program for oncologists in cancer clinical research. Initially conducted in Ijuí (a mid-sized city in the state of Rio Grande do Sul, outside the Rio-São Paulo axis, with an outstanding active research centre), this program currently involves many other centres. The program is focused on oncologists working outside large centres who intend to implement a clinical research centre in their region, and consists of an immersion period at a high-quality research centre. Following the training steps, the oncologists are connected to a clinical oncology research network and receive support in their initial projects. In the beginning, only three oncologists were trained per year. As of 2022, the training centres have been expanded, and currently 15 oncologists are trained yearly.
Recently, SBOC created an annual research fund, Fundo de Incentivo à Pesquisa (FIP), that aims to finance research projects with resources from the SBOC, focusing on reducing disparities and seeking equal access to cancer treatment [34]. The grant will be allocated to projects approved by a judging committee composed of members appointed by the SBOC. The SBOC hopes to motivate Brazilian researchers working with cancer, including young oncologists, to create research projects that can address local issues.
Governmental investments
The Brazilian government's policy to support clinical research is still incipient; however, a few programs have brought great opportunities. One of the most important strategies is an online platform called 'Plataforma Brasil' [35], which has modified the regulatory environment since it is used for the entire ethics appraisal process. This platform brought traceability, organisation and speed to the protocols registered in Brazil.
Another worthy government program, which is also a source of funding, is the National Program of Oncological Attention (Programa Nacional de Apoio à Atenção Oncológica, PRONON), created in 2014 [36] and recently renewed [37], representing an opportunity for investigators and cooperative groups to propose and conduct academic research. A great example of how PRONON can foster clinical research initiatives in oncology is the NEOSAMBA project [38]. It is an academic study, with expenses covered by PRONON, that investigates the optimal sequencing of two chemotherapy protocols used in neoadjuvant treatment of breast cancer. In summary, PRONON helps cancer patients by answering relevant questions, promoting more clinical research opportunities, and training and retaining human resources in Brazilian research centres.
New opportunities to improve cancer research scenario in Brazil
Even though many achievements have been made, there is agreement that Brazil needs improvement on many other points (Figure 2), in addition to maintaining the continued efforts on topics that have already been conquered. There was also a consensus about the complexity of this landscape and the necessity of involving all players throughout this journey (Table 1).
The road to improving the current scenario of clinical cancer research in Brazil is based on the need to improve current policies and implement new strategies, such as the decentralisation of clinical research centres, public awareness campaigns, community-centred approaches, new collaborations and partnerships, and the expansion of physician-directed policies.
Decentralisation of clinical research centres
The concentration of clinical research centres in highly developed regions of Brazil is only one more facet of the broader centralisation of healthcare services, which hinders accessibility [39]. Accessibility is an important factor associated with variations in the use of health systems; thus, poor geographic accessibility to healthcare services contributes to low utilisation, which in turn gives rise to poorer health outcomes [40]. In Brazil, 77% of the clinical research centres are located in the South and Southeast regions [22]. The qualification of new centres in the North, Northeast and Central-West regions, aiming to decentralise clinical research concentrated in the South and Southeast regions and capital cities, is an urgent action. LACOG has worked for 2 years in partnership with Instituto Vencer o Câncer [41, 42], training teams in new research centres. In 2023, six new sites in the North and Northeast regions received support, and three of them have ongoing studies. The next efforts will be applied to maintaining the qualification programs, as new sites need careful attention to staff capacitation and infrastructure, since the learning curve may be long [43, 44]. Notably, such partnering may engage sponsors to prioritise actions such as capacitation programs for new research sites in regions where no clinical trials are found.
A principal investigator with a consistent clinical research background is critical to lead new sites. Therefore, institutions should cooperate with initiatives of this type together with sponsors or study promoters that can allocate studies according to infrastructure availability, complexity and patient population. It is expected that new sites will require extra oversight from all parties. This includes an experienced clinical research associate to monitor and advise the sites, a proficient study manager to handle risks, and extra support from the institution.
The main challenge is to expand the number of centres throughout Brazil to increase the density of protocols per inhabitant, and to organise training programs with sites to initiate low-complexity studies with high-quality services so that, in the future, they can manage more complex studies.
Public awareness campaigns and community-centred approaches
Participation in clinical trials in oncology worldwide has remained low, between 2% and 8% of adults with cancer, although most patients in cancer clinical trials report favourable experiences [6]. One important reason is overly restrictive eligibility criteria, which have been revisited by ASCO and the FDA in an effort to modernise and broaden inclusion criteria and permit greater generalisability of data [45]. However, this low accrual reflects a multifactorial scenario including local culture, resource barriers, misperceptions regarding clinical research, and lack of interest by patients and physicians [46]. Participants in clinical trials tend to be younger and healthier, and represent a less diverse population in terms of race, ethnicity and geographical distribution than people in daily clinical practice [47]. Raising awareness among the general public concerning the importance of clinical research and its potential benefits can foster a culture of research participation. Increasing community engagement is a strategy to face this complex landscape and facilitate the recruitment of participants by dispelling misconceptions and fears surrounding clinical trials. Moreover, in an engaged community, more individuals may be encouraged to volunteer for studies. It can also target marginalised groups, given widespread distrust stemming from long-standing racism and discrimination, which also ensures diversity among participants. Community-centred strategies promote the dissemination of clinical trials, raising transparency across cancer research. Funding agencies are increasingly recognising the importance of community engagement in the research process, and it has become a benchmark for large research programs funded by the National Institutes of Health [48]. LACOG has worked together with CURA to promote actions focused on the community [49]. These actions include, for instance, educational meetings, workshops and live sessions for audiences naïve to clinical research. Recently, LACOG and CURA became convinced of the need to create a new platform where potential participants will be able to find clinical research opportunities across the country, in friendly and clear language.
Collaborations and partnerships
Strengthening collaborations and partnerships will be necessary to increase Brazilian participation in cancer research programs. Facilitating alliances among stakeholders such as academic and private institutions, pharmaceutical companies and international organisations can promote knowledge exchange and innovation in the clinical research field. On the other hand, partnering between those stakeholders and patient-representative organisations to prioritise common objectives is crucial. Regarding collaboration among stakeholders, it would be worthwhile, for instance, for qualified and knowledgeable centres to offer training and support for new centres in their initial studies; for pharmaceutical companies to sponsor training programs for new centres, offering within these programs the opportunity to qualify for a first clinical trial; for centres to collaborate with one another by conducting academic studies; and for public-private partnerships between centres, which would permit using the structure of both. The first step may be to connect stakeholders to raise awareness of each one's role.
Expansion of physician-directed policies
Physicians have enormous importance in patient enrolment in clinical trials; however, several barriers have hindered referrals. Of note, a recent Brazilian survey described that one-third of oncologists refer only 1% of their patients to clinical trials [31]. In contrast, a meta-analysis evaluating clinical trial participation in the United States described that 55% of invited patients agreed to participate, highlighting that patients are willing to take part in clinical research as long as they are invited [50]. Among the reasons that might explain this low rate of referral by Brazilian physicians are the paucity of available clinical trials; the lack of a unified and updated platform of available trials managed by a reliable institution; the need to refer patients to distant centres; competing patient care demands in public hospitals with scarce resources; and clinician biases that lead them to judge patients unwilling or unable to comply with trial protocols [51]. These barriers may be particularly acute at hospitals where oncologists are not affiliated with research networks. Once again, LACOG has developed expressive engagement of oncologists, qualifying new centres and giving them the opportunity for their first participation in academic or low-complexity pharma-sponsored trials. The number of available studies will attract and motivate new oncologists, ultimately modifying the rate of referral in the country. Creating more opportunities for the participation of new centres will certainly bring more training programs, which are necessary to improve physicians' capacity to enrol patients. Many studies have demonstrated that initial communication with patients is highly variable, and many researchers lack training in how to talk with potential participants about clinical trials [52].
Another crucial issue related to physicians is the remuneration model. The wages of investigators are usually inadequate in LATAM countries, leading to a preference for clinical practice rather than a career in clinical research [53]. Medical doctors who work on clinical research activities while maintaining clinical practice are often overwhelmed with clinical duties and are not provided with adequate protected time for conducting research [2]. Thus, policies clarifying the need for an appropriate remuneration model in clinical research will probably attract more physicians.
Conclusion
Brazil has presented expressive gains in regulatory processes and educational strategies, with outstanding leadership by LACOG and the SBOC. Some governmental initiatives regarding research funding have also fostered academic research. However, continued efforts are necessary, mainly to make Brazil more competitive, which requires shorter and more predictable timelines; a unified, updated and free-access clinical trials platform; continuing physician-directed educational programs; and public awareness campaigns. The landscape is excessively complex and needs further engagement of all policymakers, pharmaceutical companies, investigators, cooperative groups, medical societies, non-governmental organisations, government and society at large to transform it into a more inclusive scenario.
Figure 1. Positive changes in Brazil's clinical research environment.
Figure 2. New opportunities to improve the cancer research scenario in Brazil.
Table 1. Clinical research landscape in Brazil. Columns: Recent advances (what has been done; how it has improved the scenario) and New opportunities (what else should be done; how it would improve the scenario).
ecancer 2024, 18:1698; www.ecancer.org; DOI: https://doi.org/10.3332/ecancer.2024.1698
Effects of Selenium-enriched Bacillus subtilis on Growth, Inflammatory Response and Intestinal Microbes of Common Carp Induced by Mercury
Mercury (Hg) is a global pollutant that affects the health of humans and ecosystems. Selenium (Se) is an essential trace element for many organisms, including humans. Bacillus subtilis is widely distributed in nature, is one of the main probiotics used in aquaculture, and has a certain adsorption capacity for heavy metals. The interaction between Hg and Se has been rigorously studied, especially following the observation of the protective effect of Se against Hg toxicity. Common carp were exposed to Hg (0.03 mg/L), and 10^5 cfu/g Se-rich B. subtilis was added to the feed. After 30 days of feeding, samples were taken to evaluate growth performance, serological response, inflammatory response and intestinal microbial changes. In this study, when fish were exposed to Hg, the growth performance of the Se-rich B. subtilis plus 0.03 mg/L Hg group was lower than that of the control group but higher than that of the 0.03 mg/L Hg group. The levels of LZM and IgM decreased under Hg exposure but increased after supplementation with Se-rich B. subtilis. Hg treatment significantly up-regulated the mRNA expression of IL-1β, IL-8, TNF-α and NF-κB P65 and down-regulated the mRNA expression of IL-10, TGF-β and IkBα; compared with the Hg group, the Se-rich B. subtilis plus Hg group significantly alleviated these changes in IL-1β, IL-8, TNF-α, NF-κB P65, IL-10, TGF-β and IkBα expression. At the genus level, the abundance of Aeromonas in the intestines of common carp in the Hg treatment group increased, and Se-rich B. subtilis reduced the abundance of Aeromonas (pathogenic bacteria). Species-level analysis showed that the Hg group was dominated by Aeromonas sobria and Aeromonas hydrophila, whereas in the Se-rich B. subtilis treatment group Aeromonas sobria was significantly less abundant than in the Hg group. Because Aeromonas is pathogenic to fish, it can induce inflammation and disease. Microbiological analysis showed that Se-rich B. subtilis improves Hg-induced intestinal microbial changes, reduces the abundance of Aeromonas and alleviates inflammation in the fish.
Introduction
Heavy metals have become serious pollutants in the aquatic environment due to their persistence and their accumulation by aquatic organisms (Veena et al., 1997). Mercury (Hg) is a global pollutant that has been associated with kidney, immune and genetic damage in animals and humans, as well as with changes in microbial diversity and function (Liu et al., 2018a; Liu et al., 2018b). Exposure to Hg can cause various diseases of the organ systems (Rice et al., 2014). Fish are exposed to Hg through pollution of inland waters, which leads to deterioration of fish health, thereby reducing fish quality and fish production (Begam and Sengupta, 2015). The pro-inflammatory transcription factor NF-kB p65 is often a central mediator of the immune and inflammatory response, and studies have found that mercury can significantly induce the up-regulation of NF-kB p65. It has long been observed that Se protects animals from the toxicity of inorganic mercury and methylmercury. Parízek and Ostádalová reported one of the earliest studies on this protective effect, showing that Se protects rats from inorganic mercury-induced kidney poisoning (Parízek and Ostádalová, 1976). Subsequent studies found that the absorption and interaction of mercury and Se in Pseudomonas fluorescens achieves detoxification of Se and mercury (Belzile et al., 2006).
Se is an essential micronutrient that has a variety of complex effects on human health. Se is essential to human life and health, mainly due to its antioxidant, anti-inflammatory and antiviral properties (Wrobel et al., 2016). It has been reported that Se deficiency can reduce the growth performance of the head kidney, spleen and skin of young grass carp and impair immune function (Zheng et al., 2018). At the same time, Se supplementation can alleviate the up-regulation of nuclear factor NF-kB induced by microcystin-leucine arginine, as well as the up-regulation of the inflammatory cytokines IL-6, TNF-α, IL-1β and TGF-β1 in cells (Adegoke et al., 2018).
Probiotics are living microorganisms that provide health benefits to the host when supplied in sufficient amounts (World Health Organization, 2001). According to many recent studies, probiotics derived from the host's intestinal tract increase the growth rate of the host by hydrolysing complex polysaccharides in the host's nutrients. As live microbial feed supplements, they are beneficial to the development of the host.
Preparation of Se-rich B. subtilis and diets
Commercial feed was used as the basal diet, to which Se-rich B. subtilis was added. For the detailed steps of preparing Se-enriched B. subtilis, please refer to Shang et al. (2021). Water samples (20 ml) from the different aquariums were collected to measure mercury content; Table 1 displays the actual mercury concentrations. The probiotics were diluted with sterile normal saline, fully homogenised and added to the basal feed according to the needs of the experiment (final dose of bacteria: 10^5 cfu/g feed; final concentration of Se: 0.5 ppm) (Shang et al., 2021). The same volume of sterile saline was added to the basal diet to prepare the control. All feed was stored in a refrigerator at 4 ℃.
Feed and experimental design
Common carp (6.2 ± 0.1 g) were purchased from an aquatic fry farm (Jilin Province, China) and transported to the laboratory. We randomly divided 360 fish into four groups and distributed them evenly into 12 tanks (80 L; 3 replicates per group; 30 fish per tank). After the experimental fish were acclimated for 2 weeks, the healthy common carp were randomly divided into 4 treatment groups (Se-rich B. subtilis; control; Se-rich B. subtilis plus 0.03 mg/L Hg; and 0.03 mg/L Hg). After the experiment started, fish were fed twice a day, at 8:00 and 18:00, at a daily ration of 1-2% of body mass. Each tank contained 80 L of aerated tap water, and half of the water was exchanged daily.
Growth performance
Growth performance of the common carp was assessed after one month of rearing. The indices were calculated as follows: survival rate (SR, %) = 100 × (final number of fish/initial number of fish); weight gain rate (WGR, %) = 100 × [(final body weight − initial body weight)/initial body weight]; specific growth rate (SGR, %/day) = 100 × [(ln final body weight − ln initial body weight)/days].
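These three indices are straightforward to reproduce from the definitions above; the sketch below is a minimal Python illustration (the function name and the example numbers are illustrative, not taken from the study's data):

```python
import math

def growth_indices(n_initial, n_final, w_initial, w_final, days):
    """Survival rate (%), weight gain rate (%) and specific growth
    rate (%/day) as defined in the text; weights in g, duration in days."""
    sr = 100.0 * n_final / n_initial
    wgr = 100.0 * (w_final - w_initial) / w_initial
    sgr = 100.0 * (math.log(w_final) - math.log(w_initial)) / days
    return sr, wgr, sgr

# Made-up example: 30 fish stocked, 27 survive, and mean body weight
# doubles from 6.2 g to 12.4 g over a 30-day trial.
sr, wgr, sgr = growth_indices(30, 27, 6.2, 12.4, 30)
```

Note that SGR uses the natural-log difference, so it is the average exponential growth rate per day rather than a simple percentage of the initial weight.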
Serum immunological test
ELISA kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, Jiangsu) were used to determine serum immunoglobulin M (IgM) levels and lysozyme (LZM) activity.
Reverse-transcriptase real-time PCR (RT-PCR)
At the end of the exposure trial, the expression levels of immune-related genes in the spleen and kidney tissues were measured. Trizol reagent (Takara, Dalian, China) was used to extract total RNA from the spleen and kidney. An RT-PCR cDNA synthesis kit (Takara, Dalian, China) was used to synthesise cDNA from purified RNA with an OD260/OD280 absorption ratio of 1.8-2.0 as a template (Wang et al., 2021). The primers were synthesised by Kumei Biotechnology Co., Ltd., Jilin. RT-PCR was used to quantify the expression levels of 7 immune-response-related genes (IL-8, NF-kB P65, IkBα, IL-1β, TNF-α, IL-10 and TGF-β), with the housekeeping gene β-actin as an internal control (Yin et al., 2018). Table 3 shows the sequences of the primers used in this study. The RT-PCR reaction had a total volume of 20 uL, including 1 uL cDNA, 2 uL of each primer, 7 uL DEPC-treated water and 10 uL SYBR Premix Ex Taq Master Mix. The thermal cycling conditions were as follows: 95 °C for 5 minutes, then 30 cycles of 95 °C for 5 seconds, 60 °C for 30 seconds and 72 °C for 30 seconds. The RT-PCR reaction was repeated 3 times for each sample. The data were converted to Ct values after each reaction, and relative gene expression was determined by the 2^(−ΔΔCt) method.

DNA extraction and 16S rRNA gene analysis

The QIIME tool (version 1.17) was used to analyse the raw reads. UPARSE was used to cluster OTUs at a 97% similarity cutoff, and UCHIME was used to identify and remove chimeric sequences. The RDP classifier against the SILVA (SSU115) 16S rRNA database, with a confidence threshold of 70%, was used to assign the taxonomy of each 16S rRNA gene sequence.

Statistical analysis

SPSS 20.0 (SPSS, Chicago, IL, USA) was used for statistical analysis. Data are shown as mean ± standard deviation (S.D.) for each group. Each test was performed three times. One-way analysis of variance (ANOVA) was used to determine significant differences among the groups, followed by Tukey's multiple comparison test. The significance level was set at P < 0.05.
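The 2^(−ΔΔCt) relative-expression calculation used above is simple to reproduce; the following is a minimal Python sketch of the Livak method (the Ct values in the example are illustrative, not the study's measurements):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Livak 2^(-ddCt) method: normalise the target gene's Ct to the
    reference gene (beta-actin in this study), then to the control group."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative: the target crosses threshold one cycle earlier
# (relative to beta-actin) in the treated group, i.e. 2-fold up-regulation.
fold = fold_change(24.0, 18.0, 25.0, 18.0)
```

Because each PCR cycle roughly doubles the product, a ΔΔCt of −1 corresponds to a 2-fold increase in relative expression.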
Results

There was no significant difference between the control group and the Se-enriched B. subtilis group, while the 0.03 mg/L Hg group was significantly reduced compared with the control group (P < 0.05). The growth performance of the Se-rich B. subtilis plus 0.03 mg/L Hg group was lower than that of the control group but higher than that of the 0.03 mg/L Hg group (P < 0.05).
Serum non-specific immune responses
Hg is known to cause disturbances in the immune response. LZM and IgM levels in the treatment and control groups were determined (Fig. 1). When fish were exposed to Hg, LZM and IgM levels decreased. However, LZM and IgM levels increased after supplementation with Se-rich B. subtilis. The LZM and IgM levels of the Se-rich B. subtilis group increased significantly compared with the control group (P < 0.05; Fig. 1).
Immune-associated gene expression
Hg exposure significantly up-regulated the mRNA expression of IL-8, IL-1β, TNF-α and NF-kB P65 but down-regulated the mRNA expression of IL-10, TGF-β and IkBα (Fig. 2). Nevertheless, co-treatment with Hg and Se-enriched B. subtilis significantly alleviated these changes compared with the group exposed to Hg without the dietary supplement: the mRNA expression levels of IL-8, IL-1β, NF-kB P65 and TNF-α were reduced, and the expression of IL-10, TGF-β and IkBα was restored (P < 0.05). In summary, compared with the control, IL-1β, TNF-α, IL-8 and NF-kB P65 were up-regulated by exposure to mercury, while consumption of Se-rich B. subtilis alleviated the up-regulation of IL-1β, TNF-α, IL-8 and NF-kB P65 and the down-regulation of IL-10, TGF-β and IkBα (P < 0.05).
16S rRNA gene sequencing analysis
Statistical exploration of sequencing data
The rarefaction curve directly shows whether the amount of sequencing data is adequate and indirectly reflects the species abundance in the sample. If the curve tends to flatten, the amount of sequencing data is adequate. In this study, after a month of feeding trials, we found that the end of the rarefaction curve (Fig. 3A) was flattened. Therefore, we conclude that the amount of sequencing data is adequate for our analysis.
To clarify the effect of Hg on the intestinal flora of common carp, we performed PCoA analysis. The control group, the Se-rich B. subtilis group, the Se-rich B. subtilis plus Hg group, and the Hg group were analysed together. The PCoA results showed that the microbial composition of the four diet groups differed significantly (Fig. 3B) (P < 0.05).
The Chao1 index (the number of species in the community) was relatively high in the control group, but the difference among the four groups was not significant (Fig. 3C). The Shannon index (the diversity of gut microbes) showed no significant difference among the diet groups (Fig. 3D). The results showed that the species richness and evenness of each diet group did not change much.
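Both alpha-diversity indices can be computed directly from a vector of OTU counts; the sketch below is a minimal Python illustration (the count vector is illustrative, not the study's sequencing data):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero OTUs."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), where F1 and F2 are the
    numbers of singleton and doubleton OTUs (bias-corrected form if F2 = 0)."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + (f1 * f1) / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

# Illustrative OTU counts for one sample: 5 observed OTUs,
# 2 singletons and 2 doubletons.
otu_counts = [1, 1, 2, 2, 4]
h = shannon(otu_counts)
richness = chao1(otu_counts)
```

Chao1 extrapolates richness from rare OTUs (many singletons imply many unseen species), while Shannon weights both richness and evenness, which is why the two indices can disagree across groups.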
Comparison at the genus levels
All sequences were identified at the genus level, and we selected the thirty most abundant genera for analysis. The five main genera in the control group were Verrucomicrobiaceae, Cetobacterium, Pseudorhodobacter, Gemmobacter and Aeromonas. The most common genera in the Hg group included Verrucomicrobiaceae, Gemmobacter, Cetobacterium, Aeromonas and Pseudomonas. After Hg exposure, the abundance of Aeromonas and Roseomonas increased significantly (P < 0.05), while the abundance of Pseudorhodobacter and Verrucomicrobiaceae was significantly reduced (P < 0.05). However, in the Se-rich B. subtilis plus Hg group, the increase of Aeromonas and Roseomonas was reduced, and the decrease of Pseudorhodobacter and Verrucomicrobiaceae was suppressed (P < 0.05). In addition, we also found that Verrucomicrobiaceae in the Se-rich B. subtilis group was significantly decreased (Fig. 4A, B) (P < 0.05).
Comparison at the Species level
Similarly, we selected the 30 most abundant species for analysis. The most important intestinal microbial species in the Hg group were Verrucomicrobiaceae_unclassified, Aeromonas sobria, Aeromonas hydrophila and Aeromonas spp. The most abundant microbes in the control group were Cetobacterium somerae, Gemmobacter sp. yp3 and Pseudomonas poae. Compared with the control group, after Hg exposure the abundance of Cetobacterium somerae, Pseudomonas poae, Verrucomicrobiaceae_unclassified and Gemmobacter sp. yp3 was significantly reduced, while Aeromonas sobria, Aeromonas hydrophila and Aeromonas spp. were significantly increased (P < 0.05). At the same time, in the Se-rich B. subtilis plus Hg group, the increase of Aeromonas sobria, Aeromonas hydrophila and Aeromonas spp. was suppressed, the decrease of Verrucomicrobiaceae_unclassified was suppressed, and Pseudomonas poae and Cetobacterium somerae increased significantly (Fig. 4C, D) (P < 0.05).
Discussion
Probiotics improve animal health and nutrition by improving feed value and enzymatic effects, and play a very important role in improving animal health, nutrition and activating immune response (Dawood et . These results may indicate that anti-in ammatory cytokines effectively suppressed the pro-in ammatory immune response, which is consistent with the up-regulation of IL-10 observed in this study. In addition, the up-regulation of IL-10 in the liver may represent an aspect of the homeostatic mechanism that controls the Hg-induced in ammatory response. Gao et al. reported that the reduction of TGF-β will aggravate the in ammatory damage of liver tissue, but the lack of Se will inhibit the expression of TGF-β and promote the production of TNF-α, IL-1β and IL-6, which may cause carp liver tissue In ammation, but Se supplementation can prevent the decrease of TGF-β .The intake of Se-rich B. subtilis will not only increase the Se content in the body, but also B. subtilis will absorb Hg and alleviate the damage of the sh (Shang et al., 2021). In this study, there may be such a mechanism. Hg intake reduced the expression of TGF-β, while the Se-rich B. subtilis plus Hg group alleviated the decrease of TGF-β. The transcription factor NF-κB controls the expression of in ammatory cytokine genes (Taro and Shizuo, 2007). It controls the expression of pro-in ammatory genes and is also a key target for regulating in ammatory diseases (Xu et al., 2005;Yang et al., 2007). Study demonstrated that by catalyzing the degradation of IkBα, NF-κB can be activated by IKK (including IKKα, IKKβ and IKKγ), which plays an important role in regulating human pro-in ammatory cytokines (Jobin and Sartor, 2000;Bollrath and Greten, 2009). In this study, we found that the expression of IkBα in the liver and spleen decreased, and the corresponding NF-κB p65 expression increased, and this phenomenon was alleviated in the Se-rich B. subtilis treatment group. So there may be such a mechanism, Se-rich B. 
subtilis may be involved in the regulation of the IkBα/NF-κB signaling pathway. When the body consumes too much Hg, it leads to insufficient Se content in the body, triggers the inflammatory response and activates the IkBα/NF-κB signaling pathway. After feeding Se-rich B. subtilis to supplement Se, Se inhibits the upregulation of pro-inflammatory cytokines in the cells and promotes the expression of anti-inflammatory cytokines, thereby reducing the harm of Hg to the fish. The intestine is a complex ecosystem, and the intestinal flora has an important role in this ecosystem.
Intestinal flora can assist the digestion and absorption of food and promote nutrient metabolism (Sommer and Backhed, 2013). Changes in the intestinal flora can lead to disorders of the body's normal physiological functions, leading to diseases (Nicholson et al., 2012). Through previous studies, we found that Hg significantly reduced the activity of enzymes such as CAT and GSH-PX and triggered inflammation (Shang et al., 2021). This experiment used Illumina high-throughput sequencing technology to explain how the composition and diversity of carp intestinal microbial communities change under Hg exposure conditions, and to provide a theoretical basis for fish intestinal health and normal growth and development. In this study, the levels of Aeromonas sobria and Aeromonas hydrophila in the intestine of common carp after Hg treatment were higher than those in the control group. Many studies have shown that changes in the diversity of intestinal flora can cause diseases such as enteritis, inflammatory diseases and obesity (Chassaing and Gewirtz, 2017; Beaz-Hidalgo and Figueras, 2013). Therefore, Hg-induced changes in intestinal flora may affect the health of common carp.
In this study, our results indicate that Verrucomicrobiaceae, Cetobacterium, Pseudorhodobacter, Gemmobacter and Aeromonas are the most important bacterial groups in common carp. The main flora in the intestines after Hg exposure are Verrucomicrobiaceae, Gemmobacter, Cetobacterium, Aeromonas and Pseudomonas. Hg exposure caused changes in the intestinal flora, and the abundance of Aeromonas in the Hg treatment group was much higher than that of the control group. Aeromonas can colonize and infect the host, and can cause diseases such as sepsis and fungal infections. The extracellular products (hemolysin, lipase and protease) produced by Aeromonas can cause soft tissue, hepatobiliary system, respiratory system and arthritis disease (Elorza et al., 2020; Lian et al., 2020). In this study, Hg exposure increased the proportion of Aeromonas in the intestines of fish. However, in the Se-rich B. subtilis plus Hg group, we found that the abundance of Aeromonas was reduced, which indicates that feeding the Se-rich B. subtilis can change the intestinal microbes of the fish and reduce the abundance of Aeromonas. Aeromonas sobria can cause oxidative stress in fish, changing superoxide dismutase and glutathione peroxidase and up-regulating immunoglobulin IgM and TNF-α (Harikrishnan et al., 2020). Aeromonas hydrophila can cause an immune response in Catla catla and increase IL-1β and TNF-α (Harikrishnan et al., 2021). In this study, it was found that Aeromonas sobria and Aeromonas hydrophila were significantly increased, which may be another cause of disease: Hg induction changes the Aeromonas in the common carp intestine, increasing Aeromonas sobria and Aeromonas hydrophila, which leads to an inflammatory response in the fish. Se-rich B. subtilis, through the action of Se and the probiotic B. subtilis, regulates the IkBα/NF-κB signaling pathway and reduces the inflammatory response.
The composition of the intestinal flora was detected by 16S rRNA gene sequencing; the Se-rich B. subtilis may have improved the intestinal flora and reduced the abundance of Aeromonas, thereby reducing the inflammatory response.
Conclusions
In conclusion, our results report the effect of Se-rich B. subtilis on common carp exposed to mercury and provide useful insight into how Se-rich B. subtilis can reduce mercury poisoning in common carp. In this study, Se-rich B. subtilis alleviated mercury-induced effects on common carp growth performance and inflammation by altering the intestinal microbiota.
Declarations
Compliance with ethics requirements All experimental and animal handling procedures were conducted according to the research protocols approved by the Institutional Animal Care and Use Committee, Jilin Agricultural University, Jilin Province, China.
Declaration of competing interest
All authors declare that they have no conflict of interest and agree to publish this article in Fish Physiol Biochem. All the data in the article are available.
A Logistic-Harvest Model with Allee Effect under Multiplicative Noise
This work is devoted to the study of a stochastic logistic growth model with and without the Allee effect. Such a model describes the evolution of a population under environmental stochastic fluctuations and takes the form of a stochastic differential equation driven by multiplicative Gaussian noise. With the help of the associated Fokker-Planck equation, we analyze the population extinction probability and the probability of reaching a large population size before reaching a small one. We further study the impact of the harvest rate, noise intensity, and the Allee effect on population evolution. The analysis and numerical experiments show that if the noise intensity and harvest rate are small, the population grows exponentially and, upon reaching the carrying capacity, fluctuates around it. In the stochastic logistic-harvest model without the Allee effect, when the noise intensity becomes small (or goes to zero), the stationary probability density becomes more peaked and its maximum point approaches one. However, for large noise intensity and harvest rate, the population size fluctuates wildly and does not grow exponentially to the carrying capacity. As far as the biological meaning is concerned, harvesting should therefore be carried out at small values of noise intensity and harvest rate. Finally, we discuss the biological implications of our results.
Introduction
A group of individuals of the same species living in a limited place is called a population [24]. The dynamical process of population growth and decline is a function of factors that are intrinsic to a population and the environmental conditions.
The well-known logistic growth model describes the growth of a population, followed by a slowdown, bounded by the maximum population size (carrying capacity). It is the nonlinear differential equation

dX_t/dt = r X_t (1 − X_t/K),   (1.1)

where r > 0 is the growth rate, X_t is the population size at time t, and K is the carrying capacity. This model was first introduced by Verhulst [12]. When X_t is very small, equation (1.1) reduces to dX_t/dt = r X_t, while dX_t/dt ≈ 0 when X_t nears the carrying capacity K. Equation (1.1) has a unique solution given by X_t = K/(1 + A e^{−rt}), where A = K/x_0 − 1. The population size approaches the carrying capacity as t → ∞.
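As a quick numerical sanity check (not part of the original paper), the closed-form solution above can be verified against equation (1.1); the parameter values r = 1, K = 3, x_0 = 0.5 are illustrative choices only, matching the values used later in Section 5:

```python
import math

def logistic(t, r=1.0, K=3.0, x0=0.5):
    """Exact solution of (1.1): X_t = K / (1 + A e^(-rt)), A = K/x0 - 1."""
    A = K / x0 - 1.0
    return K / (1.0 + A * math.exp(-r * t))

# The formula satisfies dX/dt = r X (1 - X/K): check by central differences.
h, t = 1e-6, 2.0
dXdt = (logistic(t + h) - logistic(t - h)) / (2 * h)
X = logistic(t)
assert abs(dXdt - 1.0 * X * (1.0 - X / 3.0)) < 1e-5

# It starts at x0 and approaches the carrying capacity K as t grows.
assert abs(logistic(0.0) - 0.5) < 1e-12
assert abs(logistic(50.0) - 3.0) < 1e-9
```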
The Allee effect was studied widely in the biology book [13], where the authors cite many papers dealing with it. Allee [30] suggested that the per capita birth rate declines at low population sizes (densities). In this case, the population may go to extinction. The logistic growth model with the Allee effect is one of the most important models in mathematical ecology owing to its theoretical and practical significance. An Allee effect shows a non-negative association between reproduction, population size, and survival of individuals. There are two distinct variations of the Allee effect, namely the strong Allee effect and the weak Allee effect. The strong Allee effect introduces a population threshold [25] that the population must exceed in order to grow, while the weak Allee effect does not admit any threshold. For more details about this model see [34, 27, 25] and the references therein.
The classic general logistic growth model with Allee effect [32] is given by

dX_t/dt = r X_t (X_t/S − 1)(1 − X_t/K),   (1.2)

where X_t is the population size at time t in a given area or place, r > 0 is the population growth rate, K > 0 is the carrying capacity, and S refers to the threshold population (Allee threshold), the minimum population necessary for the species to survive, with 0 < S < K. Extinction occurs whenever the population decreases below the Allee threshold value S; hence the initial population size X_0 must be greater than the threshold value S. Equation (1.2) has two stable equilibrium solutions, at X_1(t) = 0 and X_2(t) = K, and an unstable equilibrium solution at X_3(t) = S. Based on the resources available to the system, the population should reach the carrying capacity K. If the initial population is below the critical threshold S, then it approaches extinction as time goes on. Thus the threshold population is useful to biologists in order to determine whether a given species should be placed on the endangered list, so that the survival of the species will then be given due attention and necessary protection.
Fishing has a lot of benefits for human beings and also has a great impact on the socio-economic and infrastructure development of a country. For example, it serves as food, generates income, and creates job opportunities. Many scientists [3, 33, 40] devised strategies to prevent the extinction of renewable resources such as fish under harvesting, and they agreed on its importance. See [15, 28] for further explanation of harvesting strategies. The logistic growth model, with and without the Allee effect, and with harvesting has been used to study fishery farming [28]. Harvesting is an interesting research area in population studies. The most important input for the successful management of harvested populations is a sustainable strategy. A harvesting strategy should not lead to instabilities or extinctions.
In this paper, we focus on proportional harvesting, which removes a fixed proportion of individuals at each time t (e.g., each year). In other words, if the population increases, the harvest also increases, and if the population decreases, the quantity harvested decreases. Now consider the mathematical models of relative-rate harvesting on the logistic growth model [12] and on the logistic growth model with Allee effect, respectively:

dX_t/dt = r X_t (1 − X_t/K) − λ X_t   (1.3)

and

dX_t/dt = r X_t (X_t/S − 1)(1 − X_t/K) − λ X_t,   (1.4)

where again r is the population growth rate, K > 0 is the carrying capacity, S refers to the Allee threshold with 0 < S < K, and λ is the harvest rate. The equilibria of Eq. (1.3) lie at X_t = 0 and X_t = K(1 − λ/r). For λ = 0, the function V(x) becomes the potential function of equation (1.1).
Model (1.4) has equilibria at X_t = 0 and at the solutions of λ = r(X_t/S − 1)(1 − X_t/K). The maximum of this parabola is at X_t = (S + K)/2, where we have a saddle-node bifurcation at λ = r[(S + K)²/(4SK) − 1]. Many researchers [15, 23, 26, 29] considered the deterministic model of logistic growth with and without Allee effect under a harvesting factor and studied the behavior of the deterministic model free of any stochastic element. Even though deterministic models are much easier to analyze than their corresponding stochastic models, they neglect random influences on the growth process. Stochastic differential equations may be regarded as more adequate models for the development of a population, since random events affect population dynamics.
In our paper, we focus on both deterministic and stochastic models. Biological populations exhibit some form of stochastic behavior, and environmental noise should thus be an integral component of any dynamic population model [25]. Population ecology deals with demographic and environmental stochasticity. In this work, we consider environmental stochasticity.
Several factors affect the environment in which a population resides [42]. To model environmental effects, one possibility is to explicitly include additional variables, for example chemical agents, food supply, rainfall, and average temperature, in differential equations (1.3) and (1.4). On the other hand, population systems are often subject to environmental noise. Thus it is important to reveal how the noise affects the population systems.
According to Equation (12.20) in [25], a stochastic fishing model is given by a stochastic differential equation (SDE)

dX_t = [H(X_t) X_t − λ X_t] dt + ε X_t dB_t,   (1.5)

where H(X_t) is the natural growth rate of the harvested population and λ, ε are constants. The drift coefficient and diffusion coefficient of this SDE are f(X_t) = H(X_t)X_t − λX_t and g(X_t) = ε²X_t², respectively. This stochastic differential equation has a unique solution [25], and the solution is a homogeneous diffusion process. Here we take

dX_t = [r X_t (1 − X_t/K) − λ X_t] dt + ε X_t dB_t   (1.6)

and

dX_t = [r X_t (X_t/S − 1)(1 − X_t/K) − λ X_t] dt + ε X_t dB_t,   (1.7)

where B_t is a one-dimensional Brownian motion and ε is the Gaussian noise intensity with 0 < ε < 1. The objective of this work is to investigate the behavior of the logistic-harvest model, with and without the Allee effect, driven by multiplicative Gaussian noise. In other words, we combine the theory of population biology with that of stochastic differential equations. According to Drake and Lodge [17], the three statistics most commonly used to evaluate stochastic population models are the extinction probability, the first passage probability, and the mean time to extinction. In our study, we focus on the extinction probability.
In this paper, we first review the deterministic logistic-harvest model with and without the Allee effect, and then investigate their stochastic counterparts. We further discuss the extinction probability of the stochastic models. To gain some insight into the logistic-harvest mechanism, and consequently into the underlying biological phenomenon, we apply the Euler-Maruyama scheme to approximate sample solution paths of the stochastic logistic-harvest model. Finally, we present a short discussion comparing the deterministic and stochastic models as the parameters x_0, λ and ε vary.
This paper is arranged as follows. After recalling basic facts about Brownian motion and stochastic differential equations in section 2, we review and discuss the behavior of the equilibrium solutions of the deterministic logistic-harvest model without the Allee effect (1.3) and analyze its corresponding stochastic model (section 3). We derive the exact solution of model (1.6) and explain the effect of the harvest rate λ, the noise intensity ε and the initial value x_0 on the stationary density function of the Fokker-Planck equation for the SDE in (1.6). In section 4, we review the deterministic logistic-harvest model with Allee effect (1.4) and discuss the effect of the harvest rate λ, noise intensity ε, and initial value x_0 on the stationary density function of the Fokker-Planck equation for the SDE in (1.7). The Euler-Maruyama approximation is then used to approximate the solution of the stochastic model. In section 5, we summarize numerical experiments to reveal the sample path behaviors of the deterministic and stochastic models. Finally, in section 6, we present a short conclusion about our findings.
Preliminaries
In this section, we recall some basic facts about Brownian motion and stochastic differential equations. Assume (Ω, F, {F_t}_{t>0}, P) is a complete probability space with a filtration {F_t}_{t>0} satisfying the usual conditions, i.e. {F_t}_{t>0} is increasing and right-continuous while F_0 contains all P-null sets. Brownian motion B_t is an abstraction of the random walk process [20], defined on the filtered probability space (Ω, F, {F_t}_{t>0}, P), which satisfies the following properties: • Stationary, normal increments: B_t − B_s, for s < t, is normally distributed with mean zero and variance t − s. • Independent increments: B_t − B_s, for s < t, is independent of the past. • Continuity of paths: B_t is a continuous function of t, almost surely.
• The process starts at the origin: B_0 = 0, almost surely. • Brownian motion is nowhere differentiable, almost surely.
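The increment properties above can be checked empirically; the following sketch (not from the paper; t = 2, s = 0.5 and the sample size are illustrative choices) samples many increments B_t − B_s and verifies their first two moments:

```python
import math, random

random.seed(0)
t, s, N = 2.0, 0.5, 200000
# An increment B_t - B_s is N(0, t - s), independent of the past; sample
# many such increments and check the first two moments empirically.
incs = [random.gauss(0.0, math.sqrt(t - s)) for _ in range(N)]
mean = sum(incs) / N
var = sum((x - mean) ** 2 for x in incs) / N

assert abs(mean) < 0.02            # E[B_t - B_s] = 0
assert abs(var - (t - s)) < 0.05   # Var[B_t - B_s] = t - s
```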
Stochastic differential equations [6] are often used in modeling biological phenomena by taking intrinsic random effects into account. Intrinsically forced SDE models are considered in population dynamics, epidemics, genetics, and oncogenesis. Consider a stochastic differential equation driven by Gaussian noise,

dX_t = f(X_t) dt + g(X_t) dB_t.   (2.1)

If both the drift f and the noise intensity g satisfy a local Lipschitz condition and a growth condition (or an a priori estimate on the solution holds), then the stochastic differential equation (2.1) has a unique continuous solution X_t on t ∈ (0, ∞) [1, 7, 16, 21].
Deterministic logistic-harvest model without Allee effect
Consider a population X_t with dynamics according to the logistic growth model without Allee effect. The idea is to guarantee a maximum stable yield in a resource population harvested at rate λX_t members per unit time. The harvested population model (1.3) can be written as

dX_t/dt = F(X_t) = r X_t (1 − X_t/K) − λ X_t.   (3.1)

The equilibria, or constant solutions, of (3.1) are the trivial equilibrium X_u = 0 and the non-trivial equilibrium X_s = K(1 − λ/r) =: K_1. If the harvesting effort is very large (λ ≥ r), the population dies out, and X_u = 0 is the only realistic steady state, which is stable. The non-trivial equilibrium X_s is the asymptotic growth value of the harvested population model. Since K_1 < K for 0 < λ < r, the asymptotic value of the harvested population is lower than that of the non-harvested population (see Fig. 1).
The function F(x) in model (3.1) is autonomous, because it is independent of t, and continuously differentiable (of class C¹). Thus the initial value problem has a unique solution, and its non-trivial solution is [19]

X_t = K_1 / (1 + A e^{−r_1 t}),  with r_1 = r − λ and A = K_1/x_0 − 1.   (3.2)

The non-trivial solution (3.2) goes to the asymptotic value K_1 as time goes to infinity, i.e., lim_{t→∞} X_t = K_1 for any x_0 > 0. Hence X_u = 0 is unstable, because a small perturbation increasing X makes dX_t/dt > 0, which further increases X_t, and the population rises towards K_1, which is asymptotically stable. When x_0 > K_1, dX_t/dt < 0 and the population declines towards K_1. The function F(x) has maximum value r_1 K_1/4, obtained by substituting X = K_1/2 into Eq. (3.1). The deterministic model (3.1) can be written as dX_t/dt = −V′(X_t), where V is the potential function defined by V(x) = −(r − λ)x²/2 + r x³/(3K). The potential function has a local minimum corresponding to the stable equilibrium X_s and a local maximum at the unstable equilibrium X_u = 0. The system has only one stable equilibrium, so it is called monostable. For r − λ > 0, the population converges to the stable equilibrium X_s = K(1 − λ/r), and the yield at the stable equilibrium, called the sustainable yield, is λX_s = λK(1 − λ/r). From this we can calculate that the fishing effort maximizing the yield is λ_MSY = r/2; the corresponding maximum sustainable yield (MSY) is rK/4, attained at the stable equilibrium K_max = K/2. From Figure 1b, we can observe that when λ = 0 and x_0 < K/2, the phase point moves faster and faster until it reaches K/2, where dX/dt attains its maximum value rK/4, while the phase point approaches the carrying capacity K if K/2 < x_0 < K or x_0 > K.
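The maximum-sustainable-yield computation above can be checked with a coarse grid search; this is an illustrative sketch (r = 1, K = 3 are example values matching Section 5), not code from the paper:

```python
r, K = 1.0, 3.0

# Sustainable yield λ K (1 - λ/r); a coarse grid search over the harvest
# rate should recover λ_MSY = r/2 and the maximum yield rK/4.
lams = [i / 10000.0 for i in range(1, 10000)]        # 0 < λ < r
yields = [lam * K * (1.0 - lam / r) for lam in lams]
best = max(range(len(lams)), key=lambda i: yields[i])

assert abs(lams[best] - r / 2) < 1e-3     # effort at maximum yield
assert abs(yields[best] - r * K / 4) < 1e-6   # maximum sustainable yield
```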
As λ increases, the population size X_t goes to zero, and X_t has an S-shape when the initial value is below K_1/2. When λ ≠ 0 and the initial value is below half of the asymptotic value (K_1/2), the phase point moves faster and faster until it reaches K_1/2, where dX/dt attains its maximum value r_1K_1/4. From a biological point of view, this tells us that the population initially grows faster and faster [5] and the graph of X_t is concave up. But dX/dt starts to decrease once the solution passes half of the carrying capacity K or half of the asymptotic value K_1; in this case X_t is concave down. For initial values below half of the carrying capacity (K/2) or half of the asymptotic value (K_1/2), X_t is S-shaped (see Figure 1b).
Stochastic logistic-harvest model without Allee effect
We will consider the stochastic perturbation of the logistic-harvest model without Allee effect (1.6):

dX_t = [r X_t (1 − X_t/K) − λ X_t] dt + ε X_t dB_t.   (3.4)
Eq. (3.4) can be transformed into the form of the SDE as in our previous paper [39] and rewritten as

dX_t = (r − λ) X_t (1 − X_t/K_1) dt + ε X_t dB_t.   (3.5)

Since this model has four parameters, we non-dimensionalize by rescaling the population size (variable) and time, so that the new model (SDE) has fewer parameters; studying the qualitative behaviour of an SDE with many parameters is difficult. Define Y_τ = X_t/K_1 and τ = (r − λ)t. The new model becomes

dY_τ = Y_τ (1 − Y_τ) dτ + γ Y_τ dB_τ,  γ = ε/√(r − λ),   (3.6)

where ε is a positive constant representing random growth effects (0 < ε < 1) and r > λ. B_τ is a Brownian motion, which has independent and stationary increments with stochastically continuous sample paths.
The solution of the model (3.6) is a homogeneous diffusion process with drift coefficient μ(τ, y) = y(1 − y) and diffusion term υ(τ, y) = (ε²/(r − λ)) y² = γ² y². Finding the exact solution of the nonlinear SDE (3.6) is similar to [[41], Section 9.3]. Set a new variable Z = 1/Y and apply the Itô formula [11, 41]; the goal is to reduce the nonlinear SDE in Y to a linear SDE in Z, which we are then able to solve. Thus we get the new linear SDE

dZ_τ = [(γ² − 1) Z_τ + 1] dτ − γ Z_τ dB_τ.   (3.7)

Solving this linear SDE as in [36] and using Y = 1/Z, we obtain the unique, strong solution of equation (3.6),

Y_τ = exp((1 − γ²/2)τ + γ B_τ) / [ 1/y_0 + ∫_0^τ exp((1 − γ²/2)s + γ B_s) ds ].   (3.8)

From equation (3.8), we observe that the solution exists for all τ > 0 and, if y_0 > 0, then Y_τ > 0 a.s. If γ² > 2, the exponent (1 − γ²/2)τ + γ B_τ goes to −∞ as τ → ∞: by the strong law of large numbers, B_τ/τ → 0 as τ → ∞, and from this we have Y_τ → 0 as τ → ∞.
When the Gaussian noise intensity ε is small, the solution in (3.8) approaches the solution of the corresponding deterministic model (the rescaled form of (1.3)); for γ = 0, (3.8) reduces to Y_τ = e^τ/(1/y_0 + e^τ − 1). The Euler-Maruyama method was implemented [10] in order to approximate sample solution paths of the stochastic model. Some sample solution paths are plotted in Figure 2; we observe that the sample solution paths are positive.
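Setting γ = 0 in (3.8) gives Y_τ = e^τ/(1/y_0 + e^τ − 1), since ∫_0^τ e^s ds = e^τ − 1. As a sanity check (not from the paper; y_0 = 0.2 is an illustrative value), this noise-free reduction should solve the rescaled logistic equation:

```python
import math

def exact_no_noise(tau, y0):
    """Solution (3.8) with γ = 0: Y = e^τ / (1/y0 + e^τ - 1)."""
    return math.exp(tau) / (1.0 / y0 + math.exp(tau) - 1.0)

# It satisfies the rescaled logistic equation dY/dτ = Y(1 - Y) ...
h, tau, y0 = 1e-6, 1.3, 0.2
d = (exact_no_noise(tau + h, y0) - exact_no_noise(tau - h, y0)) / (2 * h)
y = exact_no_noise(tau, y0)
assert abs(d - y * (1.0 - y)) < 1e-5

# ... starts at y0 and tends to the rescaled carrying capacity 1.
assert abs(exact_no_noise(0.0, y0) - y0) < 1e-12
assert abs(exact_no_noise(40.0, y0) - 1.0) < 1e-9
```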
Extinction probability
This subsection deals with the transition density function p(y, τ) of the process Y = {Y_τ, τ > 0}, which satisfies the following theorem. The stationary density gives important long-time information about the probabilistic behaviour of the solution of a given SDE.

Theorem 1 ([36, Theorem 5.4]). The probability density p(y, τ) of the solution of the SDE in (3.6) solves the partial differential equation

∂p/∂τ = −∂/∂y [ y(1 − y) p ] + (γ²/2) ∂²/∂y² [ y² p ].
When γ² ≥ 2, the diffusion process (SDE) in (3.6) has no stationary density; the population becomes extinct. But we have a noise-induced transition for 0 < γ < √2, in which case extinction cannot occur (Figure 3). In fact, the stationary density is

p(y) = C y^{2/γ² − 2} e^{−2y/γ²},  y > 0,

where C is a normalizing constant. The next step is to find the maximum point y_max of p(y). Setting p′(y) = 0 [41], we easily find y_max = 1 − γ². When γ becomes small, y_max approaches 1; in this case the stationary density becomes more peaked.
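The mode formula y_max = 1 − γ² can be checked numerically against the stationary density; the following sketch (not from the paper; γ = 0.5 is an illustrative value in the range 0 < γ < √2) locates the maximum by grid search:

```python
import math

def p_unnormalized(y, gamma):
    """Stationary density of dY = Y(1 - Y)dτ + γY dB, up to a constant:
       p(y) ∝ y^(2/γ² - 2) e^(-2y/γ²), valid for 0 < γ < √2."""
    return y ** (2.0 / gamma**2 - 2.0) * math.exp(-2.0 * y / gamma**2)

gamma = 0.5
ys = [i / 10000.0 for i in range(1, 20000)]
y_star = max(ys, key=lambda y: p_unnormalized(y, gamma))

assert abs(y_star - (1.0 - gamma**2)) < 1e-3   # mode at y_max = 1 - γ²
```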
Deterministic logistic-harvest model with Allee effect
Now let us nondimensionalize the Allee effect model (1.4), rescaling variables so that the rescaled model has fewer parameters. We rescale the population size X_t by expressing it relative to the carrying capacity K (scaling by S would work as well). Setting Y_t = X_t/K and β = K/S, the new differential equation takes the form

dY_t/dt = r Y_t (β Y_t − 1)(1 − Y_t) − λ Y_t,  Y_0 = y_0 = x_0/K.   (4.1)

Our new model has just three parameters, which makes the bifurcation analysis, computation of equilibria, etc. more transparent.
It is clear that if λ = 0, then the logistic-harvesting model with Allee effect in Eq. (4.1) reduces to the dimensionless form of the Allee model (1.2) (see Fig. 4c).
The non-trivial equilibrium point Y_3 is the asymptotic growth value of the harvested model. Since Y_3 < 1 for λ < m_1, the asymptotic value of the harvested fish population is lower than that of the non-harvested fish population (see Fig. 4d).
In Figure 4b, we plot the graph of the potential function V(x) for λ = 0.15, defined by

V(x) = −∫_0^x [ r s (s/S − 1)(1 − s/K) − λ s ] ds.

In terms of V(x), Eq. (1.4) can be written as dX_t/dt = −V′(X_t). The phase line diagram for Eq. (1.4) is shown in Figure 4a. Denoting the stable equilibrium point by Y_3 and the unstable equilibrium point by Y_2, the separation between the two equilibrium points is Y_3 − Y_2.
If λ < m_1, Figure 4b shows that the potential function V(x) has two local minima, corresponding to the stable equilibria Y_1 and Y_3, and one local maximum at Y_2, which is an unstable equilibrium. The function V(x) is called a double-well potential, because the two stable equilibria Y_1 and Y_3 are separated by the unstable equilibrium Y_2.
From the biological point of view, it is meaningful to choose β > 1 and 0 < λ < r[(β + 1)²/(4β) − 1] (i.e., 0 < λ < m_1). The state Y_1 corresponds to the population-free state, the state of population extinction, in which no population is present. The state Y_3 is the state of a stable population, where the population density does not increase but stays at a constant level.
The number of equilibrium points depends on the sign of m = (β + 1)² − 4β(1 + λ/r). When m < 0, i.e. λ > m_1, the population goes extinct as t → ∞. As far as biological meaning is concerned, we must harvest at a rate λ < r[(β + 1)²/(4β) − 1]. In this case the model in (4.1) has two non-trivial equilibria, one stable (Y_3) and one unstable (Y_2), with Y_2 < Y_3. Figure 4c shows phase line plots for Eq. (4.1), dY_t/dt = rY_t(βY_t − 1)(1 − Y_t) − λY_t, for increasing λ. If λ is less than the critical value m_1, there are two stable equilibrium solutions and one unstable equilibrium solution; as λ increases beyond m_1, there is one stable equilibrium solution.
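The non-trivial equilibria of (4.1) are the roots of the quadratic rβy² − r(β+1)y + (r+λ) = 0, whose discriminant vanishes exactly at λ = m_1. The following sketch (not from the paper; r = 1, β = 3 are illustrative values consistent with r = 1, S = 1, K = 3 in Section 5) checks the bifurcation picture:

```python
import math

def equilibria(r, beta, lam):
    """Non-trivial equilibria of dY/dt = rY(βY - 1)(1 - Y) - λY,
    i.e. the roots of rβ y² - r(β+1) y + (r + λ) = 0."""
    a, b, c = r * beta, -r * (beta + 1), r + lam
    disc = b * b - 4 * a * c
    if disc < 0:
        return []   # λ above the saddle-node value m1: no equilibria, extinction
    s = math.sqrt(disc)
    return sorted([(-b - s) / (2 * a), (-b + s) / (2 * a)])

r, beta = 1.0, 3.0
m1 = r * ((beta + 1) ** 2 / (4 * beta) - 1)   # saddle-node bifurcation point

Y2, Y3 = equilibria(r, beta, 0.5 * m1)
assert Y2 < Y3                                # unstable threshold below stable state
assert equilibria(r, beta, 1.5 * m1) == []    # overharvesting: extinction

# Near λ = m1 the two equilibria approach each other, meeting at the vertex
# (β+1)/(2β) of the parabola (X = (S+K)/2 in the original units).
Y2c, Y3c = equilibria(r, beta, 0.999 * m1)
assert Y3c - Y2c < 0.05
assert abs((Y2c + Y3c) / 2 - (beta + 1) / (2 * beta)) < 1e-9
```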
Using r = 0.1 and β = 100, Figure 5 shows plots of the harvest yield λY_3(λ) versus effort λ and of the separation Y_3(λ) − Y_2(λ) versus λ. Note that from Figure 5 we get maximum yield when λ = 2.18. For this value of λ, the separation between the two equilibrium solutions is Y_3(λ) − Y_2(λ) = 0.3283. If there is noise in the system, we have to expect that the time to extinction is not very long. This is a problem: while we want to maximize harvest yield, we do not want the stable and unstable equilibrium points to be close together, because the expected time to extinction may then be too short. Harvesting to maximize yield while driving the population to extinction is not a good harvesting strategy. It would be interesting to think about rational harvesting strategies that do not put the population in danger of extinction.
Stochastic logistic-harvest model with Allee effect
We consider the dimensionless stochastic perturbation of the logistic-harvest model with Allee effect (1.7), obtained by setting the new variable Y_t = X_t/K:
dY_t = [r Y_t (β Y_t − 1)(1 − Y_t) − λ Y_t] dt + ε Y_t dB_t,   (4.3)

where β = K/S > 1, K is the carrying capacity and S is the Allee parameter with 0 < S < K. B_t is a Brownian motion with stochastically continuous sample paths, as well as independent and stationary increments. The stochastic perturbation of the logistic-harvest model with Allee effect is discussed in [1, 7, 16, 21]. The non-trivial solution of the SDE in (4.3) can be found as follows [34, 27, 9]. Having in mind that S < X(0) < K (so that Y(0) = X(0)/K ∈ (S/K, 1)), define the C²-function Z_t : R_+ → R_+ as Z_t = log(Y_t); applying the Itô formula to Z_t, the system in (4.3) is converted to an SDE with additive noise (removing any state- or level-dependent noise from these trajectories):

dZ_t = [ r (β e^{Z_t} − 1)(1 − e^{Z_t}) − λ − ε²/2 ] dt + ε dB_t.   (4.4)

This shows that the equilibrium point of the deterministic part of the additive-noise system in (4.4) is affected by the Gaussian noise intensity ε. Now we show that Y_t solves the SDE in (4.3): since Y_t = e^{Z_t}, applying the Itô formula to e^{Z_t} recovers (4.3). This solution is strong, continuous and positive for S < X(0) and 0 < S < K. Numerical simulations (solution paths) of the stochastic differential equation in (4.3) are shown in Figure 6 for various initial values, computed with the Euler-Maruyama method. As we can see in Figure 6, the sample paths are positive and approach the carrying capacity Y_3 when 0 < λ < 1/3, while the population goes to extinction when λ ≥ 1/3. From Figure 6c we observe that all trajectories, except x = Y_2 (the unstable equilibrium point), fall into a potential pit at the stable equilibrium points x = 0 and x = Y_3.
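The bistable behaviour described above can be reproduced with a minimal Euler-Maruyama sketch for the SDE (4.3); this is not code from the paper, and the parameter values r = 1, β = 3, λ = 0.1 (so λ < m_1 = 1/3), T and step count are illustrative assumptions:

```python
import math, random

def em_allee(y0, r, beta, lam, eps, T=200.0, n=20000, seed=1):
    """Euler-Maruyama for dY = [rY(βY - 1)(1 - Y) - λY]dt + εY dB."""
    random.seed(seed)
    dt = T / n
    y = y0
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))
        y += (r * y * (beta * y - 1.0) * (1.0 - y) - lam * y) * dt + eps * y * dB
        y = max(y, 0.0)   # 0 is absorbing: truncate at the extinction state
    return y

r, beta, lam = 1.0, 3.0, 0.1
# Non-trivial equilibria Y2 < Y3: roots of rβ y² - r(β+1) y + (r+λ) = 0.
disc = math.sqrt((r * (beta + 1)) ** 2 - 4.0 * r * beta * (r + lam))
Y2 = (r * (beta + 1) - disc) / (2.0 * r * beta)
Y3 = (r * (beta + 1) + disc) / (2.0 * r * beta)

# With ε = 0 the scheme is Euler's method for the ODE: a path started above
# the Allee threshold Y2 settles at the stable state Y3 ...
assert abs(em_allee(0.9, r, beta, lam, eps=0.0) - Y3) < 1e-3
# ... while a path started below Y2 goes extinct.
assert em_allee(0.5 * Y2, r, beta, lam, eps=0.0) < 1e-3
```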
The Euler-Maruyama approximation was used to approximate the solution of the stochastic model; different values of the harvest rate λ in the drift coefficient were applied.
Next we prove that the sample paths X_t of the SDE (1.7) are uniformly continuous for a.e. t ≥ 0. To show this, consider the integral form

X_t = X_0 + ∫_0^t f(X_s) ds + ∫_0^t g(X_s) dB_s,   (4.5)

where f(X_s) = rX_s(X_s/S − 1)(1 − X_s/K) − λX_s, g(X_s) = εX_s and 0 < S < X(0) < K. Suppose 0 < a < b < ∞, b − a ≤ 1, and p > 2. By applying the well-known Hölder inequality and the moment inequality for Itô integrals to (4.5), we obtain an estimate of the form

E|X_b − X_a|^p ≤ C (b − a)^{p/2},   (4.6)

with a constant C depending on p and the coefficient bounds. According to the Kolmogorov-Centsov theorem on the continuity of a stochastic process [27], almost every sample path of X_t is locally but uniformly Hölder-continuous with exponent 0 < γ < (p − 2)/(2p). Therefore the SDE in (1.7) has uniformly continuous solutions on t ≥ 0. All solutions of this model go to zero as t → ∞. Since Y_t = X_t/K, the SDE in (4.3) also has uniformly continuous solutions on t ≥ 0, and its solutions likewise approach zero as t → ∞.
Extinction probability and first passage probability
This subsection concerns the probability that the population goes extinct, and the probability of reaching a large population size L before reaching a small one. Trajectories that start in the potential well on the right will eventually jump into the potential well on the left, even though this may take a very long time. Once there, they rapidly move to the region around x = 0 near the bottom of that well, and then exit at zero with probability one. To see this, note that for small values of Y_t, Eq. (4.3) can be approximated by the linear SDE

dY_t = −(r + λ) Y_t dt + ε Y_t dB_t.   (4.7)

Then the boundary value problem for the probability P(y) of exit at 0 before exit at L, with boundary conditions P(0) = 1 and P(L) = 0, has the solution

P(y) = 1 − y^{1+(r+λ)/ε²} / L^{1+(r+λ)/ε²}.

Note that P(y) → 1 as L → ∞, even though we are using the small-y approximation for values of y that are not small. We would obtain the same result even if we solved the exit-probability problem corresponding to Eq. (4.3).
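The stated exit-probability formula can be checked at its boundary and limiting values; the following sketch takes the formula as given in the text, with r = 1, λ = 0.2, ε = 0.3 as illustrative values only:

```python
def exit_prob(y, L, r, lam, eps):
    """P(hit 0 before L) for the small-population approximation (4.7),
       as given in the text: P(y) = 1 - (y/L)^(1 + (r+λ)/ε²)."""
    a = 1.0 + (r + lam) / eps**2
    return 1.0 - (y / L) ** a

r, lam, eps = 1.0, 0.2, 0.3
assert abs(exit_prob(5.0, 5.0, r, lam, eps)) < 1e-12   # P(L) = 0
assert exit_prob(1e-6, 5.0, r, lam, eps) > 1 - 1e-6    # P(0+) ≈ 1
assert exit_prob(2.0, 1e9, r, lam, eps) > 1 - 1e-6     # P → 1 as L → ∞
```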
According to Theorem 1, the Fokker-Planck equation corresponding to Eq. (4.3) is

∂p/∂t = −∂/∂y { [ r y (β y − 1)(1 − y) − λ y ] p } + (ε²/2) ∂²/∂y² [ y² p ].

This equation has no non-trivial stationary density; in other words, all populations eventually become extinct. However, for reasonable, or realistic, values of the parameters, if y_0 is in the potential well on the right in Figure 4b, it will take a very long time before the trajectory jumps across the potential barrier into the potential well on the left. In this case it makes sense to look at a quasi-stationary density [41], say q(y), obtained by solving the corresponding stationary Fokker-Planck equation.
Numerical experiments
We summarize our numerical findings about the impact of the parameters x_0, λ and ε on the solutions of the deterministic and stochastic logistic-harvest models with and without Allee effect.
Here we apply the Euler-Maruyama (EM) method, following [8], to Eq. (3.6). To apply this method to the SDE (3.6) over the time interval [0, T], we first discretize the interval: for a positive integer n, let Δt = T/n and s_j = jΔt for j = 1, 2, ..., n. The numerical approximation to the solution X(s_j) is denoted by X_j. As in [8], the EM method has the form

X_{j+1} = X_j + X_j (1 − X_j) Δt + γ X_j (B_{s_{j+1}} − B_{s_j}),  j = 0, 1, ..., n − 1.
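The scheme above can be sketched as follows; this is a minimal illustration (step sizes, time horizon, seed and γ values are assumptions for the example), not the authors' code:

```python
import math, random

def em_logistic_harvest(y0, gamma, T=10.0, n=10000, seed=42):
    """Euler-Maruyama for the rescaled SDE (3.6):
       dY = Y(1 - Y) dτ + γ Y dB,  γ = ε / sqrt(r - λ)."""
    random.seed(seed)
    dt = T / n
    y = y0
    path = [y]
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        y += y * (1.0 - y) * dt + gamma * y * dB
        y = max(y, 0.0)                          # keep the path nonnegative
        path.append(y)
    return path

# With γ = 0 the scheme reduces to Euler's method for the logistic ODE
# and the path approaches the rescaled carrying capacity 1.
assert abs(em_logistic_harvest(0.1, gamma=0.0, T=30.0)[-1] - 1.0) < 1e-6

# With noise, a path started positive stays nonnegative.
path = em_logistic_harvest(0.1, gamma=0.3)
assert min(path) >= 0.0
```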
Numerical results and biological implications of logistic-harvest model without Allee effect
The phase line and trajectories of dX_t/dt = rX_t(1 − X_t/K) − λX_t are plotted in Figure 1, with parameters r = 1, K = 3 and 0 ≤ λ ≤ r. In Fig. 1a, when the harvesting effort is sufficiently large (overfishing), population extinction occurs; the value of X_u becomes small as λ increases. Fig. 1b shows the solution of model (1.6) for different values of λ and x_0. In this figure, we observe that as λ increases, the population size X_t decreases, i.e. X_t goes to zero, and X_t has an S-shape when x_0 < K_1/2; while for K_1/2 < x_0 < K_1 and x_0 > K_1, the population size approaches K_1 as t → ∞. When λ ≠ 0 and x_0 < K_1/2, the phase point moves faster and faster until it reaches K_1/2, where dX/dt attains its maximum value r_1K_1/4; while if K_1/2 < x_0 < K_1 or x_0 > K_1, the phase point moves towards K_1. The biological implication is that the population initially grows faster and faster [5] and the graph of X_t is concave up, but dX/dt starts to decrease once the solution exceeds K/2 or K_1/2, in which case the shape of X_t is concave down. Figure 2 shows the numerical simulation of the model dX_t = [rX_t(1 − X_t/K) − λX_t]dt + εX_t dB_t with fixed parameters r = 1, K = 3. Fig. 2a plots the case λ = 0.2, ε = 0.0 (no noise) with varying initial value x_0: when x_0 ∈ (0, K) or x_0 > K, the population approaches its maximum size. The behaviour of the solutions of the deterministic and stochastic models is very similar; in other words, for any positive initial value x_0, X_t goes to K_1 as t → ∞.
The analysis of the stationary density of model (1.6), which varies under a proportional increase in the Gaussian noise ǫ and the harvest rate λ, is shown in Figure 3 for 0 < ǫ² < 2(r − λ) and r = 1. In this case we have a noise-induced transition. In Fig. 3a the value of λ is 0.75 and ǫ = 0.125, 0.25, 0.375, 0.5. This tells us that a proportional increase in the linear multiplicative noise can qualitatively change the behaviour of the system. As ǫ becomes smaller and smaller, the stationary density p(y) becomes more and more sharply peaked, and the maximum point y_max of p(y) tends to one. From Fig. 3b we can see that the qualitative probabilistic behaviour of the stationary densities is similar to Fig. 3a, here with fixed ǫ = 0.2 but for different values of the harvest rate λ (λ = 0.36, 0.84, 0.93, 0.96). Here also, for small values of ǫ and λ (as both go to zero), the stationary density p(y) becomes more sharply peaked and its maximum point tends to y_max = 1. If ǫ² ≥ 2(r − λ), the population dynamic system (1.6) has no stationary density; the solutions then converge to zero (extinction). From this graph we see that the locations of the extrema of the stationary density depend on the noise intensity ǫ and the harvest rate λ.
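The behaviour of y_max can be made explicit. Under the standard Fokker-Planck computation (our sketch, not taken from the paper), the stationary density of dX = [rX(1 − X/K) − λX]dt + ǫX dB is the Gamma-type density p(x) ∝ x^(2(r−λ)/ǫ² − 2) exp(−2rx/(Kǫ²)), which exists only for ǫ² < 2(r − λ). The helper below (names are ours) returns its mode rescaled by K_1 = K(r − λ)/r:

```python
def stationary_mode(r=1.0, K=3.0, lam=0.2, eps=0.1):
    """Rescaled mode of the stationary density of
    dX = [rX(1 - X/K) - lam*X] dt + eps*X dB.

    The stationary Fokker-Planck solution is the Gamma-type density
    p(x) ~ x**(2*(r - lam)/eps**2 - 2) * exp(-2*r*x/(K*eps**2)),
    which exists only when eps**2 < 2*(r - lam).
    """
    a = r - lam                          # effective growth rate
    if eps ** 2 >= 2 * a:
        return None                      # no stationary density: extinction
    if eps ** 2 >= a:
        return 0.0                       # density peaks at the origin
    k1 = K * a / r                       # deterministic equilibrium K_1
    return (K * (a - eps ** 2) / r) / k1  # mode rescaled by K_1
```

The rescaled mode equals (r − λ − ǫ²)/(r − λ), so it tends to 1 as ǫ → 0, matching the behaviour of y_max described above.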
Numerical results and biological implications of logistic-harvest model with Allee effect
We fixed the parameter values r = 1, S = 1, K = 3. Fig. 4a shows that the deterministic model (1.4) has three equilibrium solutions, at X_t = 0 (stable), X_t = Y_2 (unstable) and X_t = Y_3 (stable). In Fig. 4b, the area below Y_m is an absorbing zone. For fixed λ = 0.2, Fig. 4c shows that model (1.3) is always positive, while model (1.4) is negative when the population X_t < S. The phase line diagram of model (1.4) is plotted in Fig. 4d for varying λ (0 < λ < m_1). We observe that the value of Y_3 is smaller than K, while Y_2 > S. Figure 5 plots the harvest yield λY_3(λ) versus the effort λ, and the separation Y_3(λ) − Y_2(λ) versus λ. From this figure, we obtain the maximum yield when λ = 2.18, with r = 0.1 and β = 100. For this value of λ, the separation between the two equilibrium solutions is Y_3(λ) − Y_2(λ) = 0.3283.
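For these parameter values the equilibria Y_2 and Y_3 admit a closed form, assuming model (1.4) has the standard multiplicative Allee form dX/dt = rX(1 − X/K)(X/S − 1) − λX (this form, with β = K/S, is consistent with the threshold m_1 quoted in the Conclusion). Setting the per-capita growth equal to λ gives the quadratic X² − (K + S)X + KS(1 + λ/r) = 0:

```python
import math

def allee_equilibria(r=1.0, K=3.0, S=1.0, lam=0.2):
    """Nonzero equilibria Y_2 < Y_3 of dX/dt = rX(1 - X/K)(X/S - 1) - lam*X.

    Setting the per-capita growth equal to lam gives
    X**2 - (K + S)*X + K*S*(1 + lam/r) = 0.
    """
    disc = (K + S) ** 2 - 4 * K * S * (1 + lam / r)
    if disc < 0:
        return None                    # lam above m_1: no positive equilibria
    root = math.sqrt(disc)
    y2 = ((K + S) - root) / 2          # unstable equilibrium (Allee threshold)
    y3 = ((K + S) + root) / 2          # stable equilibrium
    return y2, y3

def m1(r, K, S):
    """Critical harvest rate m_1 = r*((1 + beta)**2/(4*beta) - 1), beta = K/S."""
    beta = K / S
    return r * ((1 + beta) ** 2 / (4 * beta) - 1)
```

For r = 1, S = 1, K = 3 this gives m_1 = 1/3, Y_2 ≈ 1.368 and Y_3 ≈ 2.632, consistent with the ordering S < Y_2 < Y_3 < K observed in Fig. 4d.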
The numerical simulation of the stochastic logistic-harvest model with Allee effect is given in Figure 6. Fig. 6a shows the solution of (4.3) with fixed harvest rate λ = 0.2, ǫ = 0 (no noise) and varying initial value x_0. Fig. 6b plots the numerical solution of (4.3) with noise (ǫ = 0.02), λ = 0.2 and varying x_0. It is clearly seen that population extinction occurs when x_0 < S, while for x_0 > S and x_0 > K the population size approaches its maximum size K.
The graph in Figure 7 presents the quasi-stationary density of model (4.3). Here p(y) → 1 as L → ∞. In other words, the probability of reaching 0 (lower size) before reaching L (maximum size), considered as a function of the initial population size, p(y) [2], has an inflection point at the deterministic unstable equilibrium Y_2. In this figure the inflection point is Y_m.
Conclusion
We have studied the logistic-harvest model with and without Allee effect, driven by multiplicative Gaussian noise. For the stochastic logistic-harvest model without Allee effect we obtained an exact solution, while for the stochastic logistic-harvest model with Allee effect we proved the stability of the solution process. We analysed the stationary density and the probability of reaching a large population size before reaching a small one for the stochastic models (1.6) and (1.7).
Our numerical experiments demonstrated that stochastic models of population growth behave differently from deterministic ones. The main result of our study is that the stochastic model under Gaussian noise perturbation is asymptotically stable, which matches an important result of fishery theory.
In the case of the logistic-harvest model without Allee effect, when the harvesting rate λ is less than the growth rate r, there exist two equilibrium solutions, X_u = 0 and X_s, with X_s less than the carrying capacity K. However, if the harvesting rate λ is greater than r (overfishing), there is no positive fixed point and therefore no equilibrium solution.
In the case of the logistic-harvest model with Allee effect, if the harvesting rate λ equals the critical threshold m_1 = r((1 + β)²/(4β) − 1), we find that there exists only one nonzero equilibrium state, and this equilibrium population is less than the carrying capacity K. If the harvest rate λ is less than m_1, we have two positive equilibrium solutions; both the stable and unstable equilibrium solutions are lower than the carrying capacity K, and the unstable equilibrium Y_2 is less than the stable equilibrium Y_3. If the harvesting rate λ is greater than the critical threshold m_1, there is no positive fixed point and therefore no equilibrium solution.
As far as the biological meaning is concerned, one should harvest at a rate λ less than the growth rate r in the case of the logistic-harvest model (1.3), and less than the critical threshold m_1 in the case of the logistic-harvest model (1.4).
Effects of social disruption in elephants persist decades after culling
Background: Multi-level fission-fusion societies, characteristic of a number of large brained mammal species including some primates, cetaceans and elephants, are among the most complex and cognitively demanding animal social systems. Many free-ranging populations of these highly social mammals already face severe human disturbance, which is set to accelerate with projected anthropogenic environmental change. Despite this, our understanding of how such disruption affects core aspects of social functioning is still very limited.

Results: We now use novel playback experiments to assess decision-making abilities integral to operating successfully within complex societies, and provide the first systematic evidence that fundamental social skills may be significantly impaired by anthropogenic disruption. African elephants (Loxodonta africana) that had experienced separation from family members and translocation during culling operations decades previously performed poorly on systematic tests of their social knowledge, failing to distinguish between callers on the basis of social familiarity. Moreover, elephants from the disrupted population showed no evidence of discriminating between callers when age-related cues simulated individuals on an increasing scale of social dominance, in sharp contrast to the undisturbed population where this core social ability was well developed.

Conclusions: Key decision-making abilities that are fundamental to living in complex societies could be significantly altered in the long-term through exposure to severely disruptive events (e.g. culling and translocation). There is an assumption that wildlife responds to increasing pressure from human societies only in terms of demography, however our study demonstrates that the effects may be considerably more pervasive.
These findings highlight the potential long-term negative consequences of acute social disruption in cognitively advanced species that live in close-knit kin-based societies, and alter our perspective on the health and functioning of populations that have been subjected to anthropogenic disturbance.
Background
While we know that sociality evolves when the net benefits of close (often kin-based) associations with conspecifics outweigh the costs, there is still a lack of detailed information on how sociality translates into fitness consequences and the role of normative social structure in mediating these effects [1,2]. Nowhere is this issue more pertinent than in cognitively advanced social mammals such as some non-human primates, cetaceans and elephants which live in complex social systems where intricate social relationships develop over long lifespans and may involve cultural transmission of knowledge between generations [3][4][5]. Moreover, many free-ranging populations of these highly social mammals currently face extreme disturbance through human activities [6][7][8] that impacts directly on social structure, yet a proper understanding of how this "anthropogenic disruption" might affect core aspects of social functioning is lacking. Recent studies have started to highlight the significant long-term effects of disruptive events on physiological stress levels and broad behavioural patterns [9][10][11][12], but we still know very little of how fundamental skills of communication and cognitive abilities that are at the basis of such societies might be affected.
Anthropogenic disturbance of free-ranging populations can occur through processes such as illegal and legal hunting/culling, translocation and habitat fragmentation [7][8][9]13]. All of these are likely to be exacerbated further by increasing pressures on natural resources and climate change [14] and in extreme cases such impacts may result in significant loss of individuals. Disrupted populations typically experience two specific effects that are likely to impact on their social functioning -initial trauma that may accompany the disruptive event (which can involve survivors observing the killing of individuals around them) and the subsequent loss of opportunities for interacting with older group members that could act as appropriate role models or repositories of knowledge [3][4][5]15].
With regard to the first of these impacts, it is now becoming clear that, in animals as well as humans, social trauma experienced early in life may have very significant effects on physiological development and adult behaviour patterns [16][17][18]. For instance, in highly social and cognitively advanced species such as primates and elephants, where neurological development is strongly mediated by exposure to complex social information, a severely disruptive event can result in the expression of one or more non-normative behaviours during later life, including persistent fear, hyper-aggression and infant abandonment [19,20]. Dramatic consequences of social disruption have been documented in two protected areas in South Africa, where orphaned male elephants exhibited abnormal hyper-aggressive behaviour that resulted in the killing of 107 rhinoceroses over a period of 10 years [19,21,22]. Crucially, such traumatic events are also predicted to have more subtle effects on learning, in particular interfering with abilities to gauge appropriate responses to social and environmental stimuli [16][17][18].
The second major impact, namely a loss of opportunities for exposure to appropriate older role models, is likely to accompany any direct effects of social disruption on knowledge acquisition and decision-making. This is particularly relevant in long-lived and cognitively advanced species where older individuals play a key leadership role and co-ordinate decision-making in the context of social and ecological threats [3][4][5]. Where these experienced individuals are absent, younger group members may be presented with fewer opportunities to learn the most appropriate response in dangerous situations [3,4,23,24]. In addition, any abnormal behavioural patterns that have arisen from socially disruptive events have the potential to be passed between the generations and may persist in the long term.
By applying our previously successful playback techniques in two contrasting populations of African elephants we were able to assess directly effects of disruption on decision-making abilities integral to operating successfully within complex societies [3,4]. Our natural study population in Amboseli National Park, Kenya is relatively undisturbed in comparison with the population in Pilanesberg National Park, South Africa that was founded from young orphaned elephants introduced during the early 1980s and 1990s, following management culls of adult and older juvenile animals in the Kruger National Park [21,22,25]. These actions resulted in the young elephants being exposed to a significant traumatic event (the selective killing of all of their older family members followed by translocation to an unfamiliar environment), as well as the severe long-term damage to the core social unit -the family group -in this highly social species [16,19]. If social disruption impacts decision-making processes central to social functioning, we would predict deficits in abilities of the Pilanesberg elephants to respond appropriately to social threat.
Playback experiments
Family units in both populations were presented with two complementary experimental paradigms involving standardised playbacks of female contact calls broadcast from a fieldwork vehicle located 100 m from the subjects (detailed in Methods). In the first experiment, we compared social knowledge directly in the two populations on the basis of subjects' reactions to callers from three distinct social categories (high and low association index callers within the same population, constituting familiar versus unfamiliar associates, and alien callers from a separate population - Pilanesberg elephants in the case of Amboseli and vice versa: see Methods). The second experiment contrasted the responses of family groups in both populations to callers where age-related acoustic cues in re-synthesised calls simulated unknown individuals on an increasing scale of social dominance. Female elephants live in fission-fusion populations where social hierarchy is primarily based upon age, with older and larger individuals being more socially dominant than younger females, both within their respective groups [26] and during inter-group encounters [27,28]. The acoustic characteristics of five caller exemplars from each population (N = 10) were each systematically re-synthesised to simulate five different age classes of callers (15, 25, 35, 45 and 55 years), producing a set of 50 calls in total [see Methods & Additional file 1: Supplementary experimental procedures, and Additional file 2: Figure S1 & Table S1]. Amboseli elephants were only played caller exemplars from Pilanesberg (unknown individuals) and vice versa.
Four key behaviours (bunching, bunching intensity, prolonged listening and investigative smelling; see Methods for definitions) were used to test the responses of the elephant groups during the playback experiments. The reactions of all individuals within the family were recorded on video and systematically coded after the playback for analysis using generalised linear mixed models (GLMMs) in the R statistical program (Methods); results were confirmed with blind double coding by two independent observers (Methods). If subjects were able to discriminate effectively between callers in playbacks, we predicted that they should remain relatively relaxed when played calls that conveyed low levels of social threat (familiar or young individuals), and bunch into defensive formation and show heightened attentiveness when played calls representing high levels of social threat (unfamiliar or older individuals) [3,4]. The ability to make these important distinctions should allow individual matriarchs to direct the overall group response most appropriately, and with the lowest cost and risk in relation to the specific threat at hand.
Results
The first series of experiments demonstrated that elephants in the undisturbed Amboseli population distinguish between callers on the basis of their social category, focusing their defensive bunching on alien callers (GLMM analysis: Table 1A & Figure 1A). Our bunching intensity (Figure 1C) and prolonged listening measures also showed corresponding increases in response to alien callers, but in these cases the simpler null models were selected using Akaike's information criterion adjusted for small sample sizes [AICc: see Additional file 3: Table S2A], indicating that this was a relatively weak response. By contrast, in Pilanesberg there was no evidence that any of the behavioural response variables significantly differed according to the social familiarity of the caller, and null models provided the best fit for the data in all cases (Table 1A; Figure 1B & D; Additional file 3: Table S2B). These results suggested poor abilities for social contextualisation among the Pilanesberg elephants [see also Additional file 4: Supplementary results].
Table 1. Results of GLMMs investigating the behavioural responses of elephant family groups to playbacks of contact calls that varied in social affiliation (experiment 1) and social dominance (experiment 2). For experiment 1, the social affiliation parameter was categorical and the model generated results for the alien and unfamiliar playbacks using the familiar category as a reference. See also Additional file 3: Table S2 & Table S3.

However, the possibility remained that the contrasting pattern of responses described above could also be driven by differences in social attitudes between the populations. In particular, lack of opportunity to form bonds with kin when the Pilanesberg population was founded may conceivably have led to greater acceptance of unknown individuals [11,29]. Crucially therefore, our second series of experiments systematically tested for a core social skill that has direct functional relevance in both populations - the ability to discriminate between unknown callers on the basis of their social dominance [26][27][28]. Responding appropriately to more dominant individuals within the social hierarchy, and thus avoiding escalated interactions, is fundamental to emerging as successful within complex fission-fusion societies where individuals may come into contact with hundreds of others in the population as they move and feed [3,[26][27][28]. Re-synthesis allowed us to manipulate fundamental (F0) and formant frequencies in the calls independently, while leaving other acoustic parameters unchanged, thereby
creating standardised stimuli that were representative of callers of the five different ages (see Methods & Additional file 1: Supplementary experimental procedures and Additional file 2: Figure S1 & Table S1).
In this main set of experiments our results clearly demonstrated that, while the Amboseli elephants discriminated between callers simulating different age classes and were most defensive to the oldest callers representing more socially dominant individuals (Table 1B; Figure 2A & C, Additional file 3: Table S3A), there were no such differences in discrimination abilities evident in the Pilanesberg population (Table 1B; Figure 2B & D, Additional file 3: Table S3B). In particular, there were marked contrasts in defensive bunching and bunching intensity in relation to age of caller in Amboseli, with the oldest callers (simulating more dominant individuals) eliciting more frequent and stronger defensive bunching reactions (Table 1B; Figure 2A & C). These results are also borne out in a direct comparison of the populations that revealed a significant difference in the sensitivity of the defensive bunching response of Amboseli elephants to the age of caller in our playbacks compared with subjects in Pilanesberg (GLMM: population × age of caller: Estimate = −0.066, Standard Error = 0.028, Z value = −2.333, P = 0.02). Furthermore, prolonged listening and investigative smelling reactions, both indicating attempts to gather additional information on the caller, increased significantly with caller age in Amboseli, as would be predicted if older callers were recognised as representing a greater threat. However, there was no evidence of an ability to make these same key distinctions in the Pilanesberg elephants (Table 1B).
It is important to note that while the lower maximum age of matriarchs in Pilanesberg (age range: 24-47 versus 23-70 in Amboseli) may have contributed to the poor social discrimination abilities evident here [3,4], it does not appear to have driven the results. In the basic social discrimination tests used in the current study there were no significant interactions between matriarch age and either social relationship with caller (experiment 1), or age of caller (experiment 2), in the best models for either of our study populations (see Additional file 3: Table S2 & Table S3). Moreover, when the oldest matriarchs (48 years and over) were removed from the Amboseli dataset for our main analyses, the results remained statistically significant [see Additional file 4: Supplementary results].
Discussion
The ability to maintain important social relationships is believed to have direct fitness benefits for individuals, allowing them to maximise survival and reproductive success in constantly changing socio-ecological environments [1,2,30]. This is particularly apparent in large-brained, social species where information is accumulated over long life spans [1,[3][4][5]27,31]. However, extremely disruptive events, including culling, poaching and translocation to new areas or capture for captivity can ultimately lead to serious disruption of the intricate social networks that underpin social structure in these species, with severe impacts on each individual's close social bonds and opportunities for learning from older group members [9,11,16,19]. Furthermore, such disruption appears capable of driving aberrant behaviours in social animals that are akin to the post-traumatic stress disorder experienced by humans following extremely traumatic events [16,19]. While elephants in the wild can appear to exhibit short-term resilience following social disruption, apparently forming stable and reproductively active family groups (but see [9]), the results presented here suggest that important decision-making abilities that are likely to impact on fundamental aspects of the elephant's complex social behaviour may be significantly altered in the long-term.
Our work provides an unusual opportunity to examine directly links between social structure and inherent social skills that are at the basis of individual and group-level interactions in cognitively advanced mammals [1,2]. Cognition encompasses the mechanisms by which animals acquire, process, store and act on information from the environment, including perception, learning, memory and decision-making [32]. Responses in our two playback experiments suggest that functionally important decision-making abilities may be significantly altered by disruption of the natural structure of kin-based social relationships. Contrasting patterns of responses to socially unfamiliar elephants in our initial tests of social knowledge could conceivably be driven by differences in social attitudes, if lack of opportunities to bond with kin in the original Pilanesberg population resulted in greater acceptance of unknown individuals [11,29]. However, it is important to note that the Pilanesberg elephants did not show lower levels of defensive bunching overall - instead they simply failed to focus their defensive bunching on the most socially threatening individuals. Moreover, our main series of experiments subsequently tested for a social skill with direct functional relevance in both populations, the ability to assess age-related social dominance [26][27][28]. Here again, Pilanesberg elephants were apparently unable to distinguish between the level of social threat presented by older versus younger callers.
Previous studies have documented that a single traumatic event is sufficient to impact the neurological development of the mammalian brain [17,18,33,34], and the large hippocampus of the African elephant, which mediates social memory, is thought to be particularly susceptible during growth to adolescence [19]. The relative importance that such neurological changes might have in generating impaired decision-making versus the consequences of lack of exposure to older more experienced group members in the years following the traumatic event is hard to assess, but both may be important in driving our results. Exposure to older more experienced individuals has been shown to facilitate the development of functionally important skills in a range of mammals (see [23,24] for reviews), and non-human primates deprived of appropriate role models acquire a smaller set of learned skills [23,35]. Although social learning has not been definitively demonstrated in wild African elephants, there is evidence that knowledge transfer does occur between experienced and naïve individuals [36] in common with many other large brained, socially complex species [23,24,37]. Further studies are now required to partition out these potential effects, and to assess their generality across populations that have experienced differing levels of disturbance.
Understanding the impacts of disrupting social bonds can both provide crucial insights into processes central to social evolution and also throw light on the functioning of advanced mammal societies that have been radically impacted by human disturbance. Our findings suggest that the health and social functioning of wild populations of long-lived and highly social species could be significantly impacted in the long-term by elevated levels of anthropogenic disturbance, which may compromise the ability of surviving individuals to respond appropriately to their conspecifics. Impairments to decision-making processes about threat may also contribute to the development of abnormally aggressive behaviour in response to other species, such as the killing of humans by female elephants in five populations established from translocated individuals that were the survivors of culls [38].
Although recent empirical evidence has highlighted the value of conserving functioning kin-based family groups, this remains an important issue that is often overlooked by wildlife practitioners in favour of population level management approaches that focus primarily on abundance [39]. In particular, while the recovery of populations from human-induced depletion is often assessed on the basis of numbers, it is now becoming clear that abnormal social structure may be a more persistent effect with very significant consequences [9,11,13,40,41]. These issues are currently very relevant, as translocation of mammal groups to new areas is becoming an increasingly common response in dealing with situations of animal-human conflict [29], whilst the escalation of poaching is having a dramatic effect on the structure of many populations [42]. Furthermore, in future years increasing demands on natural resources and ecosystem services from human societies is likely to intensify social disruption and conflict [14,43,44]. There is an assumption that wildlife responds to such pressures only in terms of demography, however our study demonstrates that cognitively advanced species such as elephants that live in complex societies may suffer more profound effects.
Conclusions
By using playback experiments to systematically assess social discrimination skills in relation to developmental history, we provide the first direct evidence that abilities to process information on social identity and age-related dominance are severely compromised among African elephants that had experienced separation from family members and translocation decades previously. Long-lived species such as elephants, cetaceans and non-human primates naturally exist in complex societies where behaviour and fitness is strongly affected by social relationships and exposure to older individuals is likely to influence knowledge acquisition by younger group members [1][2][3][4][5]. These critical facets of social living are often compromised in wild populations subjected to human disruption [9,11,40], and missing in the majority of captive environments [45]. Of particular concern, given the longevity of such species, is that the marked effects of these disruptions persist in the long-term.
Methods
This work complies with the Association for the Study of Animal Behaviour/Animal Behaviour Society guidelines for the use of animals in research, and received approval from the Ethical Review Committee at the University of Sussex. We are grateful to the Kenyan Office of the President and to Kenya Wildlife Services for permission to conduct the research in Amboseli National Park, and to North West Parks and Tourism Board for permission to undertake this study in Pilanesberg National Park.
Study populations
Fieldwork was conducted in Amboseli National Park, Kenya and Pilanesberg National Park, South Africa between February 2007 and November 2010. The elephant population in Amboseli numbered approximately 1500 individuals (including 58 family groups); in Pilanesberg there were approximately 200 individuals (including 16 family groups). The Amboseli Elephant Research Project has long-term demographic and behavioural data on the entire population, including detailed ages for all elephants born after 1971. The Pilanesberg population has been studied since 2000, with data available for the composition of each family group as well as ages for all of the adult females. Ages were estimated using criteria that are accepted as a standard in studies of African elephants [46].
Sound recording and natural playback stimuli
Contact calls of adult female elephants (at least 11 years old) were used as playback stimuli for both experimental paradigms. These calls were recorded on digital audiotape using equipment specialized for low-frequency recording: a Sennheiser MKH 110 microphone linked to either a Sony TCD D10 DAT recorder (with DC modification) or a HHb PortaDAT PDR 1000 DAT recorder, through an Audio Engineering Ltd power supply (which incorporated a 5-Hz high-pass filter). With this equipment, the frequency response for recording was flat (±1 dB) down to at least 10 Hz. All contact calls used as stimuli were recorded in conditions of low air turbulence, at distances of 30 m or less from particular known individual females, often calling in situations when they were separated from the rest of the group; calls were only included if the identity of the caller was completely unambiguous (see also [47,48]). The playback system used custom-built loudspeakers designed and constructed by Aylestone Ltd, Cambridge, UK and Bowers & Wilkins, Steyning, UK. The Aylestone system was composed of a custom-built sixth-order bass box loudspeaker with two sound ports linked to either a Kenwood KAC PS 400 M, Kenwood KAC923 or Kicker Impulse 1252 xi power amplifier and a HHb PortaDAT PDR 1000 DAT or Sony TCD D10 recorder (with DC modification), while the Bowers & Wilkins loudspeaker was powered by Alpine PDX-1.1000 and MRP-T222 amplifiers, linked to a Tascam HD-P2 digital audio recorder. Both playback systems had a lower frequency limit of 10 Hz and a response that is flat ±4 dB from approximately 15 Hz.
Social categories in experiment 1
Prior to the playback experiments being carried out, individual family groups were assigned a contact call for each social category of caller (familiar, unfamiliar and alien), based on the observed level of association [see Additional file 1: Supplementary experimental procedures]. Callers from outside the population were categorised as alien as these individuals were unknown to the target family, while the callers from within each population were ranked from highest level of affiliation to the lowest using the association indices. The mean association index value was then calculated across these playbacks and used as a cut-off to categorise familiar (≥ mean level of association) and unfamiliar (< mean level of association) playback presentations for analysis.
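As a small illustration of this cut-off rule (our own sketch; the function and the toy index values are hypothetical, not data from the study), within-population callers can be split at the mean association index:

```python
def categorise_callers(association_index):
    """Label each within-population caller 'familiar' (association index at
    or above the mean across the candidate playbacks) or 'unfamiliar'
    (below the mean), following the cut-off rule described above.

    association_index: dict mapping caller id -> index with the target family.
    """
    mean_ai = sum(association_index.values()) / len(association_index)
    return {caller: ("familiar" if ai >= mean_ai else "unfamiliar")
            for caller, ai in association_index.items()}
```

Alien callers are assigned separately, since they come from outside the population and have no association index with the target family.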
Re-synthesis of contact calls for experiment 2
Five individual contact calls were selected from each study population for re-synthesis, providing ten exemplars. Each of these exemplars was then re-synthesised with respect to age-related acoustic cues (fundamental frequency and formant frequencies) to produce five distinct contact calls per exemplar, simulating each female caller at 15, 25, 35, 45 & 55 years of age [see Additional file 1: Supplementary experimental procedures]. In this way, when presenting contact calls in playbacks, we controlled for individually distinctive acoustic characteristics of callers while systematically varying cues to their age and dominance. The 'change gender' function in PRAAT [49] was used to generate the appropriate new pitch median and the formant ratio shift (calculated by dividing the second formant frequency for the new re-synthesised age category by the frequency of the exemplar's original second formant). This procedure was performed five times (number of age categories) for each of the ten exemplars. The spectrograms of the re-synthesised calls were viewed in PRAAT [49] to ensure that the pitch and formant frequencies had been adjusted correctly. Subjects were played stimuli from callers that are unknown to them (Amboseli elephants were exposed to stimuli from Pilanesberg and vice versa), so as to prevent any confounding effects resulting from recognition.
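The two re-synthesis parameters can be sketched as follows (our illustration; the frequency values below are hypothetical placeholders, not the values used in the study). For each exemplar and age class, the 'change gender' call needs a new pitch median and a formant ratio shift, the latter being the target second-formant frequency divided by the exemplar's original F2:

```python
def resynthesis_params(original_f2_hz, target_f2_hz, target_pitch_median_hz):
    """Inputs for one re-synthesised call, as described above for PRAAT's
    'change gender' function: the new pitch median and the formant ratio
    shift (target F2 divided by the exemplar's original F2)."""
    formant_ratio_shift = target_f2_hz / original_f2_hz
    return target_pitch_median_hz, formant_ratio_shift

# 5 exemplars per population x 5 age classes = 25 re-synthesised calls
# per population (50 in total across the two populations)
age_classes = [15, 25, 35, 45, 55]
exemplars = ["ex1", "ex2", "ex3", "ex4", "ex5"]   # hypothetical exemplar ids
stimuli = [(ex, age) for ex in exemplars for age in age_classes]
```

Repeating the parameter computation for each (exemplar, age class) pair reproduces the five-calls-per-exemplar design described above.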
Playback procedure
A total of 165 playbacks (experiment 1 n = 84, experiment 2 n = 81) were conducted in Amboseli and 109 (experiment 1 n = 57, experiment 2 n = 52) in Pilanesberg. An opportunistic approach was taken in selecting elephant family groups for inclusion in each experiment, which depended upon encountering the family within their home range in a relaxed behavioural state (e.g. foraging or resting). In Amboseli 39 families were selected for experiment 1 and 32 for experiment 2, while in Pilanesberg 14 families were selected for experiment 1 and 13 for experiment 2. Each family group was systematically played contact calls selected from the three categories of social affiliation (familiar, unfamiliar and alien), and the five resynthesised age classes (15-55 years of age from the same exemplar) in randomised order. Each contact call was broadcast to the subjects from a fieldwork vehicle that was located 100 m from the periphery of the family group. The vehicle was positioned at right angles to the direct line of sight to the elephants, and the contact calls were played through the rear door from custom-built loudspeakers (see above). With this set-up the research vehicle, to which the elephants were habituated, acted as an effective visual barrier. Elephants have poor eyesight in comparison with their auditory and olfactory senses and typically respond to playbacks by listening and smelling in the direction of playback rather than trying to visually locate the caller [50]. Moreover, previous experiments in which the calling elephant was a relative revealed that the searching behaviour of subjects was consistent with them expecting the caller to be located in the area beyond the vehicle [47,48]. The peak sound pressure levels of the contact calls were standardised to 105 dB at 1 m (corresponding to the natural volume of a medium loud contact call). Sound pressure levels were measured with a CEL-414/3 sound level meter. 
A minimum period of seven days was left between playbacks to avoid habituation. Playbacks were not given to groups with calves of less than 1 month, as our previous work had indicated that the presence of such very young calves might result in abnormally high sensitivity to perceived threat [3].
The behavioural responses of the elephants to playback were observed through binoculars and recorded on a Canon XM2 video camera alongside live commentary. From video analysis we assessed four key behavioural measures that described the responses of the family group following playback (developed from [3,4]):
(1) Bunching: Defensive response to perceived threat by adult females and their young, which resulted in the diameter of the family group decreasing after the broadcast of a playback experiment (calculated in terms of elephant body lengths).
(2) Bunching intensity: The rate at which a defensive bunch of adult females and their young occurred. This measure classifies the overall level of threat response, scoring bunching intensity on a four-point scale as follows:
0 = no bunching occurred;
1 = subtle reduction in diameter of the group; elephants remained relaxed and continued with pre-playback behaviours (> 3 min for bunch formation);
2 = group formed a coordinated bunch; pre-playback behaviours such as feeding were interrupted (1-3 min for bunch formation);
3 = fast and sudden reduction in diameter of the group; elephants very alert (< 1 min for bunch formation).
(3) Prolonged listening: Adult female(s) continued to exhibit evidence of listening response for more than 3 minutes after playback, where ears are held in a stiff extended position, often with the head slightly raised.
(4) Investigative smelling: Adult female(s) engaged in either up-trunk or down-trunk smelling to gather olfactory information on the caller's identity.
In the case of measures (3) and (4), each behaviour was scored as occurring if any adult female in the group engaged in that behaviour.
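The four-point bunching-intensity scale, which keys on the time to bunch formation, can be expressed as a small scoring function (a sketch of the coding rule, not the authors' code):

```python
def bunching_intensity(bunched, seconds_to_bunch=None):
    """Score bunching intensity on the four-point scale:
    0 no bunching; 1 subtle (> 3 min); 2 coordinated (1-3 min);
    3 fast and sudden (< 1 min)."""
    if not bunched:
        return 0
    if seconds_to_bunch > 180:
        return 1
    if seconds_to_bunch >= 60:
        return 2
    return 3

print(bunching_intensity(False))      # 0
print(bunching_intensity(True, 240))  # 1
print(bunching_intensity(True, 120))  # 2
print(bunching_intensity(True, 45))   # 3
```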
Two independent observers who did not have access to the live video commentary, and were blind to the playback sequence, second-coded 25% of the video records, comprising 68 videos (34 each); an overall agreement of 90% was achieved on the binary response variables (defensive bunching 96%, prolonged listening 90%, investigative smelling 85%) and the Spearman's ρ correlation on the scores for matriarch bunching intensity was 0.90 (p < 0.0001). It is important to note that the blind observers obtained this high level of agreement despite the fact that they were not able to score group behaviour that occasionally occurred off camera or some instances of smelling when a lowered trunk was obscured in the video (behaviours that were voiced on to the live commentary).
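The two reliability statistics reported here (percent agreement on the binary measures and Spearman's ρ on the intensity scores) can be computed as follows; the coder data are invented for illustration:

```python
def percent_agreement(coder_a, coder_b):
    """Percent agreement between two coders on the same items."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

def average_ranks(values):
    """1-based ranks with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Invented codings of ten videos by two blind observers
a_bunch = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b_bunch = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
print(percent_agreement(a_bunch, b_bunch))  # 90.0

a_scores = [0, 1, 2, 3, 1, 0, 2, 3, 1, 2]
b_scores = [0, 1, 2, 3, 1, 1, 2, 3, 0, 2]
print(round(spearman_rho(a_scores, b_scores), 2))
```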
Statistical analyses
The playback datasets were analysed separately for each elephant population using generalised linear mixed models (GLMMs) in the R statistical package [51]. The level of association with the caller (familiar, unfamiliar or alien) was used as the explanatory variable in the first experimental paradigm, while the age of the call broadcast to the family group was used in the second. Four GLMM analyses were conducted, one for each of the key response behaviours (see above) that were selected as the dependent variables, while family group identity was entered as a random factor to account for repeated measures in the experimental design. Null models, which did not include any explanatory variables, were generated for each behavioural measure along with more complex models that investigated the additive and interactive effects of matriarch age and the number of adult females in the family group (variables used in our previous research as predictors of group decision-making [12,13]; see Additional file 4: Supplementary results). Model selection was performed using Akaike's information criterion adjusted for small sample sizes (AICc), with lower AICc scores indicating better models; however, a more complex model with more degrees of freedom was only selected over a simpler model when the AICc differed by 2 or more [52].
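The AICc selection rule can be made concrete as follows; the log-likelihoods and parameter counts are hypothetical stand-ins, and only the ΔAICc ≥ 2 decision rule mirrors the text:

```python
def aicc(log_likelihood, k, n):
    """Akaike's information criterion with the small-sample correction:
    AIC + 2k(k + 1) / (n - k - 1), where k = parameters, n = observations."""
    aic = 2 * k - 2 * log_likelihood
    return aic + 2 * k * (k + 1) / (n - k - 1)

n = 84  # number of playbacks in experiment 1 at Amboseli
null_aicc = aicc(log_likelihood=-55.0, k=2, n=n)  # hypothetical fit
age_aicc = aicc(log_likelihood=-51.0, k=3, n=n)   # hypothetical fit

# The more complex model wins only if it improves AICc by 2 or more
chosen = "age model" if null_aicc - age_aicc >= 2 else "null model"
print(round(null_aicc, 2), round(age_aicc, 2), chosen)
```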
Additional files
Additional file 1: Supplementary experimental procedures.
Additional file 2: Age of caller and acoustic characteristics. Table S1. Standard acoustic characteristics for setting the five different age categories of caller for the resynthesis experiment. Figure S1. Associated regression plots demonstrating the relationship between age of caller and two key acoustic parameters, A) fundamental frequency and B) the frequency of the second formant.
Additional file 3: Model selection using AICc. Table S2. Model selection results of Generalised Linear Mixed Models (GLMMs) for the four key response behaviours of elephant matriarchs to playbacks of callers in different social categories. Table S3. Model selection results of Generalised Linear Mixed Models (GLMMs) for the four key response behaviours of elephant matriarchs to playbacks simulating callers of different levels of social dominance on the basis of distinct age/size classes.
Highly efficient and selective supramolecular hydrogel sensor based on rhodamine 6G derivatives
A mercury-ion-sensitive fluorescent functional monomer was synthesized based on rhodamine 6G, and two highly effective approaches to the research and development of novel macroscopic hydrogel sensors are reported. The monomer was used to synthesize hydrogel sensors by free radical polymerization and by guest–host interaction. The hydrogel sensors have prominent selectivity for Hg2+, can be tailored and reused, and are capable of detecting Hg2+ sensitively in flowing and standing water environments with satisfactory performance. This work is expected to open an avenue to construct novel fluorescent analysis methods for Hg2+ detection.
Introduction
Due to their high sensitivity, excellent selectivity and convenient preparation, supramolecular hydrogels, an outstanding and promising class of smart polymers, have been exploited to extend the functionality of traditional sensing materials. 1 In comparison with natural hydrogels, proteins and other supramolecular materials, 2 synthetic hydrogels are more convenient to prepare and can be used more widely as sensing materials. 3 Currently, responsive hydrogels functioning as detectors and recognizers of particular analytes hold remarkable potential for the development of micro-actuators for micromanipulation in microenvironments polluted by heavy metals. 4 Mercury(II), one of the most toxic heavy metal ions, extensively contaminates various natural environments and impacts human activities. It has raised extensive public concern due to its adverse effects on environmental safety and human health. 5 More seriously, through biological accumulation in the human body, mercury ions may affect the nervous system and cause serious illness. Accordingly, it is essential to develop a rapid, convenient and efficient Hg 2+ detection method. Therefore, the design and fabrication of hydrogel systems with Hg 2+ -responsive characteristics as micro-actuators for micromanipulation in Hg 2+ -polluted microenvironments are highly desirable and of profound value in scientific research and technological development. 6 Recently, multielement composite materials based on macroscopic macromolecular hydrogels have been investigated in depth for the optical detection and monitoring of diverse metal ions. 7 With hydrogel sensors, the detection of metal ions in the natural environment becomes an easily manipulated, low-cost analytical method with a simple process and prominent sensitivity.
Even without spectroscopic equipment, it is feasible to analyze target ions directly by the naked eye. [8][9][10][11] Therefore, multifarious synthetic strategies for hydrogel sensors have been researched, designed and developed, covering their functionalization procedures, performance control and potential applications. In particular, sensor systems offering colorimetric and fluorometric detection by the naked eye, without the need for equipment, will be of great and extensive utility, for these hydrogel sensors are easy to operate and offer eminent sensitivity and a high signal-to-noise response ratio. [12][13][14][15][16] Furthermore, these low-cost materials can be easily prepared and are conveniently adapted to diverse conditions.
In order to overcome the shortcomings of traditional fluorescence detection methods, such as monitoring limitations, demanding detection conditions and human error, hydrogel-immobilized sensors with response signals that are readable in everyday living environments make detection more convenient and produce more convincing, urgently needed results. 17 Therefore, developing a hydrogel sensor to detect Hg 2+ with dual colorimetric and fluorescence signals is of great importance. The successful synthesis of hydrogel sensors will open widespread opportunities for practical applications in luminescent patterning, the manufacturing of subaqueous fluorescent devices, sensors and bioengineering. 18 Taking inspiration from nature, applying soft materials to the research and development of artificial coloring will boost the diversity and universality of their practical utility. [19][20][21] In this study, a rhodamine 6G-based derivative obtained by structural modification was selected as the fluorophore, and thiophene was selected as the recognition site for Hg 2+ . Accordingly, the novel fluorescent probe R6GS was designed and synthesized, and R6GS exhibited ideal properties in Hg 2+ detection. Two types of optical hydrogel sensors with fluorescence responsiveness for sensing Hg 2+ were synthesized by exploiting this probe. When the hydrogel sensors were exposed to Hg 2+ aqueous solution, the hydrogels shifted from colorless to red, and the color intensity was positively correlated with the Hg 2+ concentration. In principle, the hydrogel sensor enables selective and integrated detection and monitoring of Hg 2+ . This effective approach yielded two novel macroscopic hydrogel sensors, which form easily manipulated, intelligent sensing devices adaptive to aqueous environments.
Material and instrumentation
Rhodamine 6G was purchased from Aladdin Co. Ltd. All chemical reagents satisfied the requirements of analytical grade or reached the highest available purity and were used without further purification. Metal ion solutions were prepared in deionized water. Using DMSO-d 6 as the solvent and tetramethylsilane (TMS) as the internal reference, a Bruker AVANCE 400 NMR spectrometer was used to record 1 H and 13 C NMR spectra. FT-IR spectra were recorded on a Thermo Fisher Nicolet 6700 FT-IR spectrophotometer. High-resolution mass spectra (HRMS) were measured on an Accurate-Mass Q-TOF LC/MS system. A Shimadzu UV-2600 spectrophotometer was used to record UV-Vis absorption spectra. Fluorescence spectra were measured on a Hitachi F-4600 fluorescence spectrophotometer.
General processes of spectroscopic analysis of R6GS
The standard stock solutions of metal ions (Na + , K + , Ca 2+ , Fe 2+ , Fe 3+ , Co 2+ , Ni 2+ , Cu 2+ , Zn 2+ , Pb 2+ , Cd 2+ , Mn 2+ , Ba 2+ , Mg 2+ and Hg 2+ ) were diluted with deionized water to generate 10⁻³ M solutions for the optical property experiments. A 1 M ethylenediamine (EDA) solution was prepared in purified water. The stock solution (10⁻³ M) was prepared by dissolving R6GS in DMSO, and a DMSO/H 2 O mixture was used to dilute the R6GS solution to obtain the analytical solution (DMSO/H 2 O = 9/1, v/v). The pH of the solutions was adjusted with 4 M hydrochloric acid or 4 M sodium hydroxide aqueous solution.
Synthesis of hydrogel sensors
Hydrogel sensors were synthesized by free radical polymerization (Fig. 1). AAm, MMA, R6GS and MBA (0.3% of monomer content) were first dissolved in DMSO and mixed with each other. Under high vacuum, the mixture was stirred continuously at 15 °C until a homogeneous solution was formed. In the subsequent step, 100 μL of TEMED and the initiator (APS, 2% of monomer content), mixed and dissolved in 2.0 mL of DMSO, were added to the mixture. The mixture was stirred constantly until a homogeneous solution was obtained, at which point the solution was poured into a cylindrical mould and maintained at 50 °C for 12 hours. The hydrogel sensors were then extracted from the mould, washed with DMSO for 36 hours and then with distilled water for 36 hours. At this point, one piece of the hydrogel was separated and dried in an oven at 50 °C for the study of the swelling kinetics of the hydrogel sensors.
In addition, supramolecular hydrogels can be used to fabricate hydrogel sensors through host–guest interaction (Fig. 2). First, AAm, β-CD and MBA (0.3% of monomer content) were stirred together in dimethyl sulfoxide. The mixed solution was fully dispersed under vacuum at 5 °C. In the following step, 100 μL of TEMED and the initiator (APS, 4% of monomer content) were added to the mixture. The mixture was stirred constantly until a homogeneous solution was obtained, at which point the solution was poured into a cylindrical mould and maintained at 50 °C for 12 hours. The hydrogel was then extracted from the mould and washed with DMSO for 36 hours. Then, on the basis of the host–guest interaction, the AAm-co-β-CD hydrogel was immersed in 10⁻² M R6GS DMSO solution for 10 hours to assemble the hydrogel sensor.
No obvious fluorescence emission was observed from the standard solution of R6GS in DMSO/H 2 O (9/1, v/v); however, the addition of Hg 2+ induced a significant fluorescence enhancement with an emission maximum at 581 nm. These observations are attributed to blocked photoinduced electron transfer (PET) and the chelation-enhanced fluorescence (CHEF) process. For comparison, no obvious change was observed in the fluorescence spectra after adding other metal ions. The result thus demonstrated the prominent fluorescent selectivity of R6GS toward Hg 2+ .
Sensitivity and selectivity of R6GS
The rapid detection and response of R6GS to Hg 2+ , as well as the notable intensity of the fluorescence response, could be attributed to the blocked PET process and CHEF. Owing to quenching of the rhodamine 6G hydrazide derivative through PET, R6GS displayed a very weak fluorescence band at 581 nm. Upon binding between Hg 2+ and the probe R6GS, the spirolactam ring of the complex opened, accompanied by blocking of the PET process. The rigidity of the sensor molecules triggered the CHEF effect through stable complexation while suppressing the PET process, which resulted in intensification of the fluorescence. The Job's plot is shown in Fig. 5d. The result demonstrated the formation of a 1 : 1 complex between R6GS and Hg 2+ , and the detection limit (DL) was determined accordingly (Fig. 5b). As shown in Fig. 5c, the DL for Hg 2+ was 25.7 nM. The results thereby demonstrated that R6GS was highly sensitive to Hg 2+ and suitable for synthesizing fluorescent sensing materials (Fig. 5a).
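A detection limit of this kind is commonly obtained from the 3σ/slope rule applied to a fluorescence calibration line. The sketch below uses invented numbers (blank noise and calibration slope) chosen only to illustrate the arithmetic; they are not the paper's data:

```python
def detection_limit(blank_sd, slope):
    """3*sigma/slope detection limit: blank_sd is the standard deviation of
    repeated blank fluorescence readings, slope is the fluorescence change
    per unit concentration from the calibration line."""
    return 3 * blank_sd / slope

# Invented values: blank noise 0.03 a.u., slope 3.5e6 a.u. per M
dl = detection_limit(blank_sd=0.03, slope=3.5e6)
print(f"{dl * 1e9:.1f} nM")  # 25.7 nM
```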
Detection mechanism of R6GS
Based upon the Job's plot and the investigation of FT-IR spectra, the proposed binding mechanism is shown in Scheme 2. On account of the PET process, the fluorescence of R6GS in free solution was weak. However, when Hg 2+ was added, the PET process was hindered owing to the ring opening of the lactam, which strongly intensified the fluorescence and triggered a noticeable color change. The proposed binding mechanism of the hydrogel sensor is similar to that of R6GS.
The binding modes of R6GS–Hg 2+ were established and studied through FT-IR spectra. The FT-IR spectra of R6GS and R6GS–Hg 2+ are shown in Fig. 6a, and Fig. 6b shows the FT-IR spectra of the hydrogel sensor and hydrogel sensor–Hg 2+ . The characteristic N–H bands of the compound appeared at 3419 and 2926 cm⁻¹, and the C=O stretching frequency of the lactam appeared at 1647 cm⁻¹. After R6GS chelated with Hg 2+ , the aromatic C=C stretching band changed significantly and the C=O peak shifted to 1537 cm⁻¹. The C=N peak clearly shifted to 1383 cm⁻¹, consistent with the ring-opening process of the N-(rhodamine 6G) lactam in R6GS. Meanwhile, the 1 H NMR spectra of R6GS and R6GS–Hg 2+ were recorded and are shown in Fig. 6c (400 MHz, DMSO-d 6 ). Within δ 6.0–8.0 ppm, R6GS exhibited 8 sets of signals. After Hg 2+ was added to R6GS, the aromatic and methyl protons of xanthene underwent pronounced downfield chemical shifts, which proved the delocalization of xanthene and supported that the large fluorescence enhancement could be attributed to spirolactam ring opening.
The research of R6GS reversibility
Adding EDA solution (Fig. 7a) demonstrated the reversibility of the R6GS detection process. In the R6GS and Hg 2+ mixture, the diminution of fluorescence intensity and the recovery of the original fluorescence signal of R6GS could be ascribed to the addition of EDA solution. Meanwhile, the change of solution color from fluorescent red to transparent demonstrated the regeneration of the free sensor R6GS, and the regenerated sensor could still respond to Hg 2+ . Fig. 7b indicates that R6GS can be used as a reversible colorimetric sensor for Hg 2+ in aqueous solution. Characteristics such as reversibility and regeneration are of profound and vital significance for the development of an effective sensor for Hg 2+ in aqueous environments.
The pH effect
The response of the probe R6GS to Hg 2+ in different pH environments was investigated. As shown in Fig. 8, in the range of pH 3.0–4.0 the probe could not respond visibly to Hg 2+ . In the range of pH 5.0–8.0, the probe responded effectively to Hg 2+ without dramatic pH influence. The fluorescence intensity of the response of the probe to Hg 2+ at 581 nm gradually weakened as the pH value increased in the range of 8.0–9.0. This behaviour can be attributed to the fluorescence enhancement caused by the disruption of the spirolactam of R6GS in acidic environments (pH < 4), while in alkaline environments (pH ≥ 9) the binding between Hg 2+ and R6GS was inhibited and no sensing reaction occurred. The results thus indicated that the probe R6GS displayed a good fluorescent response to Hg 2+ in weakly acidic to weakly alkaline environments.
The fabrication of hydrogel sensor
In this work, 1 H NMR conclusively demonstrated the formation of the β-CD–R6GS inclusion complex, and the work preliminarily elucidated and clearly illustrated the inclusion process. Fig. 9 presents the 1 H NMR spectra of β-CD in the absence and presence of R6GS. The integral intensities of the signals in the 1 H NMR spectra were analyzed. The inclusion of R6GS produced slight changes in β-CD and profoundly influenced the chemical shifts of the β-CD protons. Furthermore, the chemical shifts of the β-CD protons changed markedly (Table 1), which illustrates that these protons were mainly affected by the inclusion of R6GS. The 1 H NMR studies produced further supporting evidence for the inclusion of R6GS into the central cavity of β-CD. In comparison with free β-CD and R6GS, the mutual interaction between R6GS and β-CD was illustrated by the changes in the chemical shifts of the inclusion complex.
Detection performance of hydrogel sensors
Optical property experiments were carried out (Fig. 10) to study the selectivity of the β-CD + R6GS conjugate. By experimental comparison, the β-CD–R6GS complex did not affect the recognition and sensing performance of the probe molecules; thus the result demonstrated that, after bonding with β-CD, R6GS still exhibited superior selectivity toward Hg 2+ .
On the basis of PET and CHEF, the hydrogel sensors obtained by the two preparation methods demonstrated prominent responsiveness, selectivity and sensitivity for Hg 2+ . To confirm the capability of the hydrogel sensor, the hydrogel detection limits (DL) were determined in Hg 2+ solutions of different concentrations (from 10⁻³ M to 10⁻⁸ M). When the concentration of Hg 2+ reached 10⁻⁷ M, the color of the hydrogel sensor could still be observed clearly. With regard to the response time of detection, the color change of a hydrogel of 3 mm thickness began within 20 min at an Hg 2+ concentration of 10⁻⁶ M. Meanwhile, the hydrogel color changed within 40 s, 2 min and 10 min at Hg 2+ concentrations of 10⁻³ M, 10⁻⁴ M and 10⁻⁵ M respectively, as shown in Fig. 11 and 12. In aqueous solution, the hydrogel sensor showed visible red light (the same as R6GS). Under 365 nm UV light, the color change of the hydrogel sensor in aqueous solution was even more intense. Therefore, the hydrogel sensors can be utilized as an easily manipulated platform with prominent detection performance for Hg 2+ .
Universality of hydrogel sensors
Following the performance of the hydrogel sensors in Hg 2+ solution, their respective Hg 2+ responses in flowing water and standing water environments were further investigated. The hydrogel sensor changed from transparent to light aubergine (in about 6 min) after being exposed to flowing Hg 2+ solution (10⁻⁴ M, 80–120 mL min⁻¹). The color changed to deep aubergine after 10 min of exposure. In a 50 L glass jar, the hydrogel sensor was immersed in standing Hg 2+ solution, with the inspiring result that when the gel was steeped in 10⁻⁸ M Hg 2+ solution for 3 h, the color of the hydrogel sensor changed. This was beyond our expectation, because in the detection performance experiment the hydrogel sensor showed no response to small amounts of Hg 2+ solution at concentrations as low as 10⁻⁸ M. From this experimental phenomenon, we conclude that a cumulative effect may be generated in the hydrogel sensor, which can reduce the detection limit of the probe to some extent. This sensing performance of the hydrogel sensor may provide an approach for developing a substantial, sustainable and fast-responsive test strip for the accurate detection of Hg 2+ by naked-eye observation, which is particularly important for real-time Hg 2+ monitoring in natural water environments and in the liquid environments of chemical plant zones. Meanwhile, naked-eye observation of the color changes makes the detection of Hg 2+ much faster and more convenient. Hence this technological proposal is a reliable alternative for the detection of Hg 2+ in water environments.
Conclusion
In summary, two easily manipulated and widely applicable synthesis strategies for an Hg 2+ -responsive hydrogel sensor characterized by a fluorescent off–on switch and color convertibility were demonstrated. The hydrogel sensors have been successfully applied for tracking and determining the amounts of Hg 2+ in water samples, which demonstrates that they are suitable for use by both the general public and scientists under practical requirements. In addition, the method can be applied in a wide range of environments to build monitoring and detection modules and systems, including flowing water detection systems, standing water detection systems, and so on. The method is expected to provide novel insights for exploring smarter strategies that couple the synergistic functions of visual detection and efficient sensing.
Conflicts of interest
There are no conflicts to declare.
A Novel Circular RNA circCSPP1 Promotes Liver Cancer Progression by Sponging miR-1182
Introduction Aberrant circular RNA (circRNA) expression has been extensively implicated in both the initiation and progression of various cancers. Through screening a circRNA profile, we identified a novel circRNA, hsa_circ_0001806, termed circCSPP1, in liver cancer. In the present study, we aimed to investigate the role of circCSPP1 in the progression of liver cancer. Methods Fluorescence in situ hybridization (FISH) was used to detect the location of circCSPP1. Function studies, including MTT, colony formation assay, transwell assay and flow cytometry, were carried out to assess the malignant behaviour driven by circCSPP1 in liver cancer cells. Luciferase assay and RNA pull-down were used to detect the interaction between miR-1182 and circCSPP1 as well as RAB15. Quantitative real-time PCR (qPCR) and Western blot were performed to evaluate RNA and protein expression, respectively. Results CircCSPP1 knockdown inhibited the proliferation, migration and invasion of liver cancer cells while promoting apoptosis. Mechanistically, we predicted and verified miR-1182 as the target miRNA of circCSPP1. miR-1182 was capable of reversing the effect of circCSPP1 on liver cancer cells. Moreover, miR-1182 was found to also target RAB15 to participate in the regulation of the cell phenotype. Discussion Taken together, circCSPP1 promoted the progression of liver cancer cells via sponging miR-1182 and may serve as a novel prognostic and therapeutic target for liver cancer.
Introduction
Liver cancer has become the third leading cause of cancer-associated mortality worldwide. 1 The main therapeutic methods for liver cancer include radical resection and chemotherapy. The treatment of liver cancer has made significant progress in the past few years. However, the prognosis of liver cancer remains poor. 2,3 The current 5-year survival rate of liver cancer is estimated to be <25% due to the frequent recurrence and metastasis. 4 Therefore, it is urgent to extend our understanding of the molecular mechanisms underlying liver cancer development.
Circular RNAs (circRNAs) are a class of non-coding RNAs characterized by a covalently closed loop without 5ʹ caps or 3ʹ polyadenylated tails. These molecules are formed by back-splicing events. 5 Currently, circRNAs have been shown to be highly conserved and are expressed with high stability across different species. 6 circRNAs are differentially expressed in various types of cancer, including prostate, 7 breast 8 and hepatocellular 9 cancer. In addition, they are also involved in several physiological and pathophysiological processes, such as adsorption of microRNAs (miRNAs/miRs), regulation of alternative splicing, protein-RNA interaction and DNA and transcription factor expression, and ultimately, alteration of the cell phenotype. 10,11 miRNAs are small endogenous non-coding RNAs, which can regulate fundamental cell processes by binding to the specific 3ʹ-untranslated regions of target genes. 12 This binding can inhibit protein translation or induce degradation of target mRNAs. circRNAs mainly sponge miRNAs to influence gene expression and biological functions. 13,14 In the present study, the circRNA circCSPP1 was identified as an upregulated circRNA in liver cancer tissues and cell lines. Loss- and gain-of-function studies were performed to determine the function of circCSPP1 in liver tumorigenesis. The data predicted and verified the target miRNA of circCSPP1, which was miR-1182, and further elucidated its underlying mechanism in liver cancer. In summary, circCSPP1 functioned as an oncogene in liver cancer and may be a potential therapeutic target for this disease.
Methods and Materials Patients and Specimens
The cohort study was conducted on samples collected between January 2017 and January 2018. Patients with a confirmed diagnosis of liver cancer were included. Patients who had received previous chemotherapy, radiotherapy or biological medication were excluded from the study. A total of 55 pairs of liver cancer and normal tissues were obtained from Cangzhou Central Hospital. The distance between tumor and adjacent tissue was 2-3 cm. The tissues were frozen in liquid nitrogen immediately after excision and subsequently stored at −80°C. All experimental procedures were approved by the Ethics Committee of Cangzhou Central Hospital and were performed in accordance with the Declaration of Helsinki of the World Medical Association. Written informed consent was obtained from each patient.
Cell Counting Kit (CCK)-8
Following transfection, the cells were collected and resuspended in culture medium, followed by seeding into 96-well plates at a density of 1×10⁴ cells/well. The cells were subsequently treated with 15 μL CCK-8 solution for an additional 2 h at 37°C. The absorbance was detected at 480 nm using a microplate reader (Bio-Rad Laboratories, Inc., CA, USA). Each experiment was performed at least 3 times.
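Absorbance readings from such an assay are typically converted to relative viability against an untreated control; a minimal sketch (the blank/control handling and the values are illustrative assumptions, not part of the protocol above):

```python
def viability_percent(sample_od, blank_od, control_od):
    """Relative viability from CCK-8 absorbances:
    (sample - blank) / (control - blank) * 100."""
    return 100.0 * (sample_od - blank_od) / (control_od - blank_od)

# Hypothetical optical densities for one treated well
print(round(viability_percent(sample_od=0.85, blank_od=0.10,
                              control_od=1.10), 1))  # 75.0
```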
Transwell Experiment
Following transfection, 1×10⁵ cells were suspended in 200 μL RPMI-1640 medium and seeded in the upper Transwell chamber (Corning, Inc., NY, USA). Following 24 h of incubation at 37°C, the cells on the lower surface of the chamber were fixed in 20% methanol, followed by staining with 1% crystal violet (Bicobio, Tianjin, China). Finally, the cells were imaged and counted.
Colony Formation Assay
Following transfection, the cells were seeded in 12-well plates at a density of 100 cells/well. The cells were incubated for 2 weeks, fixed in 10% formaldehyde and stained with 1% crystal violet (Beyotime Institute of Biotechnology, Shanghai, China). The images were photographed using a microscope (Leica Microsystems GmbH, Germany) and the colonies that consisted of >50 cells were counted.
Western Blot Analysis
The total protein was extracted and the concentration of the collected protein was determined in a Nanodrop 2000 system (Thermo Fisher Scientific, Inc., USA). SDS-PAGE was performed and 40 μg protein was loaded for electrophoresis, followed by transfer to a PVDF membrane. The blots were subsequently blocked in 5% skimmed milk at room temperature for 2 h, incubated with the primary antibody (1:1000) overnight at 4°C followed by further incubation with the secondary antibody (1:1000) at room temperature for an additional 2 h. Subsequently, the blots were visualized using an ECL chemiluminescence kit and the gray value of each band was analyzed with QuantityOne software.
RNA-Fluorescence in situ Hybridization
Hybridization of Alexa Fluor 555-labeled circCSPP1 was performed according to the manufacturer's protocol (Shanghai GenePharma Co., Ltd., Shanghai, China). FISH assay was performed using a fluorescence in situ hybridization kit according to the manufacturer's instructions (Guangzhou RiboBio Co., Ltd., Guangzhou, China). DAPI was used to stain the cell nucleus. The subcellular distribution of circCSPP1 was observed with a confocal laser scanning microscope (Olympus FV1000, Olympus Corporation, Japan).
RNA Pull-Down Assay
The biotinylated probe of miR-1182 and the control probe were obtained from Hanbio Biotechnology Co., Ltd. (Shanghai, China). The streptavidin-coated beads were initially coated with the probe. The cells were lysed to extract the total protein. Following pretreatment with magnetic beads, RNA and beads were mixed. Following washing for 5 times, the RNA in the pull-down complex was extracted using the TRIzol ® reagent and analyzed by qPCR.
Dual Luciferase Reporter Gene Assay
The luciferase assay was performed using the dual-luciferase reporter system psiCHECK™ (Thermo Fisher Scientific, Inc.). CircCSPP1 or RAB15 wild-type (WT) and its mutant sequence (mut) were cloned into the plasmid psiCHECK2. 293T cells (4×10^4 cells/well) were cultured overnight in 24-well plates. The psiCHECK2 and the Renilla luciferase expression plasmids were transfected into 293T cells using Lipofectamine 2000®. Following 24 h of cell incubation, luciferase activity was measured by the Luciferase reporter gene assay system (Promega Corporation).
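As a rough illustration of how a dual-luciferase readout of this kind is typically normalized (the plate layout, reading values and reporter configuration below are assumptions, not taken from the paper; in psiCHECK-2-style vectors the Renilla luciferase usually carries the test sequence while firefly luciferase serves as the internal transfection control):

```python
def relative_activity(renilla: float, firefly: float) -> float:
    """Normalize the test reporter (Renilla) to the internal
    control (firefly) for one well; assumed psiCHECK-2-style setup."""
    return renilla / firefly

def fold_change(test_pairs, control_pairs) -> float:
    """Mean normalized activity of test wells over control wells."""
    test = [relative_activity(r, f) for r, f in test_pairs]
    ctrl = [relative_activity(r, f) for r, f in control_pairs]
    return (sum(test) / len(test)) / (sum(ctrl) / len(ctrl))

# Hypothetical triplicate readings (Renilla, firefly) for a WT reporter
# with a miRNA mimic vs a scramble control:
wt_mimic = [(120, 1000), (130, 1050), (110, 980)]
wt_ctrl = [(300, 1000), (310, 990), (290, 1010)]

ratio = fold_change(wt_mimic, wt_ctrl)  # < 1: mimic suppresses the WT reporter
```

A ratio well below 1 for the WT reporter, with no change for the mutant reporter, is the pattern the paper interprets as direct binding.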
Xenograft Model
HepG2 and Huh7 cells were collected and resuspended in RPMI-1640 medium. A total of 3×10^6 HepG2 cells (100 μL) were injected into the posterior flank of 24 mice subcutaneously. The length (L) and width (W) of the tumors were monitored every 3 days. Following a 35-day period of growth, significant differences were identified with regard to the tumor volume between the two groups. All the mice were euthanized by intraperitoneal injection of an overdose of pentobarbital sodium (160 mg/kg). Subsequently, the tumors were removed and weighed. All animal procedures were approved by the Animal Research Committee of Cangzhou Central Hospital (approval no. CZCH-20190858). People's Republic of China National Standard (GB/T35892-2018) was followed for the welfare of the laboratory animals.
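The caliper measurements described above are commonly converted to a volume; a minimal sketch, assuming the widely used modified-ellipsoid formula V = (L × W²)/2 (the paper does not state which formula was applied, so this is an illustrative assumption):

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume (mm^3) from caliper length and width (mm),
    using the modified-ellipsoid convention V = L * W^2 / 2."""
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical measurement near the study's reported maximum diameter:
vol = tumor_volume(17.5, 12.8)  # ≈ 1433.6 mm^3, under the 1500 mm^3 limit
```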
circCSPP1 is Highly Expressed in Liver Cancer Tissues and Cells
To identify dysregulated circRNAs, the GSE97332 and GSE94508 datasets in the GEO database were analyzed using the GEO2R tool. hsa_circ_0001806 expression was dysregulated in both datasets (Figure 1A). It is well known that circRNAs are more stable due to their looped structure. First, Sanger sequencing was performed to verify the existence of circCSPP1; as shown in Figure 1B, the sequencing confirmed its existence. To confirm the stability of circCSPP1, RNase R was employed in the experiments. The results indicated that, following RNase R treatment, the levels of the linear form of CSPP1 were notably decreased, while no significant changes were observed in the levels of circCSPP1, which indicated the optimal stability of circCSPP1 (Figure 1C). Furthermore, liver cancer cells were treated with actinomycin D to inhibit transcription, and the half-lives of circCSPP1 and linear CSPP1 were subsequently evaluated. The results indicated that circCSPP1 had a longer half-life than linear CSPP1 (Figure 1D). In addition, specific probes were designed to carry out FISH analysis. Figure 1E revealed that circCSPP1 was located mainly in the cytoplasm. Subsequently, the expression levels of circCSPP1 in the cancer tissues and the paired normal tissues were assessed using qPCR. The 55 cases were divided into two groups: a low circCSPP1 group (n=31) and a high circCSPP1 group (n=24). The correlations between circCSPP1 expression and clinical parameters in these 55 cases were analyzed (Table 1); circCSPP1 expression was correlated with histological grade and tumor size. The results demonstrated that circCSPP1 expression was markedly upregulated in liver cancer tissues and cells compared with that in normal tissues and in human liver epithelial-2 cells (Figure 1F and G).
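The actinomycin D experiment estimates half-lives from the decay of qPCR signal after transcription is blocked. A minimal sketch of how such a half-life could be computed, assuming first-order decay (t½ = ln 2 / k) and using hypothetical time-course fractions rather than the study's data:

```python
import math

def half_life(times_h, fractions_remaining):
    """Estimate RNA half-life (h) assuming first-order decay
    N(t) = exp(-k t): least-squares slope through the origin of
    ln(fraction) vs time gives k, and t1/2 = ln(2) / k."""
    num = sum(t * math.log(f) for t, f in zip(times_h, fractions_remaining))
    den = sum(t * t for t in times_h)
    k = -num / den
    return math.log(2) / k

# Hypothetical qPCR fractions remaining after actinomycin D treatment:
linear_cspp1 = half_life([0, 4, 8, 12], [1.0, 0.5, 0.25, 0.125])  # 4 h
circ_cspp1 = half_life([0, 4, 8, 12], [1.0, 0.9, 0.8, 0.72])      # much longer
```

With numbers like these, the circular form's half-life comes out several-fold longer than the linear form's, mirroring the qualitative pattern in Figure 1D.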
Knockdown of circCSPP1 Inhibits the Proliferation and Migration of Liver Cancer Cells
Due to the altered expression of circCSPP1 in liver cancer tissues and cell lines, it was hypothesized that circCSPP1 played a critical role in the progression of liver cancer. Loss-of-function studies were carried out using siRNA against circCSPP1. qPCR analysis verified the efficiency of the siRNA knockdown of circCSPP1 (Figure 2A and B). CCK-8 results indicated that knockdown of circCSPP1 expression inhibited the proliferation of liver cancer cells (Figure 2C and D). The colony formation assay demonstrated that circCSPP1 knockdown attenuated the colony formation ability of liver cancer cells (Figure 2E). Flow cytometry was subsequently performed to evaluate the induction of apoptosis and the cell cycle distribution of liver cancer cells. The data indicated that circCSPP1 knockdown induced a significant increase in cell apoptosis (Figure 2F) and a concomitant G0/G1 arrest of liver cancer cells (Figure 2G and H). In addition, Transwell assays were performed to evaluate the migratory and invasive abilities of liver cancer cells. The results indicated that circCSPP1 knockdown reduced the number of migrating (Figure 3A-C) and invading cells (Figure 3D-F).
circCSPP1 Exerts Its Effects by Sponging miR-1182
It is well known that circRNAs sponge miRNAs in order to regulate the expression of downstream genes, which participate in several cell physiological processes. In the current study, bioinformatics analysis was performed using the Circinteractome tool to predict the potential miRNA targets of circCSPP1. The cells were transfected with different types of miRNA mimics and luciferase activity was assessed; control mimics were transfected into the scramble group. It was found that miR-1182 and miR-486-3p inhibited luciferase activity in the cells (Figure 4A). Figure 4B demonstrates that miR-1182 expression was downregulated in liver cancer cells. In addition, the RNA pull-down assay indicated that miR-1182 interacted with circCSPP1 directly (Figure 4C and D). The complementary base sequences between circCSPP1 and miR-1182 are shown in Figure 4E. The qPCR results indicated that the miR-1182 mimic notably increased the expression levels of miR-1182 (Figure 4F). The luciferase activity assay was conducted to verify the binding of circCSPP1 and miR-1182. The data indicated that miR-1182 decreased the luciferase activity of the wild-type reporter for circCSPP1, whereas this effect was not observed with the mutant-type reporter plasmid, which confirmed that miR-1182 was a sponge target of circCSPP1 (Figure 4G).
miR-1182 Targets RAB15 in Liver Cancer Cells
To elucidate the precise mechanism underlying the function of circCSPP1, the downstream targets of miR-1182 were investigated. Figure 5A indicates the complementary base sequences between RAB15 and miR-1182. The luciferase activity assay was conducted to verify the binding of RAB15 and miR-1182. miR-1182 decreased the luciferase activity of the WT reporter for RAB15 but not that of the MUT reporter, which confirmed RAB15 as a direct target of miR-1182 (Figure 5B). Western blot analysis was used to evaluate the expression levels of RAB15 following miR-1182 overexpression and knockdown. The data indicated that miR-1182 significantly inhibited the expression levels of RAB15, while miR-1182 knockdown promoted RAB15 expression (Figure 5C). Figure 5D demonstrates that RAB15 expression levels were upregulated in liver cancer cells. qPCR results indicated that miR-1182 reduced the expression levels of RAB15, while circCSPP1 overexpression reversed the effects of miR-1182, demonstrating the competing endogenous RNA (ceRNA) association between RAB15 and circCSPP1 (Figure 5E). Pearson's analysis indicated a positive correlation between RAB15 and circCSPP1 expression (Figure 5F).
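The Pearson analysis mentioned above can be sketched as follows, with hypothetical paired expression values standing in for the study's measurements:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired expression levels (arbitrary qPCR units):
circ = [1.2, 2.5, 3.1, 4.0, 5.2]
rab15 = [0.9, 2.2, 2.8, 4.1, 5.0]

r = pearson(circ, rab15)  # close to +1: strong positive correlation
```

An r near +1, as in Figure 5F, is what supports the ceRNA interpretation: higher circCSPP1 goes with higher RAB15.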
RAB15 Reverses the Function of circCSPP1 Knockdown in Liver Cancer
As a ceRNA partner of circCSPP1, it was predicted that RAB15 may also be involved in the biological processes of liver cancer cells. Co-transfection of si-circCSPP1 and the RAB15-overexpressing vector was performed. Transfection of the cells with the RAB15-overexpressing vector caused a significant elevation in the levels of RAB15. Co-transfection of si-circCSPP1 and the RAB15-overexpressing vector into the cells notably increased the levels of RAB15 compared with those of the si-circCSPP1 single-transfection group (Figure 6A). Functional studies showed that RAB15 overexpression reversed the effect of circCSPP1 knockdown on cell proliferation (Figure 6B and C), colony formation (Figure 6D), migration (Figure 6E) and invasion (Figure 6F).
circCSPP1 Promotes the Growth of Liver Cancer Cells in vivo
Subsequently, an in vivo study was carried out to further investigate the biological effects of circCSPP1 in the progression of liver cancer. A stable HepG2 cell line was established following transfection with circCSPP1 knockdown and control vectors. Tumor growth inhibition was induced by circCSPP1 knockdown (Figure 7A-D). The weight of the tumors was reduced by si-circCSPP1 (Figure 7E and F). The maximum tumor diameter and volume allowed in these studies were 20 mm and 1500 mm^3, respectively. The maximum tumor diameter and volume observed in the present study were 17.5 mm and 1432 mm^3, respectively. Furthermore, as circCSPP1 regulated the migration and invasion abilities of liver cancer cells, a tumor metastasis model was established. The results showed that circCSPP1 knockdown reduced the number of metastatic nodules (Figure 7G).
Discussion
circRNAs have been shown to play crucial roles in certain cellular functions, such as proliferation, apoptosis, differentiation and metabolism. 15 In the present study, a novel circRNA, denoted circCSPP1, was identified. The data demonstrated that circCSPP1 originates from exons 3 to 9 of CSPP1 and forms a loop structure by connecting the 3ʹ and 5ʹ splice sites. The stability of circCSPP1 was confirmed by its stable expression under RNase R digestion.
To the best of our knowledge, this is the first study of circCSPP1 that focused on its altered expression and biological effects. The upregulated expression and the stability of circCSPP1 render it a potential biomarker and a diagnostic and therapeutic target. However, the use of HepG2 as a representative liver cancer cell line has been challenged, which is a limitation of the present study; the effect of circCSPP1 on other liver cancer cell lines will be examined in future work. Various miRNAs have been shown to participate in the biological processes of liver cancer. miR-497, -320, -613, -215 and -145 are associated with decreased proliferation, migration and invasion of human liver cancer cells via targeting vascular endothelial growth factor A, specificity protein 1, liver cancer tumor suppressor gene 1 and ADAM19. [16][17][18][19][20][21] In addition, miR-1182 has been extensively studied and shown to play critical roles in cellular differentiation and development. Several studies have suggested an oncogenic or anti-cancer effect of miR-1182 in different types of cancer, including ovarian cancer, 22 myeloid leukemia, 23 colorectal cancer, 24 non-small cell lung cancer 25 and gastric cancer. 26 However, to the best of our knowledge, miR-1182 has not been studied in liver cancer to date. Moreover, its wide expression in human organs, notably during early ocular development, has attracted considerable attention. 27 miR-1182 has been shown to target hsa_circ_0076248 28 as well as circRNA8073, 29 and participates in the regulation of cell apoptosis and proliferation.
In the present study, miR-1182 was shown to be the target miRNA of circCSPP1, and its expression was downregulated in liver cancer tissues and cells. Moreover, functional studies revealed for the first time the anti-cancer effects of miR-1182 in liver cancer. Of note, miR-1182 overexpression reversed the effect of circCSPP1, which further verified the association between these two target RNA molecules. In order to further elucidate the underlying mechanism, the target gene of miR-1182 was predicted, and the analysis verified RAB15 as a potential target gene. RAB15 expression is closely associated with the susceptibility of cells to DNA damage-induced cell death. This feature may be involved in the regulation of the liver cancer cell phenotype. In future studies, the effects of RAB15 on DNA damage and the associated molecular mechanism involving circCSPP1 and miR-1182 will be investigated.
Insulin-resistant livers may coordinate impaired hepatic metabolic function and increase the risk of liver cancer. miR-833b and miR-205 have been reported as potential cooperative modulators of liver function. 30 These findings suggest a role for miR-1182 in liver metabolism, which may correlate with the progression of liver cancer and merits further study. In addition, the function of circCSPP1 was explored only in HepG2 and Huh7 cells. HLF and HLE hepatoma cell lines exhibit an aggressive malignant phenotype and rapid progression; these cell lines will be investigated in future studies to provide further support for the current conclusions. This is another limitation of the present study. The mechanism underlying the effects of circCSPP1 in liver cancer will be further assessed in future studies.
In conclusion, the findings of the present study revealed that circCSPP1 promoted the malignant behavior of liver cancer by sponging miR-1182, suggesting its potential as a diagnostic or therapeutic target in liver cancer.
RETRACTED ARTICLE: New proposal for the serum ascites albumin gradient cut-off value in Chinese ascitic patients
Serum ascites albumin gradient (SAAG) has been recognized as a reliable marker in the differential diagnosis of ascites. The etiological background of cirrhosis is rather different between Western countries and Eastern countries. The threshold of SAAG in Chinese ascitic patients has not been evaluated yet. The aim of this study was to define a new, reasonable threshold of SAAG in Chinese ascitic patients. Adult patients with ascites admitted to the Shanghai Changzheng Hospital from Jan 2004 to Jun 2010 were retrospectively analyzed. The diagnostic criteria for cirrhotic ascites were clinical manifestations, radiological features and esophageal-gastric varices, or histopathology. Serum albumin was measured by a chemical method using a commercial kit. We used receiver operating characteristic (ROC) analysis to achieve maximal sensitivity and specificity of SAAG. The mean value of SAAG in portal-hypertension-related ascites was significantly higher than that in non-portal-hypertension-related ascites (21.15 ± 4.38 g/L vs 7.48 ± 3.64 g/L, P = 0.002). A SAAG cut-off value of 12.50 g/L predicted portal hypertension ascites with a sensitivity of 99.20%, a specificity of 95.10% and an accuracy of 97.65%. SAAG is useful to distinguish portal-hypertension-related and non-portal-hypertension-related ascites, and 12.50 g/L may represent a more reasonable threshold in Chinese ascitic patients. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1602582638991860
Introduction
Ascites is a common manifestation of advanced liver disease. Besides, it can also occur in individuals with extrahepatic diseases such as abdominal malignancy, tuberculosis, and autoimmune diseases. Traditionally, ascites is classified as being either transudative or exudative based on the ascitic fluid total protein (AFTP) concentration [1], the ratio of ascitic fluid protein concentration to serum total protein concentration or lactic dehydrogenase (LDH) [2]. However, the serum ascites albumin gradient (SAAG), defined as the serum albumin concentration minus the ascitic fluid albumin concentration, has been proposed as a physiologically based alternative in the classification of ascites in the past 20 years [3][4][5]. Hoefs [6] first introduces the SAAG and reports that SAAG can reflect the portal vein pressure and improve the accuracy of ascites identification. Thereafter, several investigators also demonstrate the superiority of SAAG in distinguishing portal hypertensive ascites (SAAG > 11 g/L) from non-portal hypertensive ascites (SAAG < 11 g/L) [6][7][8][9]. Thus, a SAAG of 11 g/L has been considered a reasonable threshold in clinical practice.
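The SAAG definition and the classic 11 g/L rule described above can be expressed as a small helper (the handling of values exactly at the boundary is an assumption; the cited studies describe values above and below 11 g/L):

```python
def saag(serum_albumin_g_l: float, ascites_albumin_g_l: float) -> float:
    """Serum-ascites albumin gradient (g/L): serum minus ascitic albumin."""
    return serum_albumin_g_l - ascites_albumin_g_l

def classify(gradient: float, cutoff: float = 11.0) -> str:
    """Classic rule: a gradient at or above the cutoff suggests
    portal-hypertension-related (PHT) ascites; below it, NON-PHT causes
    such as malignancy or tuberculosis."""
    return "PHT" if gradient >= cutoff else "NON-PHT"

# Illustrative values (g/L): a cirrhotic-type and a malignant-type pattern.
example_pht = classify(saag(35.0, 10.0))   # gradient 25 -> "PHT"
example_non = classify(saag(30.0, 24.0))   # gradient 6  -> "NON-PHT"
```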
However, there are also some controversies regarding SAAG [10][11][12]. In 2002, Goran et al. [13] revise the threshold of SAAG and find that when using the cut-off value of 11 g/L, the diagnostic sensitivity is 97.56% and the specificity is 46.34%, while using 15.86 g/L, the sensitivity and specificity reach their maximum. According to Starling, SAAG is the best indicator of portal hypertension to distinguish non-portal hypertensive and portal hypertensive ascites [14]. However, the best threshold may call for further research: the primary study reporting 11 g/L included only ascites due to cirrhosis and tumor, excluding tuberculosis. In fact, tuberculous ascites are also frequently seen in clinical practice. Cirrhosis is mainly caused by alcohol abuse in Western countries, while in China it is mainly caused by hepatitis B virus infection. Besides, China is also an epidemic area for tuberculosis. To date, the SAAG in Chinese patients with ascites has not been evaluated yet. Hence, this study aimed to define a new cut-off value of SAAG with maximal sensitivity and specificity in Chinese patients with ascites.
Patients
Adult patients with ascites admitted to the Shanghai Changzheng Hospital from Jan 2004 to Jun 2010 were retrospectively analyzed. The diagnostic criteria for cirrhotic ascites were mainly based on clinical manifestations, radiological features and esophageal-gastric varices, or histopathology, excluding spontaneous bacterial peritonitis (SBP), malignancy and tuberculosis. Malignant ascites was diagnosed when the results of ascitic fluid cytologic study or peritoneal biopsy were positive. The diagnosis of tuberculous peritonitis required ascitic acid-fast stain or mycobacterial growth in a culture of ascites, or peritoneal biopsy with or without histologically proven granulomatous peritonitis, and without tuberculous pericarditis. Serum and ascitic fluid were obtained (maximum interval 24 hours) and tested for albumin concentration and other related parameters. The albumin concentration was detected by a chemical method using a commercial kit (P800, Roche); the other parameters included routine ascitic fluid tests, protein, tumor markers, cast-off cells and adenosine deaminase (ADA). Patients with an unknown origin of ascites or without SAAG data were excluded. We classified cirrhotic ascites as portal-hypertension-related (PHT) ascites, and malignant or tuberculous ascites as non-portal-hypertension-related (NON-PHT) ascites.
Statistical analysis
Continuous variables were tested for normal distribution, expressed as mean ± standard deviation (SD) and analyzed by Student's t test. Rates were analyzed by Chi-square test. The ROC curve was used to assess the diagnostic value of SAAG. The statistical analysis was performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA) and P < 0.05 was considered statistically significant.
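The group comparison by t test can be illustrated with a minimal Welch-style t statistic (a variant of the Student's t test that tolerates unequal variances; the study itself used SPSS, and the sample values below are hypothetical, merely echoing the reported group means):

```python
import math

def welch_t(a, b) -> float:
    """Welch's two-sample t statistic; robust to unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical SAAG samples (g/L) centered near the reported group means
# (PHT ~21 g/L, NON-PHT ~7.5 g/L):
pht = [20.0, 22.0, 21.0, 23.0]
non_pht = [7.0, 8.0, 6.0, 9.0]

t = welch_t(pht, non_pht)  # large positive t: groups clearly separated
```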
Patient characteristics
A total of 213 patients diagnosed with ascites were included in this study. The baseline characteristics are shown in Table 1. The clinical findings of the patients in the PHT group and the NON-PHT group were similar with respect to age and sex ratio.
SAAG analysis
The mean SAAG level of all patients involved was 15.95 ± 7.82 g/L. The mean level of SAAG in the PHT group (cirrhotic ascites) was significantly higher than that in the NON-PHT group (malignant and tuberculous ascites) (21.15 ± 4.38 g/L vs 7.48 ± 3.64 g/L, P = 0.002). Besides, the SAAG levels of the patients with malignant or tuberculous ascites were both significantly lower than those of patients with cirrhotic ascites (Table 2). In the PHT group, all patients had a SAAG of 11 g/L or greater, while in the NON-PHT group, 12 patients had a SAAG of 11 g/L or higher and 69 patients had a SAAG less than 11 g/L (Figure 1).
ROC curve
The ROC curve of SAAG is shown in Figure 2. The area under the curve was 0.987, and a SAAG cut-off value of 12.50 g/L was the best threshold to predict PHT ascites, with a sensitivity of 99.20%, a specificity of 95.10% and an accuracy of 97.65%. When using the SAAG cut-off value of 11 g/L, the sensitivity and specificity were 100.00% and 85.19%, with an accuracy of 94.37%. The accuracy and specificity using a SAAG of 12.50 g/L were significantly higher than those using a SAAG of 11 g/L (accuracy, 97.65% vs 94.37%, P<0.01; specificity, 95.10% vs 85.19%, P<0.01).
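The comparison of the two cut-offs can be reproduced in miniature: given SAAG values and true classes, sensitivity, specificity and accuracy at each threshold follow directly. The sample values below are hypothetical and only echo the qualitative pattern reported (a few NON-PHT cases falling between 11 and 12.5 g/L):

```python
def metrics(values, labels, cutoff):
    """Sensitivity, specificity and accuracy of the rule
    'SAAG >= cutoff predicts PHT'. labels: True for PHT, False for NON-PHT."""
    tp = sum(1 for v, l in zip(values, labels) if l and v >= cutoff)
    fn = sum(1 for v, l in zip(values, labels) if l and v < cutoff)
    tn = sum(1 for v, l in zip(values, labels) if not l and v < cutoff)
    fp = sum(1 for v, l in zip(values, labels) if not l and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(values)

# Hypothetical SAAG values (g/L): five PHT cases, then five NON-PHT cases,
# two of which fall between 11 and 12.5 g/L.
vals = [21, 18, 25, 15, 13, 7, 9, 11.5, 6, 12]
labs = [True] * 5 + [False] * 5

sens11, spec11, acc11 = metrics(vals, labs, 11.0)
sens125, spec125, acc125 = metrics(vals, labs, 12.5)
# Raising the cut-off to 12.5 gains specificity here without losing sensitivity.
```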
Discussion
Starling [14] recognizes that the protein content of edema fluid is a reflection of its osmotic pressure and that the osmotic pressure gradient between blood and interstitial fluid is a direct function of the corresponding capillary hydrostatic pressure gradient. According to his concept, the protein concentration gradient between serum and ascites reflects the portal pressure. Hoefs [6] first reports a linear correlation between the SAAG and portal venous pressure in 56 patients with chronic liver disease. More recently, this relationship is also confirmed by Rector and Reynolds [8], who report an excellent correlation between the SAAG and portal pressure. Using AFTP to distinguish exudate and transudate is influenced by serum protein concentration as well as portal pressure. In contrast, the SAAG correlates directly with only one physiologic factor, the portal pressure [6]. Portal pressure remains stable, and therefore there is no or only minor change of SAAG despite diuresis or therapeutic paracentesis. Hence, it is possible to understand why the SAAG of 11 g/L has been widely used to differentiate PHT and NON-PHT ascites. However, Kajani et al. [15] report that when using the value of SAAG 11 g/L for differentiating PHT and NON-PHT ascites, the accuracy is only 87.5%. In addition, SAAG corrected with the serum globulin level could also improve the accuracy in determining portal hypertension [16]. Cirrhosis, cancer and tuberculosis are the most common causes of ascites. As mentioned above, most studies on SAAG have been performed in patients with cirrhotic or malignant ascites. However, its utility in tuberculous ascites has not been assessed yet. Therefore, our study involved 27 patients with tuberculous ascites. In this analysis, all the patients with cirrhotic ascites had a SAAG of 11 g/L or greater, which was rather similar to the study by Shakil et al. [17]. In contrast to the study by Kajani et al. [15], the causes of cirrhosis did not appear to affect SAAG. The albumin gradient in 83 patients with viral-hepatitis-related cirrhosis (21.28 ± 4.22 g/L) was very near to that in the 49 patients with non-viral-hepatitis-related cirrhosis (20.94 ± 4.67 g/L). In addition, the SAAG in 117 patients with nonalcoholic liver disease was not different from that in the 15 patients with alcoholic liver disease (21.26 ± 4.41 g/L vs 20.27 ± 4.17 g/L).

Figure 1 The scatter of SAAG in classifying samples. In the PHT group, all patients had a SAAG of 11 g/L or greater, while in the NON-PHT group, 12 patients had a SAAG of 11 g/L or higher and 69 patients had a SAAG less than 11 g/L.

Figure 2 The ROC curve of SAAG. The area under the curve was 0.987, and the SAAG cut-off value of 12.50 g/L was the best threshold to predict PHT ascites, with a sensitivity of 99.20%, a specificity of 95.10% and an accuracy of 97.65%. The accuracy and specificity using SAAG 12.50 g/L were significantly higher than those using SAAG 11 g/L (accuracy, 97.65% vs 94.37%, P<0.01; specificity, 95.10% vs 85.19%, P<0.01).
With regard to the discrimination between malignant and nonmalignant ascites, Pare et al. [7] report that SAAG less than 11 g/L is an excellent criterion for the diagnosis of malignant origin of ascites. Similar observations have also been made by Mauer et al. [5]. Based on the present experience, it appears that the criterion of SAAG less than 11 g/L for the distinction between malignant and nonmalignant ascites may be less specific than previously thought. In the present study, a SAAG less than 11 g/L was seen in only 43 of 54 patients with malignant ascites without metastatic liver involvement. Similarly, the gradient in the patients with malignant ascites also did not differ from the gradient in the patients with tuberculous ascites. It suggests that SAAG cannot further distinguish malignant ascites from tuberculous ascites.
Runyon et al. [9] report a very high accuracy of 96.7% for a SAAG of 11 g/L based on 901 samples. In our study, when using SAAG 11 g/L, the accuracy was 94.37%, which was slightly lower than 96.7%, while the specificity was only 85.19%. This may be due to the different ascitic etiology between Western and Eastern countries: the threshold of 11 g/L is based mainly on the prevalence of alcoholic cirrhosis in Western countries, whereas in China cirrhosis is mostly caused by HBV infection. Furthermore, previous evaluations of diagnostic tests often use sensitivity, specificity and accuracy, which depend on the disease prevalence of the study population; in fact, the etiological distribution of ascites differs between Western and Eastern countries. Moreover, the receiver operating characteristic (ROC) curve is currently recognized as the best way to measure diagnostic information and support decision-making. A cut-off value obtained by ROC curve has greater accuracy and clinical utility [18,19]. Our research derived a new SAAG cut-off of 12.5 g/L by ROC curve. Compared with the previous SAAG of 11 g/L, the new cut-off had higher accuracy and specificity in distinguishing PHT and NON-PHT ascites.
Based upon the data herein presented, we conclude that SAAG is useful to distinguish PHT and NON-PHT ascites, and 12.5 g/L may represent a more reasonable threshold in Chinese ascitic patients. However, further studies with larger samples are still needed.
β-endorphins as Possible Markers for Therapeutic Drug Monitoring
This study was performed in order to investigate the possible role of brain beta-endorphins as markers for antidepressive drug therapy monitoring. The experiment was done using amitriptyline and trazodone as antidepressants. For quantification of brain beta-endorphins we used the RIA technique. Our results showed a significant decrease of brain beta-endorphin concentration in drug-pretreated animals vs. those of the control group treated with 0.95% NaCl. The lower values were obtained in trazodone-pretreated animals. This study shows that the use of psychoactive drugs has an influence on brain beta-endorphin concentration. Beta-endorphins could be of great importance when used as markers for evaluation of patient treatment.
Introduction
The investigations of the relation between endorphins and depression began with findings of enkephalin and opioid receptors located in mood-response areas of the brain. Previous research raised the question whether excess, deficiency, or static levels of endorphins cause depression. Also, it was hypothesized that endorphins may not be a factor in depression at all. Therefore, it was presumed that it is premature to conclude how the endogenous opioid system is involved in depression. Endorphins are likely to modulate nervous system activity over the long term rather than over a short time period. Endorphins are released in shock, freeze, "fight or flight", trauma, physical pain and in all stress including psychological stress. They serve as an analgesic (pain killing) and anesthetic and cause dissociation, immobilization and loss of self. Depressive states induce elevation of stress hormone levels in blood. Since endorphins are released along with ACTH in response to any stressor, depressives are supposed to "elevate" endorphin levels as well (,).
ANTIDEPRESSANT DRUGS

There are numerous antidepressant medications, which are specifically designed to mimic the effects of endorphins in the brain to make a depressed person feel better or cope with stressful situations. Because of the stimulated production of endorphins, studies have shown that physical activity can have a similar effect as antidepressants. But unlike antidepressants, there are no negative physiological side effects of exercise. Antidepressants can also take between two and three weeks before showing an improvement of effects, whereas exercise can boost mood instantly. Investigations in the field of psychopharmaceutical drugs have given us a lot of knowledge about their positive and side effects. Tricyclic antidepressants were the first in use, expressing their effects primarily through inhibition of norepinephrine and serotonin reuptake by nerve endings (, ). More recently, practice patterns have shifted to the antidepressant class as the most commonly prescribed for anxiety disorders. This is due to their proven efficacy in anxiety as well as in commonly co-morbid depressive symptoms. Although efficacy between the selective serotonin reuptake inhibitors (SSRIs) and the tricyclic antidepressants (TCAs) is similar, the SSRIs have an enhanced safety and tolerability profile, making them the preferred agents for all anxiety disorders (). The efficacy of each antidepressant available has been found equal to that of amitriptyline in double-blind studies as far as mild to moderate depression is involved. However, it seems that some antidepressants are more effective than others in the treatment of severe types of depression (). The considerable side effect burden, the possibility of producing anticholinergic delirium, and their contribution to promoting falls and hip fractures make TCAs especially unsuitable for the elderly and no longer recommended. SSRIs have been proved to relieve many of the anxiety and/or depressive symptoms
associated with anxiety disorders, such as generalized anxiety disorder (GAD), social anxiety disorder (SAD), panic disorder, obsessive-compulsive disorder (OCD), and post-traumatic stress disorder (PTSD). They are effective, well tolerated, and relatively safe in overdose situations, making them an ideal choice for the treatment of anxiety, particularly in the elderly population. Although the majority of SSRIs have gained US Food and Drug Administration (FDA) approval for major depressive disorder, they may not be interchangeable in the treatment of anxiety disorders, because each SSRI has different FDA approvals for the anxiety disorders (). Trazodone, nefazodone and bupropion have a less well defined neuropharmacology and are considered atypical. Nevertheless, they have better efficiency and tolerability, leading to better acceptance. Trazodone is an effective antidepressant drug with a broad therapeutic spectrum, including anxiolytic efficacy. This triazolopyridine antidepressant is currently the second most commonly prescribed agent for the treatment of insomnia, due to its sedating qualities. Given trazodone's widespread use, a careful review of the literature was conducted to assess its efficacy and side effects when given for the treatment of insomnia. Although trazodone is usually referred to as a serotonin (5-HT) reuptake inhibitor, this pharmacological effect appears to be too weak to fully account for its clinical effectiveness (, ).
ANTIDEPRESSANTS AND β-ENDORPHINS

Previous investigations showed that acute amitriptyline and clomipramine produce naloxone-reversible antinociception. This apparent opioid-like involvement was further investigated by measuring β-endorphin levels in the hypothalamus following acute and chronic treatment with these antidepressants. They demonstrated significantly raised levels of β-endorphin. Support was provided for the suggestion that antidepressants activate opioid systems, through both a direct opioid receptor interaction and an indirect action through enhanced release of opioid peptides. Moreover, it is postulated that the direct action of antidepressants on opioid receptors and the endogenous opioid peptides released interact as agonists at both μ- and δ-opioid receptors to inhibit nociceptive transmission, since the activity is antagonized by both naloxone and naltrindole (). Research showed a synergistic influence of trazodone, combined with mud baths, in the therapy of fibromyalgia syndrome, improving the psychological response and the restoration of homeostasis in answer to stress. Desipramine and paroxetine, used in animal depression models, did not significantly affect the extracellular levels of β-endorphins in the nucleus accumbens, but chronic antidepressant treatment did normalize the serotonin-induced release of β-endorphins, as well as the behavioral manifestations of depressive behaviour (, ).

BOSNIAN JOURNAL OF BASIC MEDICAL SCIENCES 2007; 7 (1): 11-14 RADIVOJ JADRIĆ ET AL.: β-ENDORPHINS AS POSSIBLE MARKERS FOR THERAPEUTIC DRUG MONITORING
Material and Methods
Albino Wistar rats (weight g) were used, divided into groups of , with each animal serving as its own control. The Ethical Committee of our Institution approved the experiment. Amitriptyline (mg/kg/day) and trazodone (mg/kg/day) were administered to the experimental groups, and , NaCl solution to the control group. Before brain samples were collected, all animals were properly sacrificed. Collection of brain samples was performed immediately for the control group, and after the st and th day of amitriptyline and trazodone administration in the treated animals. For analyzing β-endorphin levels we used the RIA technique for quantification of human serum and brain β-endorphin (Nichols Institute, San Juan Capistrano, USA), and for radioactivity measurement a β-counter with a gamma-radiation source (LKB Wallac, Sweden). The β-endorphin concentration is directly proportional to the radioactivity measured in the samples. Concentrations are given in pg/g for brain β-endorphin values. Statistical evaluation of the obtained results was performed by computing the mean value, standard deviation and standard error. The level of significance was determined by use of Student's t test, with values of p ≤ , considered significant.
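The two-sample Student's t test described above can be sketched in a few lines of stdlib Python. The concentration values below are hypothetical placeholders for illustration only (the paper's actual doses and sample sizes were lost in extraction); the statistic itself is the standard pooled-variance form.

```python
import statistics

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance,
    the test used in the paper (significance threshold p <= 0.05)."""
    na, nb = len(a), len(b)
    # pooled (weighted) variance estimate across both groups
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (
        sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical brain beta-endorphin concentrations (pg/g), illustration only.
control = [105.0, 98.0, 112.0, 101.0, 95.0]
treated = [82.0, 79.0, 88.0, 75.0, 84.0]
t = students_t(treated, control)  # negative t: treated group is lower
```

The resulting t value would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom to obtain the p-value.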
Results and Discussion
Our data, presented in charts and graphics, show brain β-endorphin values after the st and th day of antidepressant drug administration. The values obtained for each day were compared with each other, and with those of the control group.
Our results differ from the results of other authors (). After the st day, the results show a significant decrease of brain β-endorphins, which was also present in rat brains after the th day of continuous antidepressant administration. In general, amitriptyline produced a greater response, with higher brain β-endorphin concentrations compared to trazodone. There was a significant difference between the brain β-endorphin values of animals treated with trazodone and those of the amitriptyline group after the st day of treatment. There was no significant difference between rat brain β-endorphin values in the other compared groups/days, showing lower brain β-endorphin concentrations on each day of continuous trazodone versus amitriptyline administration (chart ). Individual values for each animal of the experimental groups are presented in Graphic . Our results show mostly lower brain β-endorphin values in animals treated with trazodone. Those differences may depend on lower brain β-endorphin values before the beginning of treatment. We consider that all of these changes can be caused by differences in the mechanism of action of triazolopyridine (trazodone) and tricyclic antidepressant drugs on brain β-endorphin content. That points at possible differences in the intensity of synthesis of β-endorphins, the degradation of brain β-endorphins and their possible release into the blood stream, caused by constant antidepressant administration.
Conclusion
- Rat brain β-endorphin values after continuous antidepressant treatment are significantly lower than those in the control group.
- In general, amitriptyline produced a greater response, with higher brain β-endorphin concentrations.
- Values of rat brain β-endorphins in trazodone-pretreated animals are lower compared to the values of the amitriptyline-treated groups.
- Evaluation of brain β-endorphin levels could be used as a marker for the investigation of psychoactive drug effects.
|
2018-04-03T01:31:22.283Z
|
2007-02-20T00:00:00.000
|
{
"year": 2007,
"sha1": "cf935da5cfd552d3795918e2646b5dbd015f7541",
"oa_license": "CCBY",
"oa_url": "https://www.bjbms.org/ojs/index.php/bjbms/article/download/3081/810",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "cf935da5cfd552d3795918e2646b5dbd015f7541",
"s2fieldsofstudy": [
"Biology",
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119159874
|
pes2o/s2orc
|
v3-fos-license
|
Formal group exponentials and Galois modules in Lubin-Tate extensions
Explicit descriptions of local integral Galois module generators in certain extensions of $p$-adic fields due to Pickett have recently been used to make progress with open questions on integral Galois module structure in wildly ramified extensions of number fields. In parallel, Pulita has generalised the theory of Dwork's power series to a set of power series with coefficients in Lubin-Tate extensions of $\Q_p$ to establish a structure theorem for rank one solvable p-adic differential equations. In this paper we first generalise Pulita's power series using the theories of formal group exponentials and ramified Witt vectors. Using these results and Lubin-Tate theory, we then generalise Pickett's constructions in order to give an analytic representation of integral normal basis generators for the square root of the inverse different in all abelian totally, weakly and wildly ramified extensions of a p-adic field. Other applications are also exposed.
Introduction
The main motivation for this paper came from new progress in the theory of Galois module structure. Indeed, explicit descriptions of local integral Galois module generators due to Erez [5] and Pickett [16] have recently been used to make progress with open questions on integral Galois module structure in wildly ramified extensions of number fields (see [18] and [23]).
Precisely, let p be a prime number. Pickett, generalising work of Erez, has constructed normal basis generators for the square root of the inverse different in degree p extensions of any unramified extension of Q p . His constructions were obtained by using special values of Dwork's power series. Moreover, they have recently been used by Pickett and Vinatier [18] to prove that the square root of the inverse different of E/F is free over Z[G] under certain conditions on both the decomposition groups of G and the base field F , when E/F is a finite odd degree Galois extension with group G.
In parallel, Pulita has generalised the theory of Dwork's power series to a set of power series with coefficients in Lubin-Tate extensions of Q p in order to classify rank one p-adic solvable differential equations [19].
Our main goal was to generalise Erez and Pickett's construction in order to give explicit descriptions of integral normal basis generators for the square root of the inverse different in all abelian totally, weakly and wildly ramified extensions of a p-adic field. In this paper, our goal is totally achieved using a combination of several tools : formal group exponentials, Lubin-Tate theory, and the theory of ramified Witt vectors. This leads us to generalise Pulita's formal power series to power series with coefficients in Lubin-Tate extensions of any finite extension of Q p .
At the same time, we also get explicit generators for the valuation ring over its associated order, in maximal abelian totally, weakly and wildly ramified extensions of any p-adic field.
Notation. Let p be a rational prime, and let Q p be the field of p-adic numbers, Q p be a fixed algebraic closure of Q p and C p be the completion of Q p with respect to the p-adic absolute value. We let v p and | | p be the normalised p-adic valuation and absolute value on C p such that v p (p) = 1 and |x| p = p −vp(x) . As v p and | | p are completely determined by each other, either can be used in the statement of results; we will use the valuation v p as this is the convention in the literature on Galois modules in Lubin-Tate extensions.

Throughout this paper, for any extension K/Q p considered we will always assume K is contained in Q p and we will denote by O K , P K and k its valuation ring, maximal ideal and residue field respectively. We identify the residue field of Q p with the field of p elements, F p . For any n ∈ Z >0 , we denote by µ n the group of nth roots of unity contained in Q × p .
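The normalisation above, v_p(p) = 1 and |x|_p = p^{−v_p(x)}, is concrete for rational numbers. A small stdlib sketch (Python is my choice of language here, not the paper's):

```python
from fractions import Fraction

def vp(x, p):
    """Normalised p-adic valuation of a nonzero rational x, so vp(p, p) == 1."""
    x = Fraction(x)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # count powers of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # powers of p in the denominator count negatively
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p**(-vp(x))."""
    return p ** (-vp(x, p))
```

For example, vp(12, 2) is 2 and vp(1/9, 3) is −2, so |8|_2 = 1/8: large powers of p are p-adically small.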
This generalises Dwork's power series as E P,1 (X) = E γ (X) when P (X) = X p + pX. For all choices of P (X) and n, the power series E n (X) is over-convergent and has the property that E n (1) is a primitive p n th root of unity ζ p n inQ p . Comparing degrees then shows us that Q p (ω n ) = Q p (ζ p n ) for all n and all choices of P (X). We remark that this result is also a consequence of basic Lubin-Tate theory, see [15] or [20] for details of this theory.
In this paper, we first generalise Pulita's exponentials to power series with coefficients in Lubin-Tate extensions of any finite extension K of Q p , in particular by combining Fröhlich's notion of a formal group exponential ([7], Chap. IV, §1) with the theory of ramified Witt vectors. Note that we impose no other restrictions on our base field K and no restrictions on the uniformising parameter used to construct the Lubin-Tate extensions of K. Inspired by the methods of Pulita, we prove the following core result of the paper :

Theorem 1 Let K be a finite extension of Q p , with valuation ring, maximal ideal and residue field denoted by O K , P K and k respectively. Let q = card(k), a power of p.
Let π and π ′ be two uniformising parameters for O K , and let P, Q ∈ O K [[X]] be Lubin-Tate polynomials with respect to π and π ′ respectively, i.e., for the unique formal group that admits P as an endomorphism and exp F P ∈ XK[[X]] for the unique power series such that Let {ω i } i>0 be a coherent set of roots associated to Q.
We then apply Theorem 1 to give two explicit results in Lubin-Tate theory. For each integer n ≥ 1, we denote by K π,n the n-th Lubin-Tate extension of K with respect to π. This extension is abelian and totally ramified, with degree q n−1 (q − 1) and conductor n. Let P ∈ O K [X] be a Lubin-Tate polynomial with respect to π. The extension K π,n /K is generated by any primitive n-th Lubin-Tate division point with respect to P , i.e., any element ω ∈ Q p such that P (n) (ω) = 0 whereas P (n−1) (ω) ≠ 0.
1. the trace element T r K π,2 /M π,2 (E Q P,2 (1)) is a uniformising parameter of M π,2 and a generator of the valuation ring O M π,2 of M π,2 over its associated order in the extension M π,2 /K ;

2. if p is odd, then the elements T r K π,2 /M π,2 (E Q P,2 (1)) π and T r K π,2 /M π,2 (E Q P,2 (1)) + q π are both generators of

Furthermore, Part 2 of this theorem will enable us to give explicit integral normal basis generators for the square root of the inverse different in every abelian totally, weakly and wildly ramified extension of any p-adic field (see Corollary 3.2).
We also remark that in this second part, the first element seems the more natural, however the second is in fact the generalisation of Erez's basis generator for the square root of the inverse different. If these basis generators can be used in local calculations in a similar way to those of Erez and Pickett, it should be possible to solve the case of whether A E/F is free over Z[G] whenever the decomposition groups at wild places are abelian and in particular whenever E/F itself is abelian. We hope this will be possible in the future, however, so far these calculations have eluded us.
Organisation of the paper. This paper is organised into three sections. In Section 1, we give the background to the theory we need to prove our results. Precisely, we first introduce Lubin-Tate formal groups in their original setting and also in terms of Hazewinkel's so called functional equation approach. We also introduce Fröhlich's notion of formal group exponentials and logarithms. Following work of Ditters, Drinfeld, and Fontaine and Fargues, we finally introduce the theory of ramified Witt vectors needed to generalise Pulita's methods. In Section 2, we study the properties of the power series E Q P,n (X) and prove Theorem 1. We also improve on Fröhlich's original bound of the radius of convergence of any formal group exponential coming from a Lubin-Tate formal group over a non-trivial extension of Q p . In Section 3, we explore the applications described above and prove Propositions 1 and 2, as well as Theorem 2.
Formal groups
Let A be a commutative ring with identity. Definition 1.1 We define a 1-dimensional commutative formal group over A to be a formal power series F (X, Y ) ∈ A[[X, Y ]] such that: 1. F (X, Y ) ≡ X + Y mod deg 2 ; 2. F (X, F (Y, Z)) = F (F (X, Y ), Z) ; 3. F (X, Y ) = F (Y, X). Throughout, all the formal groups we consider will be 1-dimensional commutative formal groups. For brevity we will now refer to these simply as formal groups.
Properties 1-3 can be used to prove that there exists a unique j(X) ∈ XA[[X]] such that F (X, j(X)) = 0 (see Appendix A.4.7 of [9]). This means that the formal group F (X, Y ) endows XA[[X]], among other sets, with an abelian group structure. Notation 1.2 When considering a set endowed with such a group structure we write the group operation as + F . We will also use the notation − F to denote this group operation composed with the group inverse, for example X − F Y = F (X, j(Y )). Moreover, we say that the homomorphism f is an isomorphism if there exists a homomorphism g such that f ∘ g and g ∘ f are both the identity.
Lubin-Tate formal groups
We now describe a special type of formal group, due originally to Lubin and Tate. Such formal groups are used in local class field theory to construct maximal totally ramified abelian extensions of a p-adic field K. For full details see, for example, [15] or [20].
Let K be a finite extension of Q p , fix a uniformising parameter π of O K and let q = |O K /P K | be the cardinality of the residue field of K. Definition 1.4 We define F π as the set of formal power series P (X) over O K such that P (X) ≡ πX mod deg 2 and P (X) ≡ X q mod P K . Such power series are called Lubin-Tate series with respect to π.
For each P ∈ F π there exists a unique formal group F P (X, Y ) ∈ O K [[X, Y ]] which admits P as an endomorphism. Such formal groups are known as Lubin-Tate formal groups. For each P ∈ F π and each a ∈ O K , there exists a unique formal power series, [a] P (X) ∈ XO K [[X]], such that P ([a] P (X)) = [a] P (P (X)) and [a] P (X) ≡ aX mod deg 2. Further, the map a → [a] P (X) is an injective ring homomorphism O K → End O K (F P ) and for any P, Q ∈ F π , the formal groups F P (X, Y ) and F Q (X, Y ) are isomorphic over O K .
Let P Cp = {x ∈ C p : v p (x) > 0}. For P (X) ∈ F π and a ∈ O K , the formal power series F P (X, Y ) and [a] P (X) converge to limits in P Cp when evaluated at elements of P Cp . We can thus use the abelian group operation + F and the injective ring homomorphism a → [a] P (X) to endow P Cp with an O K -module structure. For every n ≥ 1, we then let T P,n = {x ∈ P Cp : [π n ] P (x) = 0} be the set of π n -torsion points of this module and refer to it as the set of the nth Lubin-Tate division points with respect to P . If x ∈ T P,m if and only if m ≥ n, then we say x is a primitive nth division point.
We let K π,n = K(T P,n ) and K π = ∪ n K π,n .
The set T P,n depends on the choice of the polynomial P (X) but the field K π,n depends only on the uniformising parameter π. The extensions K π,n /K are totally ramified, abelian and of degree q n−1 (q − 1). We have K ab = K π K un and K un ∩ K π = K, where K ab and K un are the maximal abelian and unramified extensions of K respectively.
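In the classical case K = Q p , π = p and P (X) = (X + 1)^p − 1, the iterate P^(n)(X) = (1 + X)^(p^n) − 1 has degree q^n = p^n, so the primitive n-th division points (roots of P^(n) that are not roots of P^(n−1)) number q^(n−1)(q − 1), matching the degree of K π,n /K. A small polynomial-composition check of this degree count (my own illustration, not from the paper):

```python
from math import comb

def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def poly_compose(f, g):
    """Compute f(g(X)) by Horner's scheme on coefficient lists."""
    out = [0]
    for c in reversed(f):
        out = poly_mul(out, g)
        out[0] += c
    while len(out) > 1 and out[-1] == 0:   # trim trailing zeros
        out.pop()
    return out

p = 3                                      # K = Q_p, pi = p, q = p
P = [comb(p, k) for k in range(p + 1)]     # coefficients of (1+X)^p
P[0] -= 1                                  # P(X) = (1+X)^p - 1, a Lubin-Tate series
P2 = poly_compose(P, P)                    # P^(2)(X) = (1+X)^(p^2) - 1

expected = [comb(p ** 2, k) for k in range(p ** 2 + 1)]
expected[0] = 0
assert P2 == expected                      # deg P^(2) = q^2
# primitive 2nd division points: deg P^(2) - deg P^(1) = q(q - 1) of them
assert (len(P2) - 1) - (len(P) - 1) == p * (p - 1)
```

The same composition argument gives deg P^(n) = q^n in general, hence q^(n−1)(q − 1) primitive points at level n.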
Hazewinkel's approach to Lubin-Tate formal groups
We now describe a different approach to the construction of Lubin-Tate formal groups due to Hazewinkel. This approach enables us to use Hazewinkel's functional equation lemma to prove the integrality of various power series relating to formal groups, which will be essential in the sequel. For full details see [9] (Chap. I, §2).
Recall that p is a rational prime, K is a finite extension of Q p , π is a fixed uniformising parameter of O K and q is the cardinality of the residue field of K.
We denote by f −1 g (X) ∈ XK[[X]] the unique power series such that f g (f −1 g (X)) = f −1 g (f g (X)) = X.
We now state two parts of the functional equation lemma for this special setting.
It is routine to check that F g (X, Y ) := f −1 g (f g (X) + f g (Y )) is a formal group and from part 1 of the previous theorem we know that it has coefficients in O K . In fact, it is a Lubin-Tate formal group and every Lubin-Tate formal group can be constructed in this manner. This link is described in the following proposition. Then, ). 3. These relations give a one to one correspondence between the Lubin-Tate formal groups obtained from power series P (X) ∈ F π and power series g(X). We also observe that for any a ∈ O K , substituting h(X) = ag(X) into part 2 of Theorem 1.5 then gives us f −1 g (af g (X)) ∈ XO K [[X]].
Formal group exponentials
Hazewinkel's power series f g (X) and f −1 g (X) can be thought of as special formal group isomorphisms which were first studied by Fröhlich in ( [7], Chap. IV, §1).
Let E be any field of characteristic 0, let F (X, Y ) be a formal group over E and let G a (X, Y ) = X + Y be the additive formal group. There exists a unique isomorphism log F (X) ∈ XE[[X]] from F to G a with linear coefficient 1, known as the formal group logarithm (loc. cit., Prop. 1). The inverse of log F (X) is known as the formal group exponential and is denoted by exp F (X); we note that necessarily we also have exp F (log F (X)) = log F (exp F (X)) = X. Let F P (X, Y ) be a Lubin-Tate formal group for K as in Prop. 1.6. We then have exp F P = f −1 g and log F P = f g , and these power series are uniquely determined by the following equivalent identities: log F P (F P (X, Y )) = log F P (X) + log F P (Y ) and F P (X, Y ) = exp F P (log F P (X) + log F P (Y )). We also observe that [a] P (X) = exp F (a log F (X)).
Remark 1.7 The reason these power series are referred to as formal group exponentials and formal group logarithms is that if K = Q p , then P (X) = (X + 1) p − 1 ∈ F p and F P (X, Y ) = X + Y + XY = G m , the multiplicative formal group. We then have exp F P (X) = exp(X) − 1 and log F P (X) = log(1 + X), where log and exp are the standard logarithmic and exponential power series.
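The remark can be checked directly: with F(X, Y) = X + Y + XY and P(X) = (X + 1)^p − 1, the endomorphism property P(F(x, y)) = F(P(x), P(y)) is the identity ((1 + x)(1 + y))^p − 1 expanded two ways, and exp(a · log(1 + x)) − 1 = (1 + x)^a − 1 recovers [a]_P. A quick numeric sanity check (mine, not the paper's):

```python
import math

p = 5
P = lambda x: (x + 1) ** p - 1   # Lubin-Tate series for pi = p over Q_p
F = lambda x, y: x + y + x * y   # multiplicative formal group G_m

# P is an endomorphism of F: an exact polynomial identity, checked on integers
for x in range(-3, 4):
    for y in range(-3, 4):
        assert P(F(x, y)) == F(P(x), P(y))

# exp_F = exp(X) - 1 and log_F = log(1 + X) give [a]_P(x) = (1+x)^a - 1
a, x = 7, 0.01
lhs = math.exp(a * math.log(1 + x)) - 1
assert abs(lhs - ((1 + x) ** a - 1)) < 1e-12
```

The integer check suffices because two polynomials agreeing on infinitely many points are equal.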
Witt vectors
This section is concerned with the notion of ramified Witt vectors, generalising the classical theory of Witt vectors introduced by Witt in his original paper [26]. This notion was first developed independently by Ditters [3] and Drinfeld [4], and then by Hazewinkel [10] from a formal group approach in a more general setting. The reader is also referred to Section 5.1 of the current preprint [12] of Fontaine and Fargues.
Standard Witt vectors
We first briefly recall the construction of "standard" Witt vectors. Let p be a prime number, and let X 0 , X 1 , ... be a sequence of indeterminates. The original Witt polynomials are defined by W_n(X_0, ..., X_n) = X_0^{p^n} + pX_1^{p^{n-1}} + · · · + p^n X_n, for all n ≥ 0. The standard Witt vectors can be constructed as a functor W : A → W (A) from the category of commutative rings to itself. Precisely, if A is a commutative ring, we first define W (A) as the set of infinite sequences A Z ≥0 . The elements of W (A) are called Witt vectors, and to each Witt vector x = (a n ) n ∈ W (A), one can attach a sequence a (n) n ∈ A Z ≥0 whose coordinates are called the ghost components of x and are defined by the Witt polynomials : a (n) = W n (a 0 , ..., a n ), for all n ≥ 0.
The set W (A) is then uniquely endowed with two laws of composition that satisfy the axioms of a commutative ring, in such a way that the ghost map Γ A : (a n ) n ∈ W (A) → a (n) n ∈ A Z ≥0 becomes a ring homomorphism.
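Concretely, when A has no p-torsion the ghost map is injective, so the Witt-vector sum can be computed by adding ghost components and inverting the ghost map; the result being integral reflects the integrality of the Witt addition polynomials. A sketch for p = 3 and length-3 vectors over Z, in exact arithmetic (my illustration):

```python
from fractions import Fraction

p = 3

def ghost(a):
    """Ghost components w_n = sum_i p^i * a_i^(p^(n-i))."""
    return [sum(p ** i * a[i] ** (p ** (n - i)) for i in range(n + 1))
            for n in range(len(a))]

def unghost(w):
    """Invert the ghost map (valid over rings without p-torsion)."""
    a = []
    for n, wn in enumerate(w):
        rest = sum(p ** i * a[i] ** (p ** (n - i)) for i in range(n))
        a.append(Fraction(wn - rest, p ** n))
    return a

def witt_add(a, b):
    """Sum of Witt vectors, computed on the ghost side."""
    return unghost([x + y for x, y in zip(ghost(a), ghost(b))])

s = witt_add([1, 2, 3], [4, 5, 6])
# Witt addition polynomials have integer coefficients, so s is integral:
assert all(c.denominator == 1 for c in s)
assert ghost([int(c) for c in s]) == [x + y for x, y in
                                      zip(ghost([1, 2, 3]), ghost([4, 5, 6]))]
```

The same ghost-side trick computes products, which is exactly how the ring structure on W(A) is transferred from A^{Z≥0} in the construction above.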
Ramified Witt vectors
Let p be a prime number. Let K be a finite extension of Q p , with valuation ring O K and residue field k. We fix a uniformising parameter π of O K , and write k = F q with q = p f . Ramified Witt vectors over O K are constructed as a functor W O K ,π : A → W O K ,π (A) from the category of O K -algebras to itself, starting with generalised Witt-like polynomials and then proceeding along the lines of the construction of the usual Witt vectors. For convenience, as well as to collect some useful properties of the ramified Witt vectors, we shall briefly describe this functor.
In the case of ramified Witt vectors, the relevant polynomials are W_{n,O_K,π}(X_0, ..., X_n) = X_0^{q^n} + πX_1^{q^{n-1}} + · · · + π^n X_n, for all n ≥ 0. Let A be an O K -algebra. We first define W O K ,π (A) as the set A Z ≥0 of infinite sequences over A. We shall use the notation (a n ) n for elements in W O K ,π (A), and a n n for elements in A Z ≥0 .
If (a n ) n ∈ W O K ,π (A), we define its ghost components as a (n) = W n,O K ,π (a 0 , ..., a n ) for all n ≥ 0. The sequence a (0) , a (1) , ... is called the ghost vector of (a n ) n . This defines a map, that we shall denote by Γ π,O K ,A or simply by Γ A when the setup is explicit, called the ghost map of A : The following lemma is essential for what follows.
Proof. We proceed along the lines of ( [1], No 2, Sect. 1, Par. 1 & 2), replacing multiplication by p by multiplication by π, and replacing p by q in the exponents. The first assertion is a consequence of the equivalence Therefore, for every sequence u n n ∈ A Z ≥0 , there exists at most one element (a n ) n ∈ The second assertion is a consequence of the following relation that can easily be proved in the same way as Lemma 1 of ( [1], No 2, Sect. 1), since q ∈ πO K : In particular, for m = 1 and for any sequence (a n ) n ∈ W O K ,π (A), this implies that σ(W n,O K ,π (a 0 , ..., a n )) = W n,O K ,π (σ(a 0 ), ..., σ(a n )) ≡ W n,O K ,π (a q 0 , ..., a q n ) mod π n+1 A ≡ W n+1,O K ,π (a 0 , ..., a n+1 ) mod π n+1 A Therefore, according to (⋆), we can prove by iteration on n ≥ 0 that a sequence u n n is in the image of Γ A if and only if σ(u n ) ≡ u n+1 mod π n+1 A for all n.
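The image criterion at the end of this proof is easy to see in the special case K = Q p , π = p, σ = id on A = Z: there σ(a) = a ≡ a^p mod p by Fermat's little theorem, so a ghost vector (u_n) lies in the image of Γ precisely when u_{n+1} ≡ u_n mod p^{n+1}. A quick numeric check (my illustration of the unramified case only):

```python
p = 3

def ghost(a):
    """Standard ghost components w_n = sum_i p^i * a_i^(p^(n-i))."""
    return [sum(p ** i * a[i] ** (p ** (n - i)) for i in range(n + 1))
            for n in range(len(a))]

# Over A = Z with sigma = id (Fermat: a = a^p mod p), every ghost vector of
# an integral Witt vector satisfies u_{n+1} = u_n mod p^(n+1).
for a in ([1, 2, 3], [4, 5, 6], [-2, 7, 0]):
    u = ghost(a)
    for n in range(len(u) - 1):
        assert (u[n + 1] - u[n]) % p ** (n + 1) == 0
```

In the ramified setting the same congruences hold with p replaced by π and the exponents q^(n−i), exactly as in the lemma.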
In particular, the O K -algebra A = O K [(X n ) n , (Y n ) n ], endowed with the O K -endomorphism σ given by σ(X n ) = X q n and σ(Y n ) = Y q n , satisfies the above lemma and is such that Γ A is bijective because the relation σ(a) ≡ a q mod πA is satisfied for all a ∈ A. Therefore, the map Γ A transfers the structure of an O K -algebra to W O K ,π (A). Moreover, for all n ≥ 0 and all x ∈ O K , this defines polynomials S n and P n in O K [(X n ) n , (Y n ) n ]. Now, for any arbitrary O K -algebra A, we endow the set W O K ,π (A) with the laws of composition given by these polynomials; any homomorphism ϕ of O K -algebras induces the map (a n ) n → (ϕ(a n )) n , and this map commutes with the previous laws of composition.
Next, for a fixed O K -algebra A, we consider the O K -algebra B = O K [(X a ) a∈A ] which satisfies the assumptions of Lemma 1.8 with σ(X a ) = X q a . In particular, the ghost map Γ B induces a bijection between W O K ,π (B) and some subalgebra of B Z ≥0 that respects the previous laws of composition, which gives W O K ,π (B) the structure of an O K -algebra. Now, the surjective homomorphism ρ : X a ∈ B → a ∈ A yields a surjective map W O K ,π (ρ) : W O K ,π (B) → W O K ,π (A), which endows W O K ,π (A) with the structure of an O K -algebra as well, thereby proving the following: In particular, W O K ,π (A) is an O K -algebra with Witt vector (0, 0, . . .) as the zero element, and Witt vector (1, 0, 0, . . .) as the identity element.
An important remark is that, if π ′ is another uniformising parameter, there exists a unique isomorphism of functors, u π,π ′ , between W O K ,π and W O K ,π ′ , that commutes with the ghost maps (see Section 5.1 of [12]). In particular, this is the reason why elements of W O K ,π (A) are simply called ramified O K -Witt vectors.
Let A be an O K -algebra. There are three maps that play a crucial role in W O K ,π (A). The first is the Teichmüller lift [ ], which is multiplicative and given by [a] = (a, 0, 0, ...) for a ∈ A. The second is the Frobenius map F , defined uniquely by the polynomials F n introduced above. Precisely, as a consequence of Lemma 1.8, one can prove that this is the unique endomorphism of the O K -algebra W O K ,π (A) that satisfies Γ A (F (a 0 , a 1 , ...)) = a (1) , a (2) , ... .
As noticed in [12], these two maps do not depend on π, in the sense that they commute with the isomorphism u π,π ′ for any other uniformising element π ′ .
The last map is the Verschiebung map V π and it is additive : V π (a 0 , a 1 , ...) = (0, a 0 , a 1 , ...). Contrary to the others, this map depends on the choice of π. Precisely, V π ′ = (π ′ /π) V π . Note also the relation Γ A (V π (a 0 , a 1 , ...)) = 0, πa (0) , πa (1) , . . . . These maps satisfy the following properties, most of which can be proved after being translated to ghost components using Lemma 1.8 : i. The composed map F V π is the multiplication by π in W O K ,π (A), whereas the composed map V π F is the multiplication by (0, 1, 0, 0, ...). When A has π-torsion, these two operations correspond to each other.
iii. If l/k is a finite extension, then W O K ,π (l) is the ring of integers of the unique unramified extension L/K with residue field extension l/k.
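Property i above is transparent on ghost components: Γ(F(a)) is the shift (a^(1), a^(2), ...) while Γ(V_π(a)) = (0, πa^(0), πa^(1), ...), so Γ(F(V_π(a))) = π · Γ(a). A numeric check on the ghost side, in the unramified case π = p, q = p (my illustration):

```python
p = 3  # K = Q_p, pi = p, q = p

def ghost(a):
    """Ghost components w_n = sum_i p^i * a_i^(p^(n-i))."""
    return [sum(p ** i * a[i] ** (p ** (n - i)) for i in range(n + 1))
            for n in range(len(a))]

def verschiebung(a):
    """V_pi on components: prepend a zero."""
    return [0] + list(a)

a = [2, 1, 4]
ga = ghost(a)
gVa = ghost(verschiebung(a))

# Gamma(V(a)) = (0, p*a^(0), p*a^(1), ...)
assert gVa == [0] + [p * w for w in ga]
# Frobenius shifts ghost components, so Gamma(F(V(a))) = p * Gamma(a),
# i.e. F composed with V is multiplication by p:
assert gVa[1:] == [p * w for w in ga]
```

Since the ghost map is injective for p-torsion-free rings, the ghost-side identity proves FV_π = π there, and the general case follows by functoriality.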
Remark 1.11
In the language of Hazewinkel, these ramified Witt vectors are "untwisted". In his paper [10]
Link with standard Witt vectors.
Let A be an O K -algebra. When K = Q p and π = p, the ramified Witt Z p -algebra W Zp,p (A) is, as a ring, the ring of standard Witt vectors W (A). In particular, one can prove the following (see, for example, the end of Paragraph 5.1 in [12]) : Proposition 1.12 Let K 0 denote the maximal unramified subextension of K/Q p . If A is a perfect F q -algebra, there is a canonical isomorphism W O K ,π (A) ≅ O K ⊗ O K 0 W (A), under which the Teichmüller lifts correspond to each other, and the Frobenius map in W O K ,π (A) corresponds to id ⊗ F f , where f is the residue index of K/Q p and F denotes the standard Frobenius map in W (A).
The power series E Q P,n (X)
In this section we prove Theorem 1, the core result of the paper.
Specific Witt vectors
Recall that p is a prime number and K is a finite extension of Q p , with valuation ring, valuation ideal and residue field denoted by O K , P K and k respectively. We let q = card(k) and fix a uniformising parameter π of O K . In this section, we follow [19, §2.1], but in our more general setting, in order to construct some specific ramified O K -Witt vectors with useful properties.
We fix a formal power series P ∈ O K [[X]] such that This series defines an endomorphism of O K -algebras : The following lemma is a straightforward generalisation of the first statement of [1, Ch.IX, §1, Exercise 14] to ramified O K -Witt vectors : , the ghost vector of S P (h) is given by h(X), h(P (X)), h(P (P (X))), ...
i.e., such that the n-th ghost component of S P (h) is σ n P (h), for every n ≥ 0.
Moreover, the homomorphism of O K -algebras S P is also characterised by : F (S P (h)) = S P (h(P )) .
Proof. According to Lemma 1.8, since the O K -algebra A := O K [[X]] has no π-torsion and is endowed with the map σ P , its ghost map Γ A : W (A) → A Z ≥0 is injective, and each sequence h(X), h(P (X)), h(P (P (X))), ... is clearly in the image of Γ A . Therefore, for every formal power series h ∈ A, there is a unique Witt vector S P (h) ∈ W O K ,π (A) with ghost components h(X), h(P (X)), h(P (P (X))), ... , thereby proving the existence of the map S P . Finally, we prove that this is a homomorphism of O K -algebras after translating the properties of such a homomorphism in terms of ghost components, according to Lemma 1.8.
In particular, note that the ghost vector of S P,a (h) is h(a), h(P (a)), h(P (P (a))), ... . including the case r = +∞, corresponding to a 0 = 0. Given k ≥ 0, condition 2 is equivalent to v p (α k r ) = 0 and v p (α k i ) > 0 for all i < r, which is again equivalent to condition 2 applied to the Witt vector F k (S P,a (h)) according to the assertion iii of Proposition 1.10.
The following proposition is a key ingredient for what follows :
We write (β 0 , β 1 , ...) for the components of the Witt vector F k (S P,a (h)). Its ghost vector is h(P (k) (a)), h(P (k+1) (a)), ... , where P (i) denotes the polynomial P composed i times.
Since v p (P (a)) ≥ inf(qv p (a), v p (π)+v p (a)) ≥ v p (a) > 0, we have that v p (P (k) (a)) → +∞ as k → +∞. In particular, if k is big enough, then v p (h(P (i) (a))) = v p (a 0 ) for all i ≥ k. Therefore, for such value of k, the relations between the components of F k (S P,a (h)) and its ghost components give us : We thus see, by iteration on j ≥ 0, that v p (a 0 ) = rv p (π) if and only if v p (β j ) > 0 for all j < r and v p (β r ) = 0, thereby proving the assertion.
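The valuation estimate used in this proof, v(P(a)) ≥ min(q·v(a), v(π) + v(a)), forces v(P^(k)(a)) → +∞ for any starting point with v(a) > 0. A quick numeric illustration of the recursion, normalising v(π) = 1 (the starting valuation below is an arbitrary choice of mine):

```python
q = 9      # residue cardinality
v_pi = 1   # valuation of pi, normalised so v(pi) = 1

def next_val(v):
    """Lower bound for v(P(a)) given v = v(a) > 0, from the shape
    P(X) = pi*X + (deg >= 2 terms), P(X) = X^q mod pi."""
    return min(q * v, v_pi + v)

v = 0.25   # hypothetical starting valuation v(a) > 0
vals = [v]
for _ in range(20):
    v = next_val(v)
    vals.append(v)

assert all(b >= a for a, b in zip(vals, vals[1:]))  # non-decreasing
assert vals[-1] > 10                                # grows without bound
```

Once q·v exceeds v + v(π), each iteration adds v(π), so the valuations increase linearly and the iterates P^(k)(a) tend to 0 p-adically, exactly as the proof requires.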
Formal group exponentials
Again, K is a finite extension of Q p , with valuation ring O K and residue field k. We write card(k) = q for some power q of p. We fix a uniformising parameter π of O K . Let P ∈ F π be some Lubin-Tate series with respect to π, and denote by F P ∈ O K [[X, Y ]] the unique formal group which admits P as an endomorphism (see Subsection 1.1). By the functional equation lemma (assertion 3 of Proposition 1.6), there exists a formal power series g ∈ XO K [[X]] with the coefficient of X equal to 1, and such that f g (X) is given by Hazewinkel's functional equation construction applied to g, with uniformising parameter π and residue cardinality q. Moreover, its composition inverse f −1 g ∈ XK[[X]] equals the exponential exp F P of the formal group F P : Hazewinkel's functional equation construction applied to h, π and q then gives us f h (X) = X + X q /π + X q 2 /π 2 + · · ·, and according to assertion 2 of Theorem 1.5, we have ]. Therefore, we can make the following definition : ] be a Lubin-Tate series with respect to π. We define

Notation 2.4 Let F be a formal group which endows a set A with a group structure under the action + F . We use the following sigma notation to denote the composition of multiple elements of A using the group law + F : where j 0 ∈ Z ≥0 and j n ∈ Z ≥0 ∪ {∞}. Analogously to usual sums, the limits of infinite formal group sums might not always exist, and when they do, they might not be contained in A.
Let L be a finite extension of K. We provide the group XO L [[X]] with the "X-adic" topology. We then fix another uniformising parameter for O K , denoted by π ′ , and let Q ∈ O K [[X]] be a Lubin-Tate series with respect to π ′ . We fix a coherent set of roots {ω i } i>0 associated to Q(X), i.e., a sequence of elements of Q p such that ω 1 ≠ 0, Q(ω 1 ) = 0 and Q(ω i+1 ) = ω i .
We also fix n ≥ 1 and let L = K_{π′,n} be the nth Lubin-Tate extension of K with respect to π′. We note that O_L = O_K[ω_n]. Let h(X) = X. Since π and π′ generate the same ideal in O_K, the polynomial Q satisfies Identities 4. In particular, according to Lemma 1, we can make the following definition.

Definition 2.6 We define E^Q_{P,n}(X) = E_P(S_{Q,ω_n}(h), X).

As an interesting consequence of the properties of these power series, we can give an improvement to the known bound for the radius of convergence of the formal group exponential exp_{F_P}(X).
Proposition 2.7
The power series exp_{F_P}(X) converges on the disc {x ∈ C_p : v_p(x) > 1/(e_K(q − 1))}, where e_K = v_π(p) denotes the absolute ramification index of K, v_π being the discrete valuation on K such that v_π(π) = 1.
Proof. First, with h(X) = X, we know exp_{F_P}(ω_1 X) = E_P(S_{Q,ω_1}(h), X) = E^Q_{P,1}(X) is a formal power series with integral coefficients, so it converges at any element x ∈ C_p with strictly positive valuation. We know that v_p(ω_1) = 1/((q − 1)e_K), as ω_1 is a uniformising parameter for K_{π′,1}; therefore exp_{F_P}(x) converges whenever v_p(x) > 1/((q − 1)e_K).
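The value v_p(ω_1) = 1/((q − 1)e_K) used in this proof can be recovered from standard ramification theory (a brief reconstruction; the source states the value without computation). Since ω_1 is a uniformiser of K_{π′,1}, which is totally ramified of degree q − 1 over K:

```latex
% Writing e(\,\cdot\,) for absolute ramification indices over \mathbb{Q}_p,
% and using that a uniformiser of a field with absolute ramification
% index e has p-adic valuation 1/e:
\begin{aligned}
e(K_{\pi',1}/\mathbb{Q}_p) &= e(K_{\pi',1}/K)\; e(K/\mathbb{Q}_p) = (q-1)\, e_K, \\[2pt]
v_p(\omega_1) &= \frac{1}{e(K_{\pi',1}/\mathbb{Q}_p)} = \frac{1}{(q-1)\, e_K}.
\end{aligned}
```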
Remark 2.8
The only bound on the radius of convergence of exp_{F_P}(X) known to the authors was that given by Fröhlich in [7, Ch. IV, Thm 3], which he accredits to Serre. This bound was 1/(p − 1), so our bound improves this result for all K ≠ Q_p. For K = Q_p and P(X) = (X + 1)^p − 1, we obtain exp_{F_P}(X) = exp(X) − 1, and we see that this bound is optimal. We conjecture that this bound is in fact optimal for all choices of K and P.
One crucial argument in the proof of Theorem 1 will be provided by the following lemma :

Lemma 2.9 Let h(X) = X. For every λ = (λ_0, λ_1, ...) ∈ W_{O_K,π}(O_L), the following equality holds :

E_P(S_{Q,ω_n}(h)λ, X) = Σ^{F_P}_{j=0}^{n−1} E^Q_{P,n−j}(λ_j X^{q^j}) .

Proof. On the one hand, using Definition 2.5 and the multiplicativity of the ghost map for ramified Witt vectors, we get successively :

E_P(S_{Q,ω_n}(h)λ, X) = exp_{F_P}( ω_n λ^{(0)} X + ω_{n−1} λ^{(1)} X^q/π + ··· + ω_1 λ^{(n−1)} X^{q^{n−1}}/π^{n−1} ) .

On the other hand, using Definition 2.6, Definition 2.5 and Identities 2, we also get the same expression for Σ^{F_P}_{j=0}^{n−1} E^Q_{P,n−j}(λ_j X^{q^j}), thereby proving the desired equality.

Proof. According to Lemma 2.9, the series E_P(S_{Q,ω_n}(h)λ, X) is a finite sum with respect to the formal group law F_P. Therefore, it is over-convergent if and only if each term of the sum is over-convergent and has strictly positive valuation when evaluated at x ∈ C_p with v_p(x) ≥ 0. But this is a consequence of the property that E^Q_{P,n−j}(λ_j X^{q^j}) ∈ λ_j X^{q^j} O_K[[X]]. The last assertion is therefore trivial.
Proof of Theorem 1
We can now prove our main theorem on the properties of the formal power series E^Q_{P,n}(X) :

Proof of Theorem 1. Part 1. According to Identity 2 of Subsection 1.1, we may decompose the series E^Q_{P,n+1}(X) as a finite formal group sum, with h(X) = X and λ = S_{Q,ω_{n+1}}(f) for f(X) = Q(X)/X − π.
We have to prove that each term in this sum is over-convergent and has strictly positive valuation when evaluated at some x ∈ C p with v p (x) ≥ 0.
On the other hand, we know from the proof of Proposition 2.7 and Definition 2.6 that exp_{F_P}(ω_1 X) ∈ X O_K[ω_1][[X]]. Substituting X with (πω_{n+1}/ω_1)X and noting that v_p(π) > v_p(ω_1), we then see that the series exp_{F_P}(πω_{n+1} X) belongs to (πω_{n+1}/ω_1) X O_K[ω_{n+1}][[X]]. In particular, it is over-convergent by Proposition 2.7, and it has positive valuation when evaluated at any x with v_p(x) ≥ 0.
Part 2.
In Part 1, we saw that exp_{F_P}(πω_{n+1} X) ∈ (πω_{n+1}/ω_1) X O_K[ω_{n+1}][[X]]. We know that v_p(πω_{n+1}/ω_1) > v_p(ω_n); we shall combine this with expression (5) above and the following observation. Let O_L be the valuation ring of some finite extension L of K. Let ν = (ν_i)_i ∈ W_{O_K,π}(O_L) and suppose that, for some j, v_p(ν_i) ≥ v_p(ν_j) for all i. In Definition 2.3 we saw that E_P(X) ∈ X O_K[[X]]. Therefore, from Definition 2.5 and the fact that F_P(X, Y) has integral coefficients, we see that E_P(ν, X) ∈ ν_j X O_L[[X]].
In particular, let L = K(ω_m) = K_{π′,m} for some m with 0 < m ≤ n. For ν = S_{Q,ω_m}(X), we know from Proposition 2.2 that v_p(ν_i) > 0 for all i, so the preceding observation applies to ν.

Recall from Part 1 that λ = S_{Q,ω_{n+1}}(f) with f(X) = Q(X)/X − π. For all 1 ≤ j ≤ n we have v_p(λ_j ω_{n+1−j}) > v_p(ω_n), and from Lemma 2.9 we know how E_P(S_{Q,ω_{n+1}}(h)λ, X) decomposes as a formal group sum. It is therefore sufficient to prove the analogous property for the remaining term. By definition, the coefficient of X in the formal group exponential exp_{F_P}(X) is equal to 1. Therefore, if

E^Q_{P,n+1}(X) = exp_{F_P}( ω_{n+1} X + ω_n X^q/π + ··· + ω_1 X^{q^n}/π^n ) = Σ_{i≥1} a_i X^i ,

then a_1 = ω_{n+1}. By definition, we see that λ_0 = λ^{(0)} = f(ω_{n+1}) = ω_n/ω_{n+1} − π, which proves the result since v_p(ω_n) > v_p(ω_{n+1}).
Applications
We recall that p is a rational prime, K is a finite extension of Q p , π and π ′ are uniformising parameters of O K and q is the cardinality of the residue field of K. We let P ∈ F π (resp. Q ∈ F π ′ ) be Lubin-Tate series with respect to π (resp. π ′ ), and F P (X, Y ) (resp. F Q (X, Y )) be the unique formal group with coefficients in O K that admits P (resp. Q) as an endomorphism.
Lubin-Tate division points
Since the extensions K_{π,n} are finite and abelian over K for all choices of n and π, we have K_{π′,n} = K_{π,n} exactly when their norm groups are equal [25, Appendix, Theorem 9]. An exact description of the norm group N_{K_{π,n}/K}(K^×_{π,n}) was computed in [11, Proposition 5.16]. Namely, N_{K_{π,n}/K}(K^×_{π,n}) = ⟨π⟩ × (1 + P^n_K). This implies K_{π,n} = K_{π′,n} if and only if π ≡ π′ mod P^{n+1}_K.
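The final equivalence can be verified directly (a reconstruction; the source states it without proof). Writing π′ = uπ with u ∈ O_K^×, the two norm groups coincide exactly when u ∈ 1 + P^n_K:

```latex
% \langle\pi\rangle\times(1+\mathfrak{p}_K^n)
%   = \langle\pi'\rangle\times(1+\mathfrak{p}_K^n)
% \iff \pi' \in \langle\pi\rangle\times(1+\mathfrak{p}_K^n)
% \iff u \in 1+\mathfrak{p}_K^n.  With \pi' = u\pi:
\pi' - \pi = (u-1)\,\pi,
\qquad\text{so}\qquad
u \in 1+\mathfrak{p}_K^{\,n}
\;\Longleftrightarrow\; v_K(u-1)\ge n
\;\Longleftrightarrow\; \pi \equiv \pi' \bmod \mathfrak{p}_K^{\,n+1}.
```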
We now prove Proposition 1, which shows how values of the power series E Q P,n (X) give expressions for any nth Lubin-Tate division point with respect to P in terms of nth Lubin-Tate division points with respect to Q whenever π ≡ π ′ mod P n+1 K . Proof of Proposition 1. We assume that π ≡ π ′ mod P n+1 K . We proceed by induction on m, with 0 < m ≤ n.
From Identities 2 and 3 of Subsection 1.1, we obtain an identity of formal power series whose evaluation at 1 will give the result. From Proposition 2.7 we know that exp_{F_P}(X) converges on the disc {x ∈ C_p : v_p(x) > 1/(e_K(q − 1))}. Also, in the proof of Proposition 2.7 and according to Definition 2.6, we saw that exp_{F_P}(ω_1 X) ∈ X O_K[ω_1][[X]], and so exp_{F_P}(πω_1) will have positive valuation. We can therefore evaluate both the left and right hand sides above at 1 and get :

[π]_P(E^Q_{P,1}(1)) = exp_{F_P}(πω_1) −_{F_P} exp_{F_P}(πω_1) = 0 .
Now let 1 < m ≤ n and assume the result holds for 1, . . . , m − 1. Again, using Identities 2 and 3, we obtain an analogous identity of formal power series. From Theorem 1 Part 1 we know that if π ≡ π′ mod P^{n+1}_K, then E^Q_{P,m−1}(X^q) is over-convergent. From Proposition 2.7 we know that exp_{F_P}(X) converges on the disc {x ∈ C_p : v_p(x) > 1/(e_K(q − 1))}. Therefore, all the power series on the right hand side of this equation are over-convergent. Also, exp_{F_P}(πω_m) and E^Q_{P,m−1}(1) both have positive valuations, so the formal group operations on these values are well defined. We can therefore evaluate both the left and right hand sides at 1. By the induction hypothesis, we know that E^Q_{P,m−1}(1) is a primitive [π^{m−1}]_P-division point, and therefore E^Q_{P,m}(1) is a primitive [π^m]_P-division point. Part 1 now follows by induction.
Part 2 then follows directly from Part 1 and Equation 6 above.
Galois action on E Q P,n (1)
From Proposition 1 we know that K π,n = K(E Q P,n (1)). We will now give a complete description of how Gal(K π,n /K) acts on E Q P,n (1).
We know that E^Q_{P,n}(1) is a primitive nth Lubin-Tate division point with respect to P. From standard theory (see [11, §6-7], specifically Theorem 7.1) we know that the elements of Gal(K_{π,n}/K) are those automorphisms such that E^Q_{P,n}(1) → [u]_P(E^Q_{P,n}(1)), where u runs over a set of representatives of O^×_K/(1 + P^n_K), for example the representatives u = z_0 + z_1 π + ··· + z_{n−1} π^{n−1} with z_j^q = z_j for all j and z_0 ≠ 0. We now prove Proposition 2, which gives us a complete description of Gal(K_{π,n}/K) in terms of values of our power series.
Proof of Proposition 2. From the definition of E^Q_{P,n}(X) and Identity 3 of Subsection 1.1, we obtain an expression for [Σ_{i=0}^{n−1} z_i π^i]_P(E^Q_{P,n}(X)). Using Identity 2 and the observation that z_j = z_j^{q^i} for all i and j, we then see that this is equal to a formal group sum of the series E^Q_{P,n−j}((z_j X)^{q^j}) for 0 ≤ j ≤ n − 1. We now develop each term of this expression for all 1 ≤ j ≤ n − 1. Similarly to before, from Theorem 1 Part 1 we know that if π ≡ π′ mod P^{n+1}_K, then E^Q_{P,n−j}(X) is over-convergent for all 0 ≤ j ≤ n − 1 and therefore E^Q_{P,n−j}((z_j X)^{q^j}) is over-convergent for all 0 ≤ j ≤ n − 1. From Proposition 2.7 we know that exp_{F_P}(X) converges on the disc {x ∈ C_p : v_p(x) > 1/(e_K(q − 1))}. Therefore, all the power series in (9) are over-convergent. Also, all the power series in (9) have positive valuations when evaluated at 1, so we can use formal group operations on these values. Evaluating (9) at 1 we get E^Q_{P,n−j}(z_j). Combining this with (7), we then get

[Σ_{i=0}^{n−1} z_i π^i]_P(E^Q_{P,n}(1)) = E^Q_{P,n}(z_0) +_{F_P} E^Q_{P,n−1}(z_1) +_{F_P} ··· +_{F_P} E^Q_{P,1}(z_{n−1}) .
Local Galois module structure in weakly ramified extensions
As mentioned in the introduction, one of the main motivations for the generalisation of Dwork and Pulita's power series has come from recent progress with open questions on Galois module structure.
First, let E/F be a finite odd degree Galois extension of number fields, with Galois group G and rings of integers O_E and O_F. From Hilbert's formula for the valuation of the different D_{E/F} ([21], IV, §2, Prop. 4), we know that the valuation of D_{E/F} will be even at every prime ideal of O_E, and we can define the square-root of the inverse different A_{E/F} to be the unique fractional O_E-ideal such that A^2_{E/F} = D^{−1}_{E/F}. Erez has proved that A_{E/F} is locally free over O_F[G] if and only if E/F is at most weakly ramified, i.e., the second ramification groups are trivial at every prime [6]; however, the question of whether A_{E/F} is free over Z[G] still remains open. The tame case has been solved by Erez [6]. Now, it is possible for weakly ramified extensions to be wildly ramified, and here new obstructions arise. Using Fröhlich's classic Hom-description approach, see [8], it is possible to reduce this problem to carrying out calculations at a local level. The key to these local calculations is to have an explicit description of an integral normal basis generator for the square-root of the inverse different in weakly ramified extensions of local fields. It is also possible to reduce this problem further to considering only totally ramified extensions (see [6, §6] and [18, §3]).
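The parity claim at the start of this paragraph (that the valuation of the different is even at every prime when [E : F] is odd) follows from Hilbert's formula together with a simple observation; a sketch of the standard argument, not spelled out in the source:

```latex
% Hilbert's formula at a prime of E with ramification groups G_i:
%   v(\mathfrak{D}_{E/F}) = \sum_{i \ge 0} \bigl(\lvert G_i\rvert - 1\bigr).
% Each G_i is a subgroup of the decomposition group, hence of G, so
% |G_i| divides |G| = [E:F], which is odd. Thus every |G_i| is odd,
% every summand |G_i| - 1 is even, and v(\mathfrak{D}_{E/F}) is even,
% so \mathfrak{D}_{E/F}^{-1} admits a square root:
\mathfrak{A}_{E/F}^{\,2} \;=\; \mathfrak{D}_{E/F}^{-1}.
```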
Precisely, let M be the unique degree p extension of Q_p contained in Q_p(ζ_{p^2}); this extension is totally, weakly and wildly ramified. First, in [5], Erez proves that the element (1 + Tr_{Q_p(ζ_{p^2})/M}(ζ_{p^2}))/p is an integral normal basis generator for the square-root of the inverse different of M/Q_p. In [23], Vinatier uses Erez's basis to prove that A_{E/F} is free over Z[G] with F = Q whenever the decomposition group at every wild place is abelian. Then, in [16], Pickett uses the trace map and special values of Dwork's power series to generalise Erez's basis to degree p extensions of an unramified extension of Q_p that are contained in certain Lubin-Tate extensions. In [18], Pickett and Vinatier use Pickett's bases to prove that A_{E/F} is free over Z[G] under certain conditions on both the decomposition groups and base field.
Following these results, we shall give explicit descriptions of integral normal basis generators for the square-root of the inverse different in abelian totally, weakly and wildly ramified extensions of any finite extension of Q p .
Another application is concerned with the Galois module structure of the valuation ring over its associated order in extensions of local fields. Precisely, let L/K be a finite Galois extension of p-adic fields, with Galois group G. We denote by O_K ⊂ O_L the corresponding valuation rings, and by A_{L/K} the associated order of O_L in the group algebra K[G], that is A_{L/K} = {x ∈ K[G] : x · O_L ⊆ O_L}. This is an O_K-order of K[G], and the unique one over which O_L could be free as a module. When the extension L/K is at most tamely ramified, the equality A_{L/K} = O_K[G] holds, and O_L is A_{L/K}-free according to Noether's criterion. But when wild ramification is permitted, the structure of O_L as an A_{L/K}-module is much more difficult to determine (see, e.g., [22] for an exposition of recent progress in this topic). A p-adic version of Leopoldt's theorem asserts that the ring O_L is A_{L/K}-free whenever K = Q_p and G is abelian. However, the field Q_p is actually the only base field which satisfies this property. One extension of this result is due to Byott ([2], Cor. 4.3): if L/K is an abelian extension of p-adic fields, then O_L is free as a module over its associated order A_{L/K} whenever the extension L/K is totally and weakly ramified. We shall construct explicit generators of the valuation ring over its associated order in maximal abelian totally, weakly and wildly ramified extensions of K, using the description of such extensions that comes from Proposition 3.1 below.
Should archivists edit Wikipedia, and if so how?
Abstract Archival codes of ethics emphasize promoting archives and making them available to a wide audience. Literature highlights the importance of ‘participatory archives,’ often using Web 2.0 technologies. Using Archives 2.0 as a framework, this article suggests that archivists can move towards these goals by increasing their engagement with the online encyclopaedia Wikipedia. After identifying this wider context, the article evaluates in detail existing literature specifically on the subject of Wikipedia and archives. Particular scrutiny is placed on several case studies written by US university archivists who aimed to promote collections, and the issues they collectively highlight. To explore the question of how archives should engage with Wikipedia, this article uses a markedly different methodology to the case study literature. In-depth interviews were carried out with professional ‘Wikipedians in Residence’ who have worked with archivists, as well as archive professionals with relevant experience. An analysis section focuses on understanding Wikipedians’ perspectives on archives’ engagement with Wikipedia, comparing them with those of archive professionals, and ultimately emphasizing the importance of a collaborative way of working together. The conclusion makes recommendations based on this and draws out theoretical implications for Archives 2.0.
Introduction
Wikipedia, which defines itself as a 'multilingual, web-based, free-content encyclopedia project … based on a model of openly editable content,' 1 was launched in 2001, and archives have involved themselves with it in different ways for at least a decade. This article does not, therefore, examine an especially novel phenomenon. Originality comes from focusing on the overall picture of archives' engagement with Wikipedia; others have presented individual case studies, or mentioned the topic in wider discussions of 'Web 2.0.' The article draws parallels between archives' involvement with Wikipedia and overarching goals currently ascribed to the profession. These parallels are arguably underappreciated, perhaps because Wikipedia's quality and reliability have long been questioned, although support for its role in academia is growing. 2 After outlining why archivists should consider working with Wikipedia, the article addresses its other central question: how should they do it? The conclusion then draws out some theoretical implications. Part of the research strategy involved in-depth interviews with professional 'Wikipedians in Residence' and archive professionals with relevant experience. It is suggested here that archives' involvement with Wikipedia should be seen as an 'online collaborative initiative' that many archives ought to consider engaging with in a more than superficial manner. But care is taken not to overstate the benefits, or let the analysis share the arguably idealistic tone detectable in some of how the Wikipedia community portrays itself.
Literature review
This literature review is divided into two subsections. The first assesses the wider context for archives' involvement with Wikipedia, using the concept of 'Archives 2.0' as a framework to suggest that this involvement fits well with the current view of the archival mission. The second gathers together for critical reflection several case studies written by archivists who have worked on Wikipedia, comparing their ideas with the findings of the first subsection.
Context for archives working with Wikipedia
Even at first glance, archivists getting involved in Wikipedia seems very much in accordance with the principles which nominally guide their practice. The ICA's Access Principles state: Archivists have a professional responsibility to promote access to archives … They are continually alert to changing technologies of communication and use those that are available and practical to promote the knowledge of archives … They proactively provide access to the parts of their holdings that are of wide interest to the public through print publication, digitization, postings on the institution's website, or by cooperation with external publication projects. 7 The ARA Code of Ethics encourages similar efforts: 8. Members … should encourage the use of records to the greatest extent possible, consistent with institutional policies, the preservation of holdings, legal considerations, individual rights, and donor agreements. … 12. Members should promote the awareness, preservation understanding and use of the world's documentary heritage amongst stakeholders, cultural and information professionals and the public, and where appropriate, work co-operatively with the members of their own and other professions to do so. 8 These principles reflect what Cook has described as 'a collective shift during the past century from a juridical-administrative justification for archives grounded in concepts of the state, to a socio-cultural justification for archives grounded in wider public policy and public use. ' 9 'Archives 2.0, ' described as 'a framework for defining the ideas and attitudes shaping archival practice today and on which archivists can continue to build, ' is useful for developing this initial impression. 10 Theimer, who first outlined it, characterizes it as 'an approach to archival practice that promotes openness and flexibility'; archivists must 'be user-centred and embrace opportunities to use technology to share collections [and] interact with users. 
' 11 It is, she emphasizes, 'more than simply "Archives + Web 2.0, "' although Web 2.0 technologies, of which Wikipedia is a prominent example, are a significant factor in realizing the principles of Archives 2.0. 12 Palmer heralds it as 'a broader epistemological shift which concerns the very nature of the archive. ' 13 Working with Wikipedia relates to several aspects of Archives 2.0, and it is difficult to link it to one aspect in particular. This elusiveness (which can also be presented as flexibility) may partly explain why Wikipedia has received little attention in archival literature.
Openness
Theimer links a change in many archives' access policies -from restricting access based on researchers' qualifications to striving to ensure 'the broadest possible use' of collections -to the use of popular and widely accessible Web 2.0 'spaces' by archives. 14 Archivists adding links to Wikipedia and editing articles relating to their collections undoubtedly fits this trend. Indeed, one of the case studies Theimer cites involves an archivist working on Wikipedia.
But 'openness' should also be understood in the context of making content available without restrictions, or 'open access. ' Terras considers the significance of galleries, libraries, archives and museums (GLAMs) making their content available this way digitally, emphasizing that 'it is possible to digitize cultural heritage materials … and not actually make them any more accessible than they were previously'; maximizing accessibility involves enabling 'reuse, remixing and repurposing' of content 'for any purpose. ' 15 The National Archives (TNA) offers likeminded guidance: 'Archives should … Explore ways to build infrastructure to enable the re-use and re-interpretation of online collections, and expose the information to the widest possible audience. ' 16 This supports archives uploading content to Wikimedia Commons, where everything must have a 'free' licence such as CC0, CC-BY or the Open Government Licence, or be in the public domain. 17
Participation
In 2002, Duff and Harris called for descriptive systems with 'holes that allow in the voices of our users. ' 18 Cook more recently anticipated a new paradigm based on 'participatory archiving, ' involving 'virtual communities united by social media in cyberspace. ' 19 This echoes the Archives 2.0 trend of archivists assuming the role of 'facilitator, ' rather than 'gatekeeper. ' 20 It is argued here that Wikipedia can be a platform for archivists hoping to achieve this in some ways, but not in others.
Eveleigh's work on 'mapping the participatory landscape' provides clarification. 21 In this framework, an archivist editing Wikipedia articles with a view to promoting an archive collection is taking a 'mechanistic' approach. They are participating in the pre-existing Wikipedia community, with its own norms and objectives, and the archivist has no intention to build a separate community of interest specifically around their archive. This is quite different to more open-ended, or 'organic, ' Wikipedia edit-a-thons, which could provide the foundations of a 'collaborative community' of archive users. Another interesting 'frame' that Eveleigh identifies is the 'Archival Commons, ' which Anderson and Allen envisage as 'a peer-based framework for the assembly, arrangement, and representation' of archive collections. 22 Archivists interested in making this seemingly distant vision a reality could perhaps utilize Wikipedia as part of the process.
Eveleigh notes that crowdsourcing, one of the most prominent forms of participation analysed in the archival literature, is 'particularly associated with user involvement in archival description, transcription and metadata enhancement. ' 23 This contrasts with Wikipedia editing, which takes place in an environment separate from an archive's own website and catalogue. Some -such as Shilton and Srinivasan, whose aims include 're-envisioning archival principles of appraisal, arrangement, and description to actively incorporate participation from traditionally marginalized communities' 24 -therefore may not find Wikipedia engagement useful. For Huvila's 'radical user orientation' projects, it would likewise be unsuitable. 25 This reflects Flinn's comment that 'in themselves, new technologies do not guarantee' a more democratic archival practice that empowers the 'under-voiced. ' 26 But arguably most archivists who wish to adopt Archives 2.0 principles are unwilling or unable to take such a 'radical' approach. Wikipedia may be favoured because it is distinct from 'authoritative' finding aids created only by archivists. Some of the questions Yakel raises -'can archives remain trusted institutions if they share authority over the representation of records?, ' for example -are thus negated. 27 If this mindset leads archivists to choose Wikipedia over radical peer production that 'impinges on, changes, and recontextualizes the records, ' we might ask, as Yakel does, 'are we losing more in the long run?' 28 Nonetheless, the fact that Wikipedia editing does not necessarily align itself with the most radical characterisations of Archives 2.0 is not a reason to shun it altogether.
Outreach
Another aspect of Archives 2.0, connected to both openness and participation, is 'being where our users are' -actively trying to attract new users, not just making collections open to those who would seek them out anyway. As Palmer shows, archivists cannot assume that 'if we build it, they will come. ' 29 Eveleigh identifies 'outreach and engagement' as another form of participation, but one which has 'much in common with traditional audience engagement and marketing initiatives, extended in reach and ambition by means of the internet. ' 30 This has important ramifications when it comes to Wikipedia.
Studies examining archivists' use of various 'Web 2.0' sites suggest that this 'marketing' mindset predominates. Kriesberg's analysis of 1880 tweets from 34 archival institutions shows over half consist of 'administrative updates, links to institutional site content, and event promotion. ' 31 Griffin and Taylor find that 'social media investment by special collections does not result in a significant level of interaction between departments and external constituents, ' and that 'social media profiles tended to serve as one-way information conduits' used 'to advertise collections, events, and activities. ' 32 Neither of these studies examined Wikipedia engagement. As indicated in the introduction, Wikipedia should be seen as distinct from the category 'social media. ' But arguably concepts such as 'Web 2.0' and 'social media' are blurry and conflated in the minds of some archivists. Mason's article, which does consider Wikipedia despite being titled 'Outreach 2.0: Promoting Archives and Special Collections through Social Media, ' is perhaps indicative of this. 33 It is therefore understandable why archivists, attracted by Wikipedia's prominence and vast user base, might approach it with a view to enhancing outreach efforts. This context, and its disadvantages, informs the following analysis of Wikipedia-focussed case studies.
Wikipedia-focussed literature
Galloway and DellaCorte, authors of the most recent example, summarize in detail all earlier case studies written by archivists about their experience editing Wikipedia. 34 This is therefore not repeated here. Instead, the case studies are analysed together thematically. Although all are by American university archivists, they present a broad spectrum of approaches to Wikipedia, allowing wider conclusions to be drawn.
Motivations
All the authors describe their primary aims in similar terms. Szajewski hoped to 'raise the visibility of digitized historic sheet music assets. ' 35 For Lally, being represented 'where people engage in research has become increasingly important in a world of readily-accessible, distributed information. ' 36 Such motivations clearly align themselves to Eveleigh's 'outreach and engagement' frame.
Two case studies offer additional nuances. Combs explains: Our motivation appears at first blush to be a selfish one: exposure … However, our actions also align with Point VI in the Society of American Archivists Code of Ethics: "Archivists strive to promote open and equitable access to their services and the records in their care without discrimination or preferential treatment … Archivists recognize their responsibility to promote the use of records as a fundamental purpose of the keeping of archives. " 37 As suggested earlier, Wikipedia editing can support Archives 2.0's 'openness' agenda, as well as representing a means simply of promoting a service. Galloway and DellaCorte, meanwhile, aimed 'to meet Wikipedia's mission to improve the quality of the articles by adding content where necessary and substantially editing an article if called for. ' 38 As the most recent authors to focus on this topic, they were able to learn from several previous case studies, and hoped to be 'good Wikipedia citizens' in order to avoid difficulties their predecessors faced. But there is also the sense that the 'mission' of Wikipedia, making knowledge available, somewhat aligns with the goals of an archive service, particularly one situated within the education sector. A 2012 blog post by Jo Pugh highlighting TNA's work on Wikipedia shows similar sentiment: We could pretend that inaccuracies and omissions on Wikipedia regarding our records are none of our business. But if we're interested in working with, particularly young, audiences (and we are) we need to accept that Wikipedia is where they will be getting their information from. And that means it is our business and that if we can improve those articles, we should. 39
What archivists have done
There is more variation in the nature of each archive's involvement with Wikipedia. The authors of the earliest case studies describe a process of determining well-represented subjects in their archival collections, finding relevant Wikipedia articles and adding links from them to their catalogues' fonds-level descriptions. 40 Combs' approach was similar, although as well as adding links she reports reviewing articles 'for accuracy based on our research and holdings, ' and creating new articles in order to link them to collections. 41 Szajewski also concentrated on the 'External Links' section of articles, although he limited the scope of his project to a single collection of digitized sheet music, linking relevant articles about particular songs, songwriters or lyricists to individual digitized items rather than entire collections. 42 Others, such as Elder et al., describe a different tack: Originally, UHDS [the University of Houston Libraries Digital Services Department] intended to contribute exclusively to the External Links section of existing Wikipedia articles … As the UHDS pilot progressed, however, the emphasis changed … UHDS staff found it was much more effective to match digital items with Wikipedia articles and to share those items in Wikimedia Commons rather than (or in addition to) the External Links section of the articles. 43 This reflects Terras' emphasis on the value of GLAMs making content openly available. Galloway and DellaCorte, meanwhile, hired student 'interns' to edit articles, thereby helping them better appreciate their collections and how to use them. 44 No case studies explore archives' experiences with WIRs or edit-a-thons, but the literature still highlights significant flexibility in how archives can work with Wikipedia.
Challenges
That said, several case studies highlight problems encountered by archivists whose approach could be described as 'marketing. ' This stems from Wikipedia's rule 'Wikipedia is not a soapbox or means of promotion. ' 45 Combs quotes excerpts from a lengthy debate amongst Wikipedia editors over allowing any links to archival finding aids on external websites. Some automatically considered them spam, and quickly removed them. 46 The Wikipedia community subsequently amended its 'Conflict of Interest' policy to be more accommodating to archivists adding links to 'uniquely relevant' collections they manage. 47 But 'relevance' is subjective, and Combs suggests that occasional removal of links 'simply must be accepted'; if archivists 'respect the spirit of the project by contributing content, not just links, ' they will build up 'street cred, ' and their contributions will endure. 48 Elder et al. place similar emphasis on this 'balancing act. ' 49 This is one of the key issues that archivists editing Wikipedia ought to be aware of. It has not been fully foregrounded before, however, and will be explored in more detail in the analysis section.
Galloway and DellaCorte highlight other challenges, connected with Wikipedia's policies on original research and notability. 50 There is insufficient space to examine these further here, but overall the case studies should not discourage any archivist from editing Wikipedia.
Judging success
All the case studies conclude on a highly encouraging note. Combs suggests that 'in some ways, Wikipedia is the best thing to happen to manuscript collections in years.' 51 Several authors note that editing is free and enables archives to reach millions of potential users, including non-English speaking audiences, as links and content added to the English Wikipedia often reappear in Wikipedias in other languages. 52 The importance of uploading archival content to Wikimedia Commons to the objectives of the University of Houston's Digital Library is evidenced by the decision to 'formalize' the pilot project and incorporate it into policies. 53 Galloway and DellaCorte likewise report that archives staff will be trained 'so that working in Wikipedia becomes a normal routine when processing collections and writing finding aids.' 54 All the case study authors use server statistics to link their work on Wikipedia to significant increases in visits to their own online catalogues. Such indicators are undoubtedly valuable. Theimer declares in a blog post: 'if we want to compete effectively and be taken seriously we have to roll up our sleeves and gather this kind of data.' 55 But the case studies' approach to evaluation contrasts with that shown in other studies of Web 2.0 and archives. Krause and Yakel, for example, describe a 'multimethodological approach' combining quantitative data collected through Web analytics with various qualitative methods. 56 The latter are underrepresented in the archival literature on Wikipedia, and the interviews conducted in the course of researching this article go some way towards addressing this. Smith-Yoshimura and Shein suggest that 'the success of a Web 2.0 page cannot be measured by numbers alone … The quality of a user's experience or contribution is a significant consideration, yet one that evades metrics.' 57
One additional aspect, largely absent from the case studies but highlighted by Yakel, is the capability archivists have to improve the 'authority' of Wikipedia. In her analysis of Combs' experience she argues: By understanding the social system in Wikipedia, Combs and her colleagues at Syracuse University identified a means of working within Wikipedia's social norms to integrate information about their collections into Wikipedia. As a result, the Syracuse University archivists have become a part of the social web in a way that the archives-generated sites have not. 58 Wikipedia engagement can place archivists in the aforementioned role of 'facilitator' rather than 'gatekeeper,' sharing their authority rather than seeking to defend it. This contrasts with sites which allow user contributions but with archivists fully in control. 59 The validity and implications of this idea are considered further in the analysis section.
Summary
The overall impression that arises from the literature is that Wikipedia engagement can be valuable for archivists seeking to achieve a wide range of goals, although not without disadvantages and obstacles. It is in keeping with the widely supported spirit of Archives 2.0. This helps to answer the question 'should archivists edit Wikipedia?' Some of its potential seems not to have been realized by the archive sector, arguably because it is not always approached in the most effective way. Although sometimes alluding to it, the literature does not fully explore this general question of how archivists should edit Wikipedia. This is therefore examined in greater detail in the analysis section, drawing upon data acquired through in-depth interviews. But first it is necessary to explain briefly the interview process.
Methodology
A total of six interviews were conducted. Four participants were WIRs (anonymized as W1, W2, W3 and W4), interviewed using Skype. Between them they have worked with public and private archive services of various sizes, and both as full-time employees and as freelance consultants. One archivist (A1) who has edited Wikipedia in a professional capacity was interviewed in person. Finally, emailed responses to questions were received from a project officer at an archive service (A2) who has worked with a WIR. Although this sample size may seem small, the numbers of WIRs who have worked with archives and, seemingly, archive professionals who have worked on Wikipedia are themselves not vast. The analysis and conclusion show that broad themes and theories can still be drawn out from the interviews.
Although familiar with archives and archivists, the WIRs do not have archival backgrounds and are closely connected to the Wikimedia community. Their responses enable examination and expansion of ideas drawn out from the literature. The interviews with the two archive professionals provided something approximate to a 'control, ' as the data gathered from interviewing them is compared with the impressions that arise from the case study literature written by other archivists, as well as with the responses from the WIRs. Although empirical details on what each participant did are valuable, their reflections on the spirit in which they did it are more so given this article's theoretical focus. This, along with the reasons already mentioned, justifies choosing a methodology of in-depth interviewing.
Participants could read a list of potential questions beforehand, although the interviews were only semi-structured. The questions varied somewhat between interviews, particularly because some were more applicable to the WIRs' experiences than those of the archive professionals, and vice versa. Detailed summaries of the interviews, not reproduced here due to insufficient space, were produced and approved by each participant. Before their interview, each participant signed a consent form addressing research ethics, which included a clause noting that respondents would not be individually identifiable.
Analysis
Two key findings emerged clearly from the interview data. Firstly, all interviewees, both WIRs and archive professionals, are either fairly or very enthusiastic about the possibilities Wikipedia can offer many archives. This was perhaps to be expected, and there was some disagreement as to whether every single archive has something to contribute, linked to the notability issue noted in the literature review. But no interviewee felt strongly that particular types of archives were more or less suited to working with Wikipedia. Secondly, the interview data attested to the 'flexibility' which archives can find in Wikipedia, also mentioned in the literature review. Participants variously focussed on the value of uploading content to Wikimedia, edit-a-thons, and encouraging individual users to edit articles based on their research in the archives.
More attention is now given to the second question this article addresses: how should archivists engage with Wikipedia? The interview data suggest their approach should be informed by clearer appreciation of the goals of the Wikipedia community. The fundamental message that emerges is the importance both archivists and members of the Wikimedia community ought to place on ensuring that they collaborate in a balanced manner. As the literature review shows, some case studies hint at this, but there is deeper examination here. The conclusion then draws out the wider significance and implications of this analysis.
Understanding the Wikipedians' position
The WIRs were unsurprisingly lukewarm towards the idea that archivists might only add links to articles, leaving the rest of their content unimproved. None noted that archivists or other GLAM professionals do this less now than in the past, despite the case study literature showing that it was first questioned several years ago. A key reason why WIRs are not enthused by the prospect of archivists focusing solely on adding links to Wikipedia stems from a concern that Wikipedia is missing out on the often highly specialized knowledge of archives' staff and users, and on their unique content. Wikipedians are frustrated by what they see as a 'lack of imagination,' as W3 put it, not by fundamental opposition towards the notion that archives may be receiving some promotional benefit. W3 emphasized that although Wikipedia is not a 'guide to the web,' some links - including those added by archivists - clearly ought to be on Wikipedia. When dealing with archivists, W2 highlights increased visibility as an incentive for them. W1 sees 'tension' inherent in the role of the WIR, who represents Wikipedia's interests but may also be a paid employee of an institution with an archive, and so is obliged to do work that is in some sense 'promotional.'
Reshaping how archivists view Wikipedia
This clearer understanding of what the Wikipedia community wants from archivists should shape their approach. It reemphasizes that Wikipedia is distinct from social media, challenging the notion that it can be 'used' by archivists like Facebook or Twitter - platforms which authors such as Crymble call social media 'tools.' 60 The literature review noted that an 'External Links'-focused approach was more likely to encounter difficulties, and W3 likewise claims archivists who choose it '[set] themselves up for a bad experience.' It is notable, then, that A1 calls Wikipedia a 'useful tool' which could be 'harnessed' to produce a 'quick win' in the form of adding links to A1's cataloguing. This partly resulted from time constraints: A1 had several other duties, such as cataloguing and blogging, alongside Wikipedia editing. Many archivists who may be interested in Wikipedia doubtless find themselves in a similar position. Though this issue may be unavoidable, and was highlighted repeatedly in the interviews with the WIRs, Wikipedia does have the advantage that archivists can work on it as and when time permits rather than at regular intervals, with little discernible impact on the outcome. Here again it differs from social media, which often carries an expectation from users of frequent new content. Archivists could also save time and engage volunteers by encouraging them or other users to edit.
The inclination of archivists such as A1 to 'use' Wikipedia as a 'tool' seems to relate to questions surrounding their 'authority. ' A1's concern, which archivists' managers and users may well share, about others 'tampering' with 'definitive' content created by archivists is valid. But it may also represent a missed opportunity. Archivists could, instead, see their contributions as 'lending the authority of [their] institution to Wikipedia, ' 61 thereby making it more authoritative. This requires some culture change, which can be slow; Duff and Haskell suggest that archivists have an 'aversion to decentralized control' which 'may derive from [their] primary duty and obligation to preserve the authenticity and integrity of records … traditionally linked to bureaucratic control and neutral custodianship of records. ' 62 But the interview data suggest that this change makes deeper engagement with Wikipedia seem more worthwhile. A2, for example, comments: Some people still cling to the idea that knowledge can only be external and fixed and held by experts … I think that's what causes tension, as you create or edit something on Wikipedia and immediately relinquish control of it. I think our input as a sector would greatly enhance the accuracy of the resource though. Community created knowledge doesn't at all replace expertise … but if the two work together the whole is better than the sum of its parts.
Arguably, the Wikipedia community's interest in archives lies more in accessing their unique content and databases, and less in archivists' 'expertise' or 'authority. ' Some WIRs did not highlight the latter prominently when interviewed. But W3 suggests in the context of edit-a-thons that although archivists 'do not have the final say on what gets created, ' they fulfil the role of 'experts on the subject people have come to learn about. ' Their status as the 'curator of knowledge becomes, if anything, more important. ' This can be located in the wider context of the archival profession rethinking the importance of authority. The idea of archivists as sole experts is diminishing, but 'expertise' still has a role to play in a more participatory future, Flinn argues: Whilst the accuracy and reliability of collaborative projects like Wikipedia remain controversial and debated, many studies suggest that it is not wildly inaccurate compared with traditional sources or, perhaps more importantly, that it necessarily excludes expert views or scholarship. In fact the significant point seems to be that most entries in these models are created by experts in their particular field and then maintained by larger numbers of less specialist gardeners. For those who advocate the democratising potential of these developments, prospects for change and transformation perhaps lie not so much with the idea of the 'crowd' but that the experts are drawn from a much broader, less elitist notion of where knowledge and expertise can be found. 63 Based on the interview data, archivists should acknowledge that they are not the only holders of 'authority' on Wikipedia, without feeling that their 'authority' counts for nothing. The Wikipedia community values it because they link it to knowledge.
Determining how archivists should engage with Wikipedia
The key notion emerging here is 'balance.' W1 emphasized the need for a relationship that 'benefits both sides' based on 'give and take' or 'quid pro quo,' where archives receive greater visibility in exchange for improving Wikipedia's comprehensiveness and quality. There should be 'collaboration,' instead of archives viewing Wikipedia as a 'tool' - or indeed vice versa. Rather than holding back, archivists should consider deeper involvement if time and managerial support allow. Although adding links back to their catalogues can be worthwhile for both archivists and Wikipedia, editing articles, uploading content to Wikimedia Commons or investigating Wikidata may be more valuable still. W1 suggests that adding more information on a subject, along with related digitized content, can 'entice' more people to be interested in it and to follow links from Wikipedia articles to relevant collections. This resembles Terras' 'virtuous circle.' 64 The main body of a Wikipedia article can highlight what a relevant archival collection comprises, not just its existence. The benefits of more substantial editing are not, therefore, limited only to an increased likelihood that links at the end of articles are not removed. The interviews demonstrate this more forcefully than the case study literature.
W3 argues more directly that engaging with Wikipedia can involve 'marketing' archives in more than just the sense of 'boosting their visibility': As the sector responds to competitive pressures, these institutions have become more canny about marketing, but they've interpreted 'marketing' in a way that owes a lot to the commercial ethos: colourful brochures, consistent branding, social media, and so on. This kind of marketing can put the logo and stock photos out front, while leaving the experts in a back room. A very different view of marketing is that the uniqueness of the collections and subject expertise should be used to market these institutions. Co-operation with Wikipedia is about inverting that conception of marketing: whether it's by sharing content and metadata, or putting on a public event, we put 'back room' subject experts in the spotlight.
This message should encourage archivists to feel that by collaborating with Wikipedia in a more in-depth, equal manner, they -and Wikipedia -stand to benefit more than if they engage wholly on their own terms. It is worth noting again how W3 distinguishes Wikipedia engagement from social media, which is linked to a 'commercial' approach to marketing. Archivists should not see deeper engagement as a necessary evil, something they do only to build up what Combs calls 'street cred. ' 65 Nor should, as W1 notes, either side feel they are acting out of 'goodwill' or 'charity. ' There are undoubtedly similarities between the overarching aims of Wikipedia and many archive services, understood on the broad level of making knowledge and content available to as wide an audience as possible. But other mutual benefits are more tangible. Putting digitized material onto Wikimedia Commons is not giving it away for 'nothing' if the archive desires increased visibility in return. W4 emphasizes the importance of archives 'branding' the content they upload, so users who click on it clearly see where it has come from. W2 recalls that the results of making images freely available for reuse 'staggered' some archivists. Both they and others who saw the impact became 'excited about sharing more content' because it brings exposure, and sought to embed it in everyday policy. Focusing on the idea of collaboration based on equal exchange helps to demonstrate the value for archives in making content freely available under an open licence, as well as engaging in editing.
Conclusion
This article presents quantitative and qualitative evidence strongly indicating that it is worthwhile for many, if not all, archivists to engage with Wikipedia, emphasizing how this engagement aligns with archival ethics and Archives 2.0 and is supported by case studies, server statistics and the interview data. At the same time, following Flinn's advice mentioned in the introduction, the article takes a critical approach to the literature and interview data in order not to overstate the possibilities Wikipedia can offer archives.
The analysis section examined how archivists should go about realizing this potential. This is an area that the sector is less certain about. Comparison of just two interviews, those with A1 and A2, indicates that professionals' attitudes towards Wikipedia are varied. Based on analysis of all the interview data, this study has firstly suggested that archivists need awareness of the aims of the Wikipedia community, centred around making knowledge and content freely available. It helps them appreciate why, although most Wikipedians accept that archivists engage with Wikimedia projects with some promotional or marketing intentions, viewing Wikipedia as a promotional 'tool, ' akin to social media, can be problematic. It also shows that the Wikipedia community's knowledge focus means that archivists' authority, while not pre-eminent, is still important. The section then highlighted the importance of engaging collaboratively rather than 'using' Wikipedia.
These are significant conclusions in themselves. Wikipedia may or may not become part of many archivists' everyday activity, but it is unlikely to disappear. This article will therefore now sketch out some practical recommendations, before ending with broader theoretical conclusions drawn from the analysis.
Recommendations
A key issue that emerged while researching this article is the lack of general guidance from within the archive sector showing archivists why and how they might consider engaging with Wikipedia, particularly in a UK context. This is compounded by the fact that the majority of the profession will likely never work with a WIR, even on a short-term basis. Although overall numbers of WIRs are increasing, the interview with W4 in particular highlighted that the focus of Wikimedia UK is moving towards working with larger institutions, particularly in the education sector. There may therefore be less direct or prolonged contact between archive professionals and WIRs. Duff and Haskell, who investigate various collaborative projects and methods to facilitate access, but not Wikipedia, argue: The profession needs to develop a statement of principles to help archives flourish in this new terrain. Archivists need guidance on making records available online, using records in games, promoting remixes of their records, and participating in crowdsourcing projects. The Code of Ethics provides broad guidelines on issues of access and privacy, but the profession needs to ponder and debate these concerns and develop guidelines to assist archivists who make records available online, especially those made available for modification and augmentation using social media technologies or in crowdsourcing or gamification projects … A statement of principles should also provide guidance on how to deal with the content contributed by participants. What value are archivists ascribing to these voices, and how should archival systems manage them? 66 This applies to working with Wikipedia too. ARA or TNA could provide direction in this area, perhaps by highlighting a range of UK case studies, as the Museums Association has done. 67 More generally, archivists could note how other GLAM institutions have responded to the opportunities and challenges presented by working with Wikipedia. 
This could be done through partnerships and networking, as well as reading literature written for a nonarchival audience. Recently, for example, a WIR contributed a piece to a museum-focused journal arguing that 'issues of democratization, voice, and authority in museums can be addressed through Wikipedia's community, process, and its potential as a model for a new Open Authority in museums. ' 68 Although archivists appreciate the differences between the professions, literature such as this with a more widely applicable message and the interviews conducted for this study suggest that these differences may be less important when it comes to working with Wikipedia.
Future research could investigate qualitatively what archive users think about archives engaging with Wikipedia. It could be that 'regulars' would suggest the required effort could better be expended on more detailed cataloguing, but newer or more casual users might be more appreciative. There should also be further investigation into how Wikipedia might modify its policies or outreach activities in order to foster a more productive relationship with a wider range of archives. More practically, archives could trial events which combine Wikipedia edit-a-thons and direct editing of or commenting on archival finding aids. This could enable both archive services and Wikipedia to obtain substantial content. Finally, it is important that archives do not limit their attention just to Wikipedia. The WIRs interviewed were keen to discuss Wikidata in particular, and its potential to enable catalogues from different institutions to become increasingly interlinked, not just gathered together as is currently done on other sites.
Wider conclusions
The broader implications of this study with respect to Archives 2.0 may not be immediately obvious, but some important ideas do present themselves. On the one hand, Archives 2.0 might be criticized as a disorganized assortment of concepts, and by suggesting that archivists' engagement with Wikipedia embodies several of these different strands it may seem that this article prevents a unified conclusion from emerging. Perhaps neither really helps one to understand the other on a deeper level, beyond the impression that they seem to cohere to a significant extent. On the other hand, by foregrounding the importance of collaboration it does become possible to see a wider significance of archivists' work on Wikipedia. This goes some way towards filling the gap in the previous literature on the subject, which as has been shown is significantly more practical than theoretical in its focus.
Although Wikipedia is a decentralized, participatory project, this study suggests that archivists' engagement with it cannot - at least in many cases - be located at the most radical end of the Archives 2.0 scale. It does not involve user-produced cataloguing; users are not necessarily directly involved at all. Archivists' 'authority' is still somewhat preserved, and there is still emphasis on promotion, although not in the sense of 'using' a 'tool.' As Palmer and Stevenson note: Web 2.0 gives us the opportunity to think not just about promoting our collections through online and traditional finding aids but also about working to present them more imaginatively - to engage in dialogue and build communities around archives. 69 It is possible to suggest, then, that collaboration is desirable because it enables Archives 2.0-type goals to be met more effectively, but is not inherently transformative in every case. Collaboration is a concept that is not separate from promotion; it can support it. 'Outreach' does not have to be achieved only through 'tools.' The very fact that Wikipedia engagement casts the archivist in the role of collaborator in a larger project that they do not control is significant. Compared to a participatory archives website, where archivists oversee users who can only tag or comment on finding aids that the archivists create and manage, the relationship between archivists who edit Wikipedia and the wider Wikipedia community is significantly more equal. Archivists and users who work together on Wikipedia, either in edit-a-thons or more informally, thereby both move closer to the status of 'peer collaborators' - the status that Palmer argues users ought to have, 'intrinsic to the process of meaning-making, rather than outside interlopers (however welcome) who must be kept at arm's length from the authoritative record.' 70
The fact that archivists themselves become 'peer collaborators' might demonstrate to their users that they are not purely interested in an imbalanced model of participation done on the archivists' own terms. It could also be a step towards a more radical participatory archive, getting archivists to change their view of this concept, which has until now perhaps appeared more prominently in literature than reality. Collaboration is arguably a key concept for unifying the strands of Archives 2.0, and analysing archives' engagement with Wikipedia helps to demonstrate how this is so.
Taking this view may be significant not only in the context of today and Archives 2.0 but also looking further ahead towards the semantic web, or Web 3.0, and a possible 'Archives 3.0. ' Wikipedia is likely to remain potentially valuable for archives for the foreseeable future, and perhaps increasingly so, but future developments must be closely monitored.
Thyrotoxic Periodic Paralysis: A Case Report and Discussion of Clinical and Imaging Features
Thyrotoxic Periodic Paralysis (TPP) is a rare manifestation of thyrotoxicosis, resulting in periodic episodes of acute onset muscle weakness in the setting of hypokalemia. The thyrotoxic form of Hypokalemic Periodic Paralysis (HPP) is less studied than the more well-known familial form due to fewer reported cases and smaller prevalence. This case study presents a 30-year-old African American male with multiple episodes of acute lower extremity muscle weakness, tachycardia, and a history of heat intolerance. Abnormal findings on thyroid ultrasound coupled with increased thyroid related immunoglobulins led to a diagnosis of TPP related to exacerbation of newly-found Graves’ disease. The case study will further discuss the importance of imaging in assessing the etiology of TPP with review of relevant literature.
INTRODUCTION
Thyrotoxic Periodic Paralysis (TPP) is a rare manifestation of thyrotoxicosis. It is one of the less common subsets of hypokalemic periodic paralysis (HPP), the more common subset being Familial Hypokalemic Periodic Paralysis (FHPP). HPP can be extremely dangerous for patients, since episodes may lead to fatal muscle weakness via involvement of respiratory muscles, or to life-threatening cardiac arrhythmias in the setting of hypokalemia. 1 The prevalence of TPP in North America is stated to be 0.1-0.2%. 2 The majority of cases are found in the Asian population, and cases are extremely rare in African American patients. Thus, there is significant value in studying individual cases of TPP in the African American population, as there is sparse data on such a rare occurrence of an already rare disease.
Thyroid stimulating immunoglobulin: 409 (High) (reference: <140% [baseline])
Due to abnormalities of the thyroid panel, a thyroid ultrasound was requested. A diffusely heterogeneous and enlarged thyroid gland was visualized on grayscale ultrasound, with markedly increased vascularity on color Doppler assessment.
Clinical Presentation:
TPP is a rare condition presenting with periodic episodes of muscle weakness in the setting of hypokalemia and hyperthyroidism. The muscle weakness usually starts in the lower extremities, and laboratory values show elevated free triiodothyronine or thyroxine levels. TPP can also present with nonspecific cramping, muscle pain, stiffness, and decreased deep tendon reflexes, with intermittent resolution of symptoms. 3 Concurrent symptoms of hyperthyroidism, such as palpitations, weight loss, tremor, and heat intolerance, may also be present. 4 Episodes may be precipitated by meals high in carbohydrates, alcohol, trauma, and stress. 3,5 Our patient's presentation aligns with the typical presentation of TPP described above, demonstrating periodic episodes of weakness (acute myopathy) in the lower extremities, signs of hyperthyroidism, and elevated free T3 and T4, all within the setting of hypokalemia. Ultrasound imaging confirmed clinical findings of hyperthyroidism via thyroid storm, noting an enlarged, heterogeneous thyroid gland with diffusely increased vascularity on color Doppler assessment. Graves' disease workup confirmed autoimmune activity behind the hyperthyroidism.
Demographics:
TPP typically presents in Asian males in the 2nd to 5th decades of life and has a prevalence of 10% in Asian countries. The prevalence of TPP is about 0.1-0.2% in non-Asian populations. 5,6 In Japanese people, the DRw8 subtype of the HLA gene complex increases their risk of developing periodic paralysis. 7 However, the same subtype found in the Caucasian population increases their risk for Graves' disease, but not necessarily periodic paralysis. 8 TPP has been studied primarily in Asian populations, but much less so in Caucasian or African American patients. 9 The first cases of African American patients presenting with TPP in the literature were published in 1961 and 1984, the latter of whom did not possess the above HLA subtype. 10,11 A report in 1994 describes 4 cases reported within a 13-year period at the researchers' institution, implying that cases of TPP in the African American population may be underreported. 12 Since then, more cases of TPP in the African American population have been described, presenting similarly to the clinical picture of TPP described above, 13-18 with an additional case reported from native Africa as well. 19
Differential Diagnoses:
FHPP and Myasthenic Syndromes constitute other potential muscular disorders that must be ruled out.
FHPP is an autosomal dominant genetic disorder characterized by mutations in ion channels of the skeletal muscle sarcolemma (such as the alpha1 subunit of the dihydropyridine-sensitive calcium channel and the sodium channel SCN4A). 20 The age of onset of FHPP is typically within the first two decades and the frequency of attacks diminishes with age, meaning FHPP is likely to present at a younger age than TPP. 21 Myasthenic syndromes, such as myasthenia gravis and Lambert-Eaton myasthenic syndrome, are autoimmune disorders that affect the neuromuscular junction, causing progressive muscle weakness in various muscle groups such as the proximal limbs. 22 Myasthenia gravis is becoming increasingly common in the elderly, with men more likely to be diagnosed after the age of 50, and Lambert-Eaton myasthenic syndrome usually presents over 40 years of age. 22,23 Both of these myasthenic syndromes tend to present later in life than TPP.
Pathophysiology:
The pathophysiology behind TPP remains unclear. The Na+/K+-ATPase pump is responsible for maintaining the transmembrane difference of potassium in cells. Both insulin and beta-adrenergic catecholamines are known to activate the Na+/K+-ATPase pump. 24 Hyperthyroidism causes an increase in beta-adrenergic activity, which may activate the pump, driving potassium into cells and causing hypokalemia in the blood. 25 Thyroid hormones may also act directly on Na+/K+-ATPase pumps, increasing their activity and function. 26 Meals high in carbohydrates may also precipitate episodes of TPP due to an increase in insulin, which activates Na+/K+-ATPase pumps. 27
Imaging Features:
Ultrasound is the primary imaging modality used in assessing the thyroid. In a retrospective study of 13 patients diagnosed with TPP who had undergone sonographic imaging, the most common finding on ultrasound was an enlarged thyroid gland, as demonstrated in our patient. Another sonographic finding in the majority of the patients was hyperechoic regions and decreased echogenicity of the thyroid; in the group with decreased echogenicity, the patients' thyroids were either diffusely hypoechoic or had multiple areas of hypoechogenicity. These imaging features closely resemble Graves' disease, which the majority of the patients had already been diagnosed with. 28 The thyroid will also be hyperemic on Doppler ultrasound, 29 as was seen in our presented case. The imaging appearance of TPP can vary depending on whether there is an underlying hyperthyroid disease. For example, in a recent case study, a male patient with no relevant past medical history was diagnosed with TPP; a thyroid ultrasound of this patient showed multiple bilateral thyroid nodules. 30 TPP can be seen in various etiologies of hyperthyroidism, which can make its sonographic appearance quite variable.
Treatment:
Treatment in the setting of an acute attack includes general supportive care, potassium repletion, and nonselective beta blockade in the case of tachycardia or palpitations 3. The patient should be monitored for rebound hyperkalemia after resolution of an acute attack. The treatment regimen should also work towards achieving a euthyroid state, with options including antithyroid medications, radioactive iodine, or thyroidectomy in a patient with Graves' disease 3. Definitive treatment of hyperthyroidism is noted to result in complete resolution of paralysis 13. In the present case, the patient was prescribed propranolol and methimazole for prevention of further hyperthyroid attacks. The patient is currently doing well and follows up with endocrinology in the outpatient setting for management of his Graves' disease.
CONCLUSION
TPP is a rare manifestation of thyrotoxicosis that can be diagnosed through investigation of the history of present illness, physical examination, laboratory values, and imaging. A history of multiple episodes of muscle weakness and concurrent tachycardia in a setting of hypokalemia and hyperthyroidism should raise a healthcare provider's index of suspicion for TPP. The use of grayscale ultrasound with color Doppler can further confirm the etiology behind the hyperthyroidism, which may further support the diagnosis of TPP. Repletion of low potassium is shown to resolve an acute episode of TPP, and prevention of further hyperthyroid attacks via pharmacotherapy (such as propranolol and methimazole), radioactive iodine, or thyroidectomy can prevent future episodes of paralysis secondary to TPP.
Figure 1
Figure 1 Transverse grayscale image of the thyroid gland at the level of the isthmus demonstrates diffuse enlargement and parenchymal heterogeneity of the entire gland, with overall increased echogenicity.
Isolation, Serotyping and Molecular Detection of Bovine FMD Virus from Outbreak Cases in Aba'ala District of Afar Region
Background: Among the top-listed economically important transboundary livestock diseases of cattle, foot and mouth disease (FMD) is the leading bottleneck to livestock production and productivity in Ethiopia. On the basis of active FMDV outbreak cases, a cross-sectional study was undertaken from January 2019 to March 2020 to collect samples for isolation, serotyping and molecular detection of FMDV in the study district. A purposive sampling method was applied to select the study area because of active FMD outbreak case reports during the study period. In total, 27 FMD-suspected clinical samples were collected from the clinically affected study population during field outbreaks. Of the 27 samples, 18 were inoculated on cultured baby hamster kidney (BHK-21) monolayer cells, and all 27 samples were tested using conventional RT-PCR and sets of specific universal primers. Finally, the PCR products were visualized under UV illumination and imaged with a gel documentation system. Results: The results revealed that, of the 18 clinical samples subjected to virus isolation, 72.2% (n=13) of the cultures exhibited FMDV-induced cytopathic effect (CPE), and the identified serotype was SAT-2 FMD virus. Of the 27 clinical samples tested by conventional RT-PCR, only 12 were found to be FMDV positive with the universal primers. Conclusions: Our findings indicate that FMDV is prevalent in the study area and that FMDV serotype SAT-2 was the cause of the disease outbreaks in the study area.
Hence, region-wise regular FMD outbreak investigations, further phylogenetic analysis and vaccine matching of field isolates should be carried out to obtain in-depth data on the FMDV serotypes and topotypes involved in the Afar region of Ethiopia for effective vaccine development and control of the disease.
Livestock diseases remain the most vital impediments to the development of the sector, reducing production and productivity and ultimately affecting regional, national and international trade in live animals and animal products [4]. According to [6], livestock diseases cause major economic losses to peasant farmers and pastoralists in Ethiopia, amounting to hundreds of millions of birr every year.
Annual mortality rates attributed to these livestock diseases are estimated at approximately 9-12% for cattle herds, and 15% and 13% for sheep and goat flocks, respectively [7]. Among the livestock diseases hindering production and productivity of the sector, foot and mouth disease (FMD) is the best-known economically important transboundary viral disease of cattle in Ethiopia [6,8]. FMD is a highly contagious and infectious disease of all cloven-hoofed animals. It is the world's most important cattle disease, responsible for vast worldwide losses in livestock production and creating national and global trade impediments for livestock and livestock products [4,9].
Foot and mouth disease (FMD) is caused by FMD virus (FMDV), which belongs to the genus Aphthovirus within the family Picornaviridae. Clinically, FMD is manifested by fever, loss of appetite, salivation, vesicular eruptions in the mucosa of the mouth, the skin of the inter-digital spaces, the coronary bands of the feet and the teats, and sudden death of young stock [10,11]. According to the World Organisation for Animal Health (OIE), FMD ranks first among globally important notifiable infectious livestock diseases, because exports of infected livestock and livestock products can easily cause outbreaks in countries previously free from FMD, and because of the transboundary nature of the disease [12].
Pastoralists are highly impacted by the direct and indirect effects of FMD, as their livelihoods depend directly on livestock production [6,13]. Previous studies on FMD serostatus have indicated the presence of the disease in various areas of the country, with seroprevalence ranging from 8.18% to 44.2% in different parts of the nation [14,15].
Foot and mouth disease virus (FMDV) has seven immunologically, antigenically and genetically distinct serotypes (O, A, C, Asia 1, Southern African Territories (SAT)-1, SAT-2 and SAT-3) that cause clinically indistinguishable disease [16]. Within these serotypes, over 65 topotypes, genetic lineages and strains have also been identified using biochemical and immunological tests. Currently, five FMDV serotypes (O, A, C, SAT-1 and SAT-2) have been identified and documented in Ethiopia [3,4,17,18]. The serotypes also differ in their geographical distribution across the world as well as across many regions of the country [3,19]. According to the retrospective study of [20], FMDV serotypes O, A, SAT-2 and SAT-1 were identified as the causative serotypes for outbreak cases occurring during the study period 2007-2012.
While O was the dominant serotype, SAT-2 was the serotype showing a rise in relative frequency of occurrence [3,20]. Prompt investigation and detection of FMDV serotypes during outbreaks is highly crucial to determine the origin of infection and to use the appropriate vaccine [21].
Despite the occurrence of several FMDV outbreaks in the Afar region, there is not even a single documented report on the current serostatus of the disease or the serotypes circulating in the region in general and the study area in particular. To develop effective control measures for FMD, determining its serostatus, isolating the virus and identifying the serotype(s) circulating in a particular area are of paramount importance. Moreover, detailed knowledge of the specific serotypes circulating in a particular area is essential for companies to target each specific FMDV serotype for effective vaccine development, instead of relying on production of a trivalent vaccine for serotypes O, A and SAT-2. Therefore, the present study was intended for the isolation and serotype identification of FMDV from outbreak cases in Aba'ala district of the Afar region, Ethiopia, from January 2019 to March 2020.
Clinical Examination of FMD Outbreaks
During this outbreak investigation and sample collection, the characteristic clinical signs of FMD in the study population were profuse salivation, lameness, vesicle formation in the oral cavity and interdigital vesicles. Suggestive FMD lesions in the mouth included erosions and sores on the upper and lower pad area and the tongue, while foot lesions consisted of erosions in the inter-digital space. Outbreak-affected cattle were reluctant to move, lagged behind the healthy animals and refused to feed.
FMD Virus Isolation
The current cell culture-based FMD virus isolation results revealed that, of the 18 suspected clinical samples processed and cultured, 72.2% (n=13) of the representative samples exhibited morphological alterations (FMDV cytopathic effect, CPE) on BHK-21 cells; the other clinical samples (n=9) were not inoculated on the cells because they were collected from the same outbreak in the study areas. Of the 13 FMD clinical samples that showed cytopathic effect on BHK-21 cells, 33.3% (n=6) were epithelial tissues, 22.2% (n=4) vesicular fluids and 16.7% (n=3) swab samples. These FMD-positive clinical samples were characterized primarily by rapid sloughing of the BHK-21 monolayer cells; the sloughed cells were roughly round, swollen and occurred singly (Figure 1). As time progressed, there was sloughing of cells and monolayer detachment from the wall of the cell culture flask; some cells were severely damaged within 72 h after inoculation, with eventual cell death indicating the presence of virus. Samples that did not show CPE did not induce morphological changes in the cells. Complete sloughing of the monolayer was regularly observed 48 h after inoculation. All CPE-positive samples were identified as serotype SAT-2. Therefore, the current findings reveal that serotype SAT-2 could be the cause of the disease detected in the outbreak areas, as depicted in Table 2. All confirmed outbreak samples collected from both kebeles of the same district were identified as serotype SAT-2. In conclusion, the outbreaks in the study district (subunits) occurred due to FMD serotype SAT-2.
Molecular Detection of FMD Virus
The RNA extracted from all 27 FMD-suspected clinical samples was tested using a conventional RT-PCR method and specific primers [22]. This conventional RT-PCR was employed to amplify and detect the genetic material of the virus in the collected clinical samples [23]. All samples were amplified and detected using FMDV universal primers (FMDV7F/FMDV7R). Of the 27 samples tested, only 12 clinical samples were found to be FMDV positive (DNA bands around 328 bp on gel electrophoresis), as indicated in Figure 2.
Discussion
The present study is the first of its kind on foot and mouth disease isolation, molecular detection and identification of the serotypes involved in the Afar region. Foot and mouth disease (FMD) is responsible for frequent outbreaks and causes significant economic devastation in the region in particular and the nation in general. The disease is characterized by the development of typical FMD lesions around the mouth and on the feet, and by unexpected losses of newborn calves [10,11]. Epidemics of the disease are a growing livestock problem in all corners of the country. The disease has become one of the most important bottlenecks for livestock keepers as a result of significant reductions in production and productivity, as well as possible trade restrictions, in the Afar region in particular and Ethiopia in general [24][25][26].
In this study, of the 18 suspected clinical samples subjected to BHK-21 cell line adaptation, 72.2% (n=13) of the field samples showed FMDV-induced cytopathic effect (CPE). The affected cells appeared rounded, swollen and clumped in cell culture, as demonstrated in Figure 1. The present finding is consistent with previous research works [27][28][29], in which CPE-positive samples on BHK-21 cells were described by fast sloughing of the cells. Our result is also in line with the finding of [30], in which infected cells showed rounding and sloughing as well as monolayer detachment from the wall of the cell culture flask. Other authors, such as [31], also described that FMDV isolated from clinical samples and inoculated on BHK-21 cell culture produces a specific CPE within 24-48 hours post infection, characterized by rounding of cells, distortion of the monolayer and cell detachment. The remaining samples did not show CPE; this could be due to loss of virus viability during shipping from the sample collection site to the laboratory.
Ethiopia is one of the FMD-endemic countries in the Horn of Africa, with almost five serotypes prevailing so far. Cumulative research reports in Ethiopia on FMDV serotypes reveal that disease occurrence is due to any of serotypes O, A, C, SAT-2 and SAT-1, as diagnosed by clinical, serological, virological and molecular techniques during the period 1981-2018 [3,20]. In our results, serotyping of FMDV disclosed that the identified serotype SAT-2 (100%) was circulating in Aba'ala district of the Afar region. This suggests that serotype SAT-2 is widely prevalent and is the foremost serotype responsible for frequent outbreaks in the study area of the Afar region, Ethiopia. In support of these findings, studies conducted by [3,13,26,32] reported serotype SAT-2 virus in the Borana pastoral area, Benishangul-Gumuz, Gambella, Addis Ababa and Adama, respectively. Moreover, serotype SAT-2 was previously reported from many sub-Saharan African countries [33,34], describing the endemicity of this serotype in those countries. Studies conducted in Uganda indicated that SAT-2 was the most prevalent serotype accountable for disease occurrence [35]. FMDV serotyping results from Chad in 2016 showed SAT-2 as the dominant serotype during the study period, followed by serotype O [36]. Furthermore, the OIE report on FMD occurrences in Africa from 2000-2010 disclosed that SAT-2 was escalating as an important serotype (41%), followed by serotype O (23%) [37]. Multi-topotype SAT-2 endemicity and outbreaks beyond the sub-Saharan terrestrial range have also been observed in countries south of the Sahara desert, and in Northern Africa and the Middle East, including Libya, Egypt, the Palestinian Autonomous Territories (PAT) and Bahrain [38].
In this study, of the 27 clinical samples tested using conventional RT-PCR for the presence of FMDV genetic material, only 44.4% (n=12) were found to be positive. Of these 12 positive samples, bovine epithelial tissues accounted for 22.2% (n=6) and had the lower Ct values, which could indicate higher concentrations of virus in these samples. Our results also showed that bovine vesicular fluid samples accounted for 14.8% (n=4) and swab samples for 7.4% (n=2). This finding confirms the presence of more FMD viral RNA in epithelial tissue samples compared with vesicular fluid and swab samples, which is supported by the OIE [18] description of epithelial tissues as the ideal samples for virus detection. The reduced number of RT-PCR-positive samples and the small number of samples yielding infectious virus might be due to virus degradation during transfer from the field. The presence of serotype SAT-2 in the study district could result from uncontrolled cross-border movement of animals in pursuit of feed and water, as well as free trade in livestock among neighboring regions and countries, since SAT-2 is widespread in various neighboring countries [39][40][41].
Conclusions
The present findings indicate that FMDV is prevalent in the study area of the Afar region, as confirmed by clinical, serological, virological and molecular techniques, and that serotype SAT-2 was the cause of the disease outbreaks in the study area. The occurrence of this disease is a major problem for the development of the livestock industry, as it causes enormous worldwide losses to the livestock sector as well as severe impacts on export earnings from national and international trade, thereby threatening the livelihoods of livestock keepers in particular and the income sources of the country in general. Of the serotypes identified in our country, the prevailing serotype identified here was SAT-2, which causes frequent outbreaks in the study area of the Afar region, Ethiopia. Region-wise regular FMD outbreak investigations, to obtain fuller information about the serotypes and topotypes involved in the region, and vaccine matching studies of field isolates, to evaluate vaccine protection potential, are of paramount importance for effective vaccine development.
Description of the study areas
This research was implemented in Aba'ala district (Erkudi and Hidmo kebeles), located in the Afar region, Ethiopia. The study district was purposively selected because of active FMD outbreak case reports during the study period, January 2019 to March 2020. Afar regional state shares international borders with Eritrea in the north-east and Djibouti in the east. The region is characterized by arid and semi-arid weather conditions with low and unpredictable rainfall. The altitude of the region ranges from 120 m below sea level in the Danakil depression to 1500 m above sea level. The majority of the pastoral community depends mainly on livestock production for its livelihood. According to APADB (2006), there are approximately 1.9 million cattle in the region, with 90% of the animals managed under a pastoral production system and the remaining 10% under an agro-pastoral system. The study area is situated in the northern part of the region, in northeastern Ethiopia, lying between 13°15′ and 13°30′ latitude and 39°39′ and 39°55′ longitude. Maximum temperatures in Afar range from 25°C in the rainy season to 48°C in the dry season [42].
Study Population
The study population consisted of cattle that had experienced FMDV outbreak cases and manifested typical FMD clinical signs in Aba'ala district during the study period. The study animals were carefully inspected for the manifestation of distinguishing clinical signs of FMD, such as vesicular lesions around the oral cavity and on the feet, salivation, lameness, anorexia and a rise in temperature [43]. Animals of all ages and sexes reared by agro-pastoralists in the outbreak-affected kebeles (subunits) of the study district were sampled.
Study Design
Prior to field-level investigation and sample collection, district- and kebele-level animal health experts were instructed to report to the regional veterinary laboratory centers when an FMD outbreak occurred. Based on active FMD outbreak case reports and active outbreak findings, a cross-sectional study design was used to collect tissue samples. Study populations clinically suspected of FMD were physically inspected, and those manifesting typical signs were sampled to collect biopsy samples intended for viral isolation, molecular detection and serotype identification.
Sampling techniques and Sample Size Determination
A purposive sampling method was employed to select the FMD-affected study district, cattle herds and sampled animals, owing to the occurrence of active FMD case reports during the study period, January 2019 to March 2020. Accordingly, within the study areas (subunits), animals with clear signs and symptoms suspected to be infected with FMDV, as indicated in Figure 3, were selected and sampled.
From all outbreak-affected kebeles, 27 swab, epithelial tissue and vesicular fluid samples were collected from clinically FMD-suspected animals with active outbreak lesions for cell culture-based virus isolation, molecular detection and identification of the serotypes circulating in the study district.
Sample Collection and Transportation
Representative bovine epithelial tissue, vesicular fluid and swab samples were aseptically collected with tissue forceps from unruptured and freshly ruptured vesicles of clinically affected animals during the field outbreak, in order to isolate the circulating viruses responsible for the occurrence of the disease. The collected FMD-suspected samples were kept in sampling bottles containing virus transport medium composed of equal volumes of 0.04 M phosphate-buffered saline (PBS) and 50% glycerol, enriched with antibiotics and antifungals according to the protocol recommended by the OIE [18]. Collected clinical specimens were transported to the laboratory, stored at -20 °C, and shipped under cold chain to the National Veterinary Institute (NVI) for virus isolation, molecular detection and serotype identification.
FMD Virus Isolation
The collected samples were processed and cultured on BHK-21 cell monolayers with three subsequent passages as follows. About 1 gram of each tissue was washed three times with sterile phosphate-buffered saline (PBS) containing antibiotics and antifungal agents in a Petri dish. The washed tissues were transferred to a sterile mortar, cut into pieces with scissors and minced with a scalpel blade. The minced tissues were then ground and homogenized in sterile sand with a sterile pestle and mortar. Nine ml of PBS was added to the homogenized tissues and mixed well, and a small amount of medium containing five percent antibiotics (penicillin, streptomycin and amphotericin B solution) was added so that the final volume was ten times that of the epithelial tissue, producing a ten percent suspension [18]. All procedures were conducted in a biosafety level 2 cabinet. About 1 ml of filtered tissue suspension was inoculated onto confluent cultured baby hamster kidney (BHK-21) monolayer cells grown in 25 cm2 tissue culture flasks and incubated at 37°C for 1 h for adsorption of the virus. Then, 8 ml of maintenance medium (2% MEM) was added to the cell cultures, which were incubated at 37°C and 5% CO2 in a humidified incubator. The appearance of virus-induced cytopathic effect (CPE) was observed daily under an inverted microscope. Inoculated cell lines were harvested when 85-100% CPE was observed. Samples whose cells did not show CPE within 72 h post-infection by the third passage were considered virus negative [18,44]. For samples that showed typical CPE (positive cases), the clinical tissue materials were used for serotype identification of the virus involved in the outbreak cases using an antigen detection sandwich ELISA [45].
Serotyping of FMD Virus Isolates
FMD serotyping was executed using both an antigen detection sandwich ELISA and sets of serotype-specific primers for testing FMD virus and identifying the serotypes responsible for the outbreak cases. The sandwich ELISA was performed with particular combinations of anti-FMDV monoclonal antibodies (MAbs), used as coating and conjugated antibodies. The kit was developed for detection and serotyping of FMDV O, A, C, SAT1 and SAT2. A pan-FMDV test, detecting any isolate of the O, A, C, Asia 1 and SAT serotypes, was also included in the kit to complement the specific serotyping of FMDV. The test was implemented according to the manufacturer's instructions and the OIE [46]. A total of 13 positive sample suspensions that exhibited FMDV cytopathic effect (CPE) on BHK-21 cells were tested for serotype identification using the sandwich ELISA on a 96-well microplate.
About 25 μl of dilution buffer was dispensed into all wells of the test plate; then 25 μl of samples previously diluted in ELISA buffer, together with ready-to-use controls, was dispensed into the appropriate wells of the test plate pre-coated with recombinant FMD viral antibody. One positive control for each of FMD types O, A, SAT1 and SAT2 and negative controls were included in each plate. The plates were sealed with the enclosed plate sealer and incubated for 1 h at room temperature (20-25°C). After incubation, all fluid on the plates was discarded and residual fluid was removed. Then 200 μl of washing solution was added and incubated for 3 min at room temperature, after which the wells were emptied and the washing repeated twice (three washing cycles in total). All residual fluid was removed by tapping on clean absorbent paper, and 50 μl of conjugate A was added to columns 1 to 8 and the same volume of conjugate B to columns 9 to 12. Plates were covered and incubated at room temperature for 1 hour. After incubation, 50 μl of substrate was added to all wells, and plates were covered and left at room temperature for 20 minutes in the dark. The reaction was stopped by adding 50 μl of stop solution (sulfuric acid, H2SO4). Immediately after stopping, the optical density (OD) of each well was read at a 450 nm wavelength using a microplate reader.
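As a minimal illustration of the final readout step, the per-well OD values at 450 nm can be converted into serotype calls with a cutoff rule. The decision rule, margin and OD numbers below are hypothetical; the actual cutoffs come from the kit manufacturer's instructions, which are not reproduced here.

```python
# Hypothetical positivity rule for a serotyping sandwich ELISA readout.
# The real cutoff is defined by the kit manufacturer; a fixed margin of
# 0.1 OD units above the negative control is assumed purely for illustration.
def call_serotypes(sample_od, negative_control_od, margin=0.1):
    """Return serotypes whose OD (450 nm) exceeds the negative control by `margin`."""
    cutoff = negative_control_od + margin
    return [serotype for serotype, od in sample_od.items() if od >= cutoff]

# Illustrative OD readings for one sample against each serotype-specific MAb
well_od = {"O": 0.05, "A": 0.07, "SAT1": 0.06, "SAT2": 1.42}
print(call_serotypes(well_od, negative_control_od=0.04))  # ['SAT2']
```

A sample reacting only with the SAT2-specific antibody pair, as in this sketch, corresponds to the SAT-2 calls reported for all CPE-positive isolates in this study.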
Molecular Detection of FMD Virus
The presence of FMD viral genetic material in all 27 collected field samples was tested using conventional RT-PCR and specific primers that amplify the viral protein 1 (VP1) region of FMDV, with RNA prepared using the RNeasy Mini Kit following the manufacturer's instructions (Qiagen, USA).
FMD Viral RNA Extraction
Total RNA was extracted from the collected FMD-suspected clinical sample suspensions using a Qiagen RNA extraction kit following the manufacturer's instructions [47]. Briefly, 140 microliters of sample suspension was added to 560 μl of buffer AVL-carrier RNA in a microcentrifuge tube, vortexed for 15 seconds to mix, and incubated at room temperature (25°C) for 10 min. The tubes were briefly centrifuged to remove drops from the inside of the lid. Then, 560 μl of 70% ethanol was added to the sample and mixed by pulse vortexing for 15 seconds, followed by centrifugation to remove drops from the inside of the lid. Next, 630 μl of the solution was applied to a QIAamp Mini spin column in a 2 ml collection tube and centrifuged at 12,500 rpm for 1 min. The filtrate was discarded and the column was placed in a fresh 2 ml collection tube. Then, 500 μl of buffer AW2 was added and centrifuged at 12,500 rpm for 3 min, and the filtrate was discarded. Finally, 65 μl of buffer AVE was added to the column, equilibrated at room temperature for 1 min, and centrifuged at 12,500 rpm for 1 min. Using reverse transcription polymerase chain reaction (RT-PCR) and the specific primer set FMDV7-forward (FMDV7F) and FMDV7-reverse (FMDV7R), as depicted in Table 1, the extracted RNA samples were tested for the presence of FMDV.
Agarose Gel Electrophoresis
The PCR products were analyzed on a prepared 1.5% agarose gel stained with 4 μl of GelRed; 10 μl of PCR product mixed with loading dye was loaded into each well, together with 10 μl of a molecular marker (100 bp plus ladder). Electrophoresis was run for one hour at 120 V. The DNA bands were then visualized by UV illumination, sized against the ladder in base pairs (bp), and documented with a gel documentation system.
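Sizing a band against the ladder, as described above, is commonly done by fitting log10(size in bp) of the ladder bands against their migration distances and interpolating the unknown band. The ladder migration distances below are invented for illustration, not measurements from this study's gel.

```python
import numpy as np

# Ladder band sizes (bp) and migration distances (mm) -- illustrative values.
# log10(fragment size) is approximately linear in migration distance.
ladder_bp = np.array([100, 200, 300, 400, 500])
ladder_mm = np.array([48.0, 39.5, 34.6, 31.1, 28.4])

def estimate_bp(distance_mm):
    """Estimate fragment size from migration distance via a log-linear fit."""
    slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)
    return 10 ** (slope * distance_mm + intercept)

# A band migrating 33.5 mm on this hypothetical gel lands near the
# expected ~328 bp FMDV amplicon reported in the Results section.
print(round(estimate_bp(33.5)))  # → 328
```

In practice, gel documentation software performs this interpolation automatically; the sketch only makes the underlying calculation explicit.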
Data Management and Statistical Analysis
Data generated from laboratory investigations were recorded and coded using Microsoft Excel spreadsheet and analyzed using STATA version 14.
Declarations
Ethics approval and consent to participate
Written ethical approval and consent for this study were obtained from the Samara University College of Veterinary Medicine Animal Research Ethics and Review Committee (Ref/AREC020/2019). All efforts were made to minimize animal suffering during sample collection. Informed oral consent was obtained from all animal owners who participated in the study to take samples from their cattle and for further research use of the samples. These written and oral consents were documented.
Consent for publication
Not applicable.
Availability of data and materials
The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests in the publication of this paper.
Linkage between commuting time and sickness absence in the context of China: transmission channels and heterogeneous effects
Background
Most employees in urban China experience a heavy commuting burden, which has become an urgent issue to be solved in the process of the new urbanization strategy.
However, not only is research on the relationship between commuting and sickness absence still scant in China, but no study has analyzed the mechanism linking commuting time and sickness absence. To address these gaps, this study investigates the commuting-absence effect as well as the potential transmission channels between them.
Methods
Using a unique dataset, the 2013 Matched Employer-Employee Survey (CMEES) in China, we apply a zero-inflated negative binomial model to explore the nexus between commuting and sickness absence. To examine the potential mechanism linking commuting and sickness absence in the context of China, the effects of commuting on health-related outcomes and work effort are estimated using OLS and logit regressions.
Results
The empirical results reveal that commuting has a positive effect on sickness absence, and this result is robust across several specifications. More importantly, the commuting-absence effect is mainly transmitted through the health-related outcomes of employees, whereas we find no clear evidence supporting shirking behaviors. Additionally, the heterogeneous commuting-absence effects are differentiated across Hukou status, gender, transportation modes, city scale and types of enterprises.
Conclusion
A longer commute lowers productivity through sickness absence: the longer the journey from home to work, the more sickness absence employees take, which is consistent with previous studies. The commuting-absence effect is mainly transmitted through employees' health-related outcomes.
Additionally, the impact of commuting time on sickness absence differs across Hukou status, gender, transportation modes, city scale and enterprise types.
Background
Commuting is an indispensable part of daily life for millions of people worldwide [1]. A large body of previous studies has focused on the relationship between commuting and employees' labor market performance [2][3][4]. Given the negative externalities of sickness absence, the nexus between commuting and sickness absence has also attracted extensive attention [2,5].
Based on the theory of new welfare economics, a long commute is a time-consuming activity associated with poor psychological and physical health outcomes [6][7][8]. Longer commuting times may crowd out the leisure time employees would otherwise devote to health-promoting activities such as physical exercise, relaxation and social participation [9]. In addition, because leisure and shirking are substitutes, employees with longer commutes are more prone to shirking [10]. With the decreasing cost of absence, a longer commute makes absence more rewarding as a way to obtain extra leisure, which can be used for purposes other than work [5]. Asking for "sickness" leave can therefore be regarded as the outcome of a rational decision.
No consensus has been reached in empirical studies either. It is a common belief that longer journeys induce more sickness absence. A body of evidence from developed markets has confirmed that longer commutes may increase the likelihood of illness-related absence [2,5,11,12], but several studies reveal no evidence supporting the commuting-absence effect [5,13]. More importantly, some studies provide evidence of heterogeneous commuting-absence effects. Using data from the Panel Study of Income Dynamics for the years 2011, 2013, and 2015, Gimenez-Nadal et al. (2018) found that the daily commute is associated with men's sick-day absences, while it has no significant effect on women's [14]. Similarly, Karlström and Isacsson (2010) pointed out that commuting time has a positive effect on sickness absence only for women with lower wages [15].
Additionally, several studies from non-Western countries show clear evidence of a negative relationship between commuting time and subjective well-being [17]. A survey of school teachers in Tokyo by Nomoto et al. (2015) also demonstrated that long-time commuters tend to get less sleep and exercise [18].
With rapid urbanization and rising private vehicle ownership, most employees in urban China bear a heavy commuting burden [8], and a growing body of research has focused on the nexus between commuting and urban residents' subjective well-being [8,[19][20][21]. However, the commuting-absence effect remains underexplored in non-Western countries.
In sum, previous studies have two limitations. Firstly, although sickness absence is a multi-factorial phenomenon [22][23], most studies were carried out in developed European markets, leaving the non-Western or developing context unexplored. Secondly, the mechanism linking commuting and sickness absence is still unclear: whether the commuting-absence effect is transmitted through health-related outcomes or through shirking behaviors remains an open question.
To fill these gaps, this study addresses two questions: is commuting time positively related to sickness absence? If so, what is the potential mechanism linking commuting time and sickness absence?
This study contributes to the existing literature in several distinct ways.
Firstly, following Goerke and Lorenz (2017) [5], a unique dataset (CMEES) and a zero-inflated negative binomial model are applied for the first time to explore the nexus between commuting and sickness absence in the Chinese context. Secondly, two potential transmission channels linking commuting and sickness absence are examined by estimating the effect of commuting on health-related outcomes and work effort.
Thirdly, the heterogeneous commuting-absence effects with respect to Hukou status, gender, patterns of commuting, scale of cities and types of companies are taken into full consideration within the context of China.
Data source
We use data from China's Matched Employer-Employee Survey (CMEES), conducted by the School of Labor and Human Resources, Renmin University. Using two-stage stratified sampling, the sample was drawn from an enterprise list built on the 2008 national economic census data. Respondents were managers responsible for employment relations or personnel matters in private and public sector companies with 20 or more staff. If a sampled enterprise refused to respond, it was replaced by another company of the same size in the same industry.
Commuting time is only available in the 2013 wave, which covers 4,532 employees from 444 enterprises in 12 cities. The CMEES not only collects rich information on company-level characteristics and on employees' demographic and employment traits, but also provides detailed information on both days absent due to sickness and commuting time, making it well suited to examining the effect of commuting on sickness absence.
Ethics Statement
This study is a secondary analysis of data from the CMEES conducted by the School of Labor and Human Resources, Renmin University, all of which were subject to multiple stages of review by experts to address methodological, ethical and legal issues related to data collection. Final approval of all CMEES surveys was required from the Research Ethics Committee of Renmin University to ensure that data collection complied with ethical requirements according to the Statistics Act.
Explanatory and outcome variables
The outcome variable is the annual number of days absent from work due to sickness, derived from the following survey item: "In the past year, how many days have you asked for a leave due to illness?" The focal variable is commuting time, measured as minutes spent on the one-way daily commute.
According to previous studies, control variables are divided into three sets: individual-level, company-level and city-level variables.
Individual-level variables consist of age, age squared, male (ref = female), education years, education years squared, migration status (ref = non-migrant), occupation category (ref = ordinary worker), job strain, overtime, training, job tenure, job security, injury, and wage (log). Company-level variables incorporate company type (ref = domestic private enterprise) and sector (ref = other non-manufacturing industry). City-level variables comprise three categories: first-tier, second-tier and third-tier cities.
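As an illustrative sketch (not the authors' actual code; the column names are hypothetical, not taken from the CMEES codebook), categorical controls with reference groups like those listed above are typically dummy-encoded before estimation:

```python
import pandas as pd

# Hypothetical employee records; column names are illustrative only
df = pd.DataFrame({
    "gender": ["female", "male", "male"],
    "company_type": ["domestic_private", "soe", "foreign"],
})

# drop_first=True drops the first category of each column, which then
# serves as the reference group in a regression (here: female and
# domestic_private, matching the reference categories listed above)
X = pd.get_dummies(df, columns=["gender", "company_type"], drop_first=True)
print(X.columns.tolist())
```

The dropped category absorbs into the regression intercept, so the remaining dummy coefficients are interpreted relative to it.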
Econometric model
Since the number of days absent due to sickness is a count variable (0, 1, 2, 3, and so on), the zero-inflated negative binomial (ZINB) model is appropriate for this study. The countfit function in Stata was used to perform goodness-of-fit tests across four count models. All fit statistics, including AIC, BIC and the Vuong test, indicate that the ZINB is the best model for this study (Table 1).
The ZINB regression includes three steps. First, a logit model is applied to the "certain zero" cases to predict whether an employee belongs to this group. Then, an NB model predicts the counts for those workers who are not certain zeros. Finally, the two models are combined. Descriptive statistics show that only 25.89% of employees commute less than 10 minutes; baseline estimates are reported in Table 3 (Model (4)).
Robustness checks
Several robustness checks are performed to verify the sensitivity of the main findings, as shown in Table 4. In Model (5), the baseline model is re-estimated excluding observations with more than 60 days of absence during the past year. To correct for measurement error, we also define sickness absence as a dummy variable (equal to 1 if the individual took sick leave during the past year) for an additional analysis of the commuting-absence relationship in Model (6). Similarly, we divide commuters into three subgroups in Model (7): short commutes of up to 10 minutes (0 ≤ CT ≤ 10 minutes), middle commutes of more than 10 and up to 26 minutes (10 < CT ≤ 26 minutes), and long commutes of more than 26 minutes (CT > 26 minutes). Model (8) re-estimates the baseline for those who were not injured at work during the past year, while Model (9) excludes individuals whose medical expenditure in the past year exceeded 10,000 Yuan. Model (10) incorporates transportation mode variables: the active mode refers to those who walked or cycled to work, while the passive mode includes those who drove cars or used public transportation.
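The subgrouping in Model (7) amounts to a simple binning rule; a sketch with made-up commute times:

```python
import pandas as pd

# Hypothetical one-way commute times in minutes (illustration only)
ct = pd.Series([5, 10, 18, 26, 45, 80], name="commute_minutes")

# Bins follow the paper's cutoffs: short [0, 10], middle (10, 26], long (26, inf)
groups = pd.cut(
    ct,
    bins=[0, 10, 26, float("inf")],
    labels=["short", "middle", "long"],
    include_lowest=True,
)
print(groups.tolist())
```

With `right=True` (the default) the upper edge belongs to each bin, so a 26-minute commute falls in the middle group and 26+ minutes in the long group, matching the definitions above.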
As shown in Table 4, the positive association between commuting and sickness absence is robust across all of these specifications.
Mechanism analysis
There are two possible mechanisms linking commuting and sickness absence. One is that a longer commute might weaken employees' health status, inducing involuntary or unavoidable absenteeism; the other is that commuting may induce shirking behaviors, thereby increasing the probability of voluntary or avoidable absenteeism. Health-related outcomes, including subjective health indexes (self-rated health status, degree of depression) and objective health indicators (BMI, obesity, and annual medical expenses), are the outcomes in Models (11)–(15). Because direct measures of shirking are unavailable in the dataset, we use the length of weekly overtime and the probability of weekly overtime (whether weekly working time exceeds 40 hours) as proxies for work effort to check the shirking mechanism in Models (16) and (17). As shown in Table 5, a longer commute is associated with poorer self-rated health status and a higher degree of psychological depression, and is also strongly related to increases in BMI, annual medical expenses and the risk of obesity.
However, the results reveal that commuting has no significant effect on either the length or the probability of overtime; that is, the shirking-behavior mechanism of the commuting-absence effect is not confirmed (see Models (16)–(17) in Table 5).
Heterogeneous effects
In this section, we estimate the heterogeneous effects of commuting on sickness absence with respect to Hukou status, gender, transportation mode, city scale and enterprise type.
The estimates in Table 6 indicate that commuting is positively associated with migrants' sickness absence, but has no significant effect on urban citizens. Similarly, commuting has a positive influence on men's absence, but no significant influence on women's. As for transportation mode, no significant effect is found in either the active or the passive group. A significant commuting-absence effect is captured in the first-tier city group, while the association is not significant in second-tier and third-tier cities. Commuting has a negative effect on employees in foreign-owned enterprises, a positive effect on workers in domestic private enterprises, and no significant effect on employees in state-owned enterprises.
Discussion
The relationship between commuting and sickness absence is not only theoretically ambiguous, but empirical findings also remain inconsistent.
The benchmark results demonstrate that each additional minute of commuting multiplies the expected number of illness-related absence days by exp(0.0038) ≈ 1.0038, consistent with van Ommeren and Gutiérrez-i-Puigarnau (2011) and Goerke and Lorenz (2017), which also confirmed a positive nexus between commuting and sickness absence [2,5]. This result is robust across several specifications, including excluding observations, correcting for measurement error, and incorporating transportation modes.
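Since the count part of a ZINB is log-linear, the coefficient converts to an incidence-rate ratio by exponentiation; a quick check (0.0038 is the coefficient reported above, while the 30-minute scaling is our own illustration):

```python
import math

beta = 0.0038                      # commuting-time coefficient (count part)
irr = math.exp(beta)               # rate ratio per additional minute
irr_30 = math.exp(30 * beta)       # rate ratio for 30 extra minutes

print(f"IRR per extra minute:  {irr:.4f}")
print(f"IRR per extra 30 min:  {irr_30:.3f}")
```

That is, one extra minute raises expected sick days by about 0.4%, which compounds to roughly a 12% increase for a half-hour longer commute.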
The mechanism analysis reveals that longer commuting is associated with poorer subjective and objective health status, consistent with previous evidence linking commuting to negative health-related outcomes [8,9,24]. Health-related outcomes therefore act as an important transmission channel between commuting and sickness absence, in line with Gimenez-Nadal et al. (2018) [14], who also document an association between commuting and workers' health-related outcomes. This finding implies that more time spent commuting may disrupt employees' work-life balance and burden both objective and subjective health, through a combination of tension, tiredness, depression, irregular diet and so on, which in turn raises the likelihood of involuntary or unavoidable sickness absenteeism and lowers productivity.
The mechanism analysis also demonstrates that commuting has no significant influence on work effort, implying that commuting does not increase the probability of shirking behaviors. This contrasts with Goerke and Lorenz (2017), who found that commuting time is positively related to working overtime [5]. One potential explanation is that overtime is a common phenomenon in China [25]: the overtime premium is an important part of the salary, and employees work overtime for the additional premium on top of basic compensation [26]. Moreover, Chinese employers are more likely to overburden their employees [27], so working overtime has become a necessary routine to meet employers' claims or expectations. Thus, long commuters in China are less likely to shirk, both for financial reasons and because of employers' requirements.
The estimation of heterogeneous effects yields further interesting results. Commuting has a positive effect on migrants' sickness absence, but no significant effect on urban citizens. This contrasts with the finding by Chia (1988) that migrants in Singapore have a higher probability of sickness absence than their local counterparts [28].
A potential explanation is that rural migrants' access to public health services is legally restricted by the Hukou system [29]. Once they get sick, they tend to choose a nearby private clinic rather than a formal hospital [30]. These unregulated private clinics usually fail to provide the official certificates migrants need to obtain sick-leave permission. Without such permission, rural migrants would suffer an extra economic loss from taking a day off, so they are less likely to be absent even when ill or uncomfortable [31].
Commuting is positively related to men's absence, but has no significant influence on women's. This is consistent with Gimenez-Nadal et al. (2018), who draw the same conclusion for the US [14]. The gender differential may be due to women's shorter commuting times: gender stratification remains a serious problem in China, and women often bear a disproportionately heavy household responsibility, including housework and childcare [32][33][34].
For both the passive and active modes, commuting time has a positive but insignificant effect on sickness absence. This differs from previous studies, which confirmed that active commuting is related to better health status and less sickness absence [35][36][37]. A possible reason is that, unlike in other developed countries, cycling in China may worsen health status: non-motorized traffic plans and public bicycle facilities are neglected by local governments [38], and insufficient bicycle lanes leave cyclists more vulnerable in mixed traffic, threatening their health through potential bike-automobile collisions [39].
Long commutes have a positive influence on sickness absence in first-tier cities, but no significant effect in second-tier and third-tier cities. This implies that employees in megacities suffer a heavier commuting burden [8,40], which does more harm to their subjective and objective health status. As for enterprise differentials, our findings reveal that commuting is negatively related to absence for employees in foreign-owned enterprises, but positively related for employees in domestic private enterprises. Potential reasons are as follows: foreign-owned enterprises are more likely to pay transportation allowances or provide shuttle buses for long-distance commuters [41], alleviating employees' commuting burden. Meanwhile, foreign-invested enterprises focus more on humanistic goals, such as the care and welfare of employees [42]. Compared with domestic private enterprises, they are more likely to improve employees' quality of work life, including more flexible working-time arrangements or more comprehensive health insurance to help long commuters recover.
Main findings
With rapid urbanization in China, ensuring work-life balance and promoting employee health has become an urgent issue in occupational health security. This study confirms that commuting time is positively related to sickness absence, a result robust across several specifications. More importantly, it further shows that employees' health-related outcomes act as the main transmission channel of the commuting-absence effect, while there is no clear evidence supporting shirking behaviors. Additionally, the impact of commuting on sickness absence differs across Hukou status, gender, transportation modes, city scale and enterprise type.
Implications
This study has several implications. Firstly, promoting public transportation must be given priority in the new urbanization process to relieve the heavy burden on employees with long commutes. Secondly, considering the negative externality of commuting on productivity, employers are encouraged to provide dormitories to reduce commuting duration.
Limitation
This study also has a limitation. The 2013 CMEES dataset is cross-sectional; given potential bias from unobserved factors, endogeneity should be addressed in future studies to establish the causality between commuting and sickness absence.
Availability of data and material
The CMEES data that support the findings of this study are available from the School of Labor and Human Resources, Renmin University, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. The CMEES data are however available from the authors upon reasonable request and with permission of the School of Labor and Human Resources, Renmin University. China's Matched Employer-Employee Survey Data Access can be contacted for more information (yuhui_li@ruc.edu.cn; df594133@163.com).
Consent for Publication
The modified fascial sling technique for ulnar nerve anterior transposition: surgical techniques and results
Background Approaches to surgical treatment to cubital tunnel syndrome include simple decompression, decompression with medial epicondylectomy, and decompression with anterior transposition of the ulnar nerve. Transposition of the ulnar nerve involves decompression and transposition of the nerve anteriorly to a subcutaneous, intramuscular, or submuscular position. However, transposing the ulnar nerve to subcutaneous plane renders it more susceptible to external trauma. Hence, this technique article introduces the use of a modified fascial sling. Methodology The modified fascial sling technique for anterior transposition of the ulnar nerve involves careful dissection to identify the ulnar nerve, decompression of the nerve, then transposition of the ulnar nerve anterior to the medial epicondyle. An AlloWrap (Stryker, Kalamazoo, MI, USA) is first wrapped around the ulnar nerve, followed by wrapping a fascial sling fashioned from the flexor carpi ulnaris fascia. A prospective case series for this surgical technique was conducted. Wilcoxon signed-rank test compared preoperative and postoperative qDASH-9 scores, an abbreviated questionnaire to assess functional limitations of the upper limb. Results Five patients were included in this study, with a mean duration of follow-up of 530.4 days. The mean QuickDASH-9 functional disability score was 36.5 ± 25.1 preoperatively and 20.6 ± 12.8 postoperatively, demonstrating statistically significant improvement (P = .008). Conclusion The modified fascial sling technique for anterior transposition of the ulnar nerve was developed to address the complications of perineural adhesions after transposition causing tethering of the ulnar nerve. At the same time, the fascial sling prevents posterior subluxation of the ulnar nerve back to its original location, thereby reducing the risk of recurrent symptoms.
Cubital tunnel syndrome results from chronic compression or repeated trauma to the ulnar nerve, most commonly occurring within the cubital tunnel, although compression at other sites can occur. Diagnosis is often made based on signs and symptoms and confirmed electrodiagnostically via nerve conduction study and electromyography to identify abnormal ulnar nerve conduction across the elbow.
The ulnar nerve originates from the medial cord of the brachial plexus, entering the posterior compartment of the arm deep to the arcade of Struthers. It then enters the cubital tunnel, bounded laterally by the olecranon, medially by the medial epicondyle, superficially by the Osborne ligament, with the floor formed by the joint capsule and posterior band of the medial collateral ligament of the elbow. The ulnar nerve then passes between the two heads of the flexor carpi ulnaris to enter the forearm.
Cubital tunnel syndrome patients typically present with paresthesia and/or radiating pain along the medial forearm to the palmar and dorsal aspects of the hand and ulnar one and a half fingers. In severe cases, they may have weakness of wrist flexion, flexion of the distal interphalangeal joints of the ring and little fingers, and/or finger abduction and adduction. On examination, there may be wasting of the hypothenar eminence, mild clawing of the little and ring finger, positive Froment's sign (adductor pollicis muscle weakness), Wartenberg sign (third palmar interosseous muscle weakness), reproduction of sensory symptoms on elbow flexion, and Tinel's sign along the course of the ulnar nerve.
Compression is often the principal mechanism resulting in cubital tunnel syndrome. Possible sites of compression include the arcade of Struthers, cubital tunnel, Osborne's ligament, and fascia of the flexor carpi ulnaris. Elbow flexion exacerbates structural restriction by changing the shape of the cubital tunnel from an oval to an ellipse, narrowing the canal by 55% and increasing intraneural pressure. Nonsurgical treatment includes elbow splinting,3 lifestyle modifications, physiotherapy, analgesia, and corticosteroid injections. In the context of severe symptoms and signs or failure of conservative treatment, surgical intervention may be considered. This article will provide a broad overview of existing surgical techniques and introduce the modified fascial sling technique for ulnar nerve transposition, which has been practiced in our institution.
Common surgical techniques
The surgical technique is influenced by the underlying pathology and site of compression. Broadly, approaches to surgical treatment include simple decompression, decompression with medial epicondylectomy, and decompression with anterior transposition of the ulnar nerve.
Simple decompression of the cubital tunnel, which can be done open or endoscopically, involves making an incision overlying the ulnar nerve to divide the constricting fascia and Osborne's ligament, thereby relieving the compressive forces. Advantages include no devascularization of the ulnar nerve, shorter operative time, and avoidance of the scarring, kinking, and compression at secondary sites that are associated with nerve transposition. The smaller skin incision also heals faster and is more aesthetically appealing. In patients with more severe symptoms, simple decompression, via either open or endoscopic methods, may be insufficient.
Classic medial epicondylectomy was first described by King and Morgan in 1950.12 Simple decompression is performed, followed by exposure of the medial epicondyle and detachment of the common flexor origin. The medial epicondyle is osteotomized from the metaphyseal-diaphyseal junction to the distal supracondylar ridge, with the remaining bony edge smoothened. The common flexor origin is then repaired to the periosteum. Compared to simple decompression, medial epicondylectomy allows the ulnar nerve to subluxate anteriorly, thus relieving prior pressure and traction on the ulnar nerve within the cubital tunnel, yet preserving the gliding tissues surrounding the nerve and the ulnar nerve blood supply. In a study by Hicks et al, patients who had simple decompression and medial epicondylectomy had a statistically significant reduction in strain of the ulnar nerve postoperatively, as compared to no significant reduction in strain for patients who had a simple decompression with no medial epicondylectomy.9 However, complications of medial epicondylectomy include medial instability of the elbow due to detachment of the anteromedial collateral ligament, valgus elbow instability if >40% of the medial epicondyle is removed, loss of the protective prominence of the medial epicondyle, postoperative pain, ulnar nerve subluxation over the remnant medial epicondyle, and weakness related to partial detachment of the common flexor origin of the forearm.17 Hence, a modification, first reported by Le Viet in 1991,13 was introduced, whereby a frontal partial medial epicondylectomy is utilized to preserve the protective function of the medial epicondyle for the ulnar nerve. With this technique, no cases of ulnar nerve injury or subluxation, medial elbow instability, or weakness of the flexor muscles of the forearm were observed at six months' follow-up.17

[Table I: QuickDASH-9 questionnaire. Respondents rate symptoms and their ability to perform activities over the last week by circling the appropriate number; if an activity was not performed, the best estimate is given, regardless of which hand or arm is used. A score cannot be calculated if more than 1 item is missing; QuickDASH-9 score = (sum − 9) × 1.1 × 5/2, with a missing response imputed as the average of the remaining items.]

Transposition of the ulnar nerve involves decompression and transposition of the nerve anteriorly to a subcutaneous, intramuscular, or submuscular position. The first successful anterior transposition was reported by Curtis in 1898.5 A human cadaveric study by Gelberman et al7 found traction of the ulnar nerve during flexion of the elbow to be a major contributor to increased intraneural pressures. Hence, advocates of transposition argue that simple decompression is insufficient to address dynamic compression of the ulnar nerve during elbow movement. However, it has been argued that transposition requires extensive dissection of the ulnar nerve, and thus puts the ulnar nerve vascularity at risk.16 Transposing the ulnar nerve to the subcutaneous plane places the nerve superficial to the common flexor origin, rendering it more susceptible to external trauma, especially in patients with minimal subcutaneous tissue. The nerve is also predisposed to subluxation to its prior position. This can be mitigated by a fascial sling,4 which is the technique that will be described in this study, or a vascularized adipose flap.21

For intramuscular transposition, a channel is cut in the flexor pronator muscles to accommodate the ulnar nerve intramuscularly. However, there is a significant risk of scarring and recurrence of ulnar nerve compression. Lastly, submuscular transposition, first described by Learmonth in 1942,14 entails placing the ulnar nerve deep to the common flexor muscle group after dividing the common flexor origin, adjacent to the median nerve.
Materials and methods
A retrospective case series was conducted. All patients who underwent cubital tunnel release with ulnar nerve transposition using the fascial sling technique from November 2018 to January 2022 by the authors were included. A nerve conduction study was performed to confirm the diagnosis. Ultrasound and/or magnetic resonance imaging were used to evaluate for any underlying pathology.
Age, gender, hand dominance, and details regarding presentation, operation, and follow-up were collected from hospital medical records. Telephone interviews were conducted in May 2022 using the QuickDASH-9 questionnaire (Table I; DASH = disabilities of the arm, shoulder and hand).6 The QuickDASH-9 questionnaire was chosen in view of the feasibility and practicality of administering a 9-item questionnaire over the telephone, its unidimensional structure, high correlation with the original DASH questionnaire, high reliability, internal consistency, and responsiveness.6 A score of 0 represented no functional disability, while 99 represented the greatest possible functional impairment. A qDASH score was also taken preoperatively. Improvement of symptoms postoperatively was assessed.
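A minimal sketch of the scoring rule, assuming 9 items each answered 1–5 and a (sum − 9) × 1.1 × 5/2 scaling, which maps the 9–45 raw range onto the stated 0–99 scale (the function name is ours, and the exact scaling should be checked against Table I):

```python
def quickdash9_score(responses):
    """Score the QuickDASH-9: 0 = no disability, 99 = worst.

    `responses` holds 9 items rated 1-5; None marks a missing item.
    No score is calculated if more than 1 item is missing; a single
    missing response is imputed as the mean of the remaining items.
    """
    if len(responses) != 9:
        raise ValueError("QuickDASH-9 expects exactly 9 items")
    answered = [r for r in responses if r is not None]
    if len(answered) < 8:
        return None  # more than 1 missing item: score not calculated
    mean = sum(answered) / len(answered)
    total = sum(answered) + mean * (9 - len(answered))
    return (total - 9) * 1.1 * 5 / 2

print(quickdash9_score([1] * 9))  # best possible: 0.0
print(quickdash9_score([5] * 9))  # worst possible: ~99
```

With this scaling, all-best responses score 0 and all-worst responses score 99, matching the range described above.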
Statistical analysis was performed using Statistical Product and Service Solutions Version 22.0. The Wilcoxon signed-rank test compared preoperative and postoperative qDASH-9 scores, with a P value <.05 taken to be statistically significant.

Surgical technique

The olecranon and medial epicondyle, anatomic landmarks for the cubital tunnel, are marked out on the skin (Fig. 1). A 5-centimeter curvilinear longitudinal incision is made over the cubital tunnel. Osborne's ligament is identified through blunt dissection and is carefully divided to expose the ulnar nerve. Throughout the dissection, the medial brachial cutaneous nerve and medial antebrachial cutaneous nerve must be protected to avoid inadvertent transection and the resulting development of painful neuromas. Subluxation of the ulnar nerve is confirmed intraoperatively under direct visualization while flexing and extending the elbow (Fig. 2).
Identification of the ulnar nerve is assisted by placing it within a vessel loop to avoid inadvertent injury to the nerve (Fig. 3).
Dissection continues proximally to the arcade of Struthers, freeing the ulnar nerve from the medial intermuscular septum under direct visualization (Fig. 4). Distally, careful dissection continues until the fascia of the flexor carpi ulnaris is identified.
The ulnar nerve is transposed anterior to the medial epicondyle after adequate freeing of the nerve. Care should be taken to preserve the entire longitudinal blood supply of the ulnar nerve (Fig. 5).
An approximately 1 × 2 cm fascial sling is fashioned from the flexor carpi ulnaris fascia (Fig. 6). Existing fascial sling techniques entail wrapping the fascial sling directly around the ulnar nerve (Fig. 7). 8,15 In our technique, we utilize the AlloWrap, a human amniotic membrane designed to provide a biologic barrier following surgical repair. 1 It contains bioactive proteins that support wound healing. It aims to reduce friction between the ulnar nerve and the fascial sling and to minimize scarring at the transposition site, which can predispose to symptom recurrence. We apply the AlloWrap loosely around the ulnar nerve (Fig. 8). The fascial sling is loosely placed over the AlloWrap and secured with an absorbable suture (Fig. 9). After irrigating the wound, closure is done in layers with absorbable sutures. An adequate padded dressing is placed over the surgical site, with the upper limb placed in an arm sling. Postoperatively, patients underwent sessions with the physiotherapist and occupational therapist for nerve gliding exercises.
Results
Six patients were approached for inclusion in this study; one declined. The mean age was 54.4 years (range, 26-83 years). All were of Chinese ethnicity, with only one male patient. Ulnar nerve transposition was performed on the dominant arm in 80% of patients. Preoperatively, 20% presented with pain (visual analog scale [VAS] 6/10), 80% with paresthesia, 40% with hypoesthesia, and 80% with subjective weakness (Table II). On examination, 40% had muscle wasting of the hypothenar eminence and/or interossei, and 60% had a positive Tinel's sign over the cubital tunnel. All patients underwent ultrasound evaluation preoperatively, and 20% underwent additional magnetic resonance imaging evaluation. The mean preoperative QuickDASH-9 functional disability score was 36.5 ± 25.1.
The mean duration of follow-up was 530.4 days (Table III). The patient who experienced pain preoperatively had persistent pain postoperatively, but with a reduction in VAS from a preoperative score of 6/10 to a postoperative score of 1/10. There was improvement in postoperative paresthesia, hypoesthesia, and subjective weakness in all patients. Patients who were working preoperatively returned to work after an average of 83.3 days. None of the patients regretted undergoing surgery. Postoperative functional disability scored 20.6 ± 12.8 on the QuickDASH-9, demonstrating statistically significant improvement (P = .008).
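A paired pre/post comparison like the one reported above can be reproduced with an exact Wilcoxon signed-rank test; a minimal pure-Python sketch follows (the scores are synthetic examples, not the study data):

```python
from itertools import product

def _ranks(values):
    """Average ranks (1-based), with tied values sharing a rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_signed_rank_exact(pre, post):
    """Exact two-sided Wilcoxon signed-rank test for paired samples.

    Enumerates all 2^n sign assignments of the ranked |differences|,
    so it is only practical for small n (fine for a small case series).
    Zero differences are dropped, as in the standard procedure.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    ranks = _ranks([abs(d) for d in diffs])
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    dist = [sum(r for r, s in zip(ranks, signs) if s)
            for signs in product((0, 1), repeat=n)]
    ge = sum(1 for w in dist if w >= w_plus)
    le = sum(1 for w in dist if w <= w_plus)
    p = min(1.0, 2 * min(ge, le) / len(dist))
    return w_plus, p

# Synthetic paired qDASH-9 scores (lower = less disability).
pre = [36, 50, 20, 60, 10, 40]
post = [20, 30, 10, 30, 5, 25]
w, p = wilcoxon_signed_rank_exact(pre, post)
print(w, p)  # all scores improved: W+ = 0, p = 0.03125
```

With every pair improving, the statistic sits at the extreme of the exact null distribution, which is why even six pairs can reach significance.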
Discussion
Anterior transposition of the ulnar nerve is an effective treatment for cubital tunnel syndrome patients who have failed nonsurgical management. A study by Huang et al demonstrated improvement in VAS and disability score (DASH), 10 while a 67-month follow-up study by Stuebe et al found improvement in intrinsic muscle mass and functional outcomes. 19 These findings are in keeping with our findings of improved patient symptoms, functional disability score (QuickDASH-9), and no patient regret with regard to undergoing the surgery.
Compared to simple decompression, ulnar nerve transposition entails a larger incision, more extensive dissection and manipulation of the ulnar nerve and surrounding structures. Hence, this puts a patient more at risk of complications such as nerve fibrosis, perineural adhesions, vascular compromise to the nerve, infection, scar sensitivity, posterior subluxation, and damage to surrounding nerves resulting in painful neuromas. However, a meta-analysis by Said et al comparing the outcomes between simple decompression and ulnar nerve transposition found no significant difference between the two methods in terms of eventual clinical outcome scores, and rate of revision surgery. 18 Simple decompression is often considered in patients with milder forms of cubital tunnel syndrome and where there is a structural abnormality causing compression that can be removed surgically. On the other hand, anterior transposition is preferred in the context of ulnar nerve subluxation where there is chronic insult to the ulnar nerve from ongoing subluxation.
Clinical failure rates have been reported to be approximately 25% after anterior transposition of the ulnar nerve. 20 In a systematic review by Kholinne et al, 11 perineural scarring, found in 79% of cases, was the most common intraoperative finding in patients who required revision surgery. Hence, the modified technique of using a fascial sling was developed to address the complication of perineural adhesions after transposition causing tethering of the ulnar nerve. At the same time, it prevents posterior subluxation of the ulnar nerve back to its original location, thereby reducing the risk of recurrent symptoms. However, scarring of the fascial sling and tethering remain significant risk factors for symptom recurrence. 21 Hence, placement of the AlloWrap helps to reduce scarring and tethering so as to optimize surgical outcomes.
This study describes a modified fascial sling technique with the addition of AlloWrap for anterior transposition of the ulnar nerve. At our institution, a 2 × 2 cm piece of AlloWrap costs US$750, which does increase the cost of the surgery to the patient. However, its use aims to reduce postoperative scarring rates and the need for revision surgery, and future cost-effectiveness analysis is required to analyze whether our surgical technique with the AlloWrap results in overall cost savings. We hope that our proposed technique will decrease scarring of the ulnar nerve, but future studies with a longer duration of follow-up, larger population size, objective measures for preoperative and postoperative assessment (eg, grip strength, two-point discrimination, and electrophysiological assessment), and comparisons between this modified technique and other techniques without the use of AlloWrap are needed to better assess the effectiveness of this proposed technique. Disclaimers: Funding: No funding was disclosed by the authors. Conflicts of interest: The authors, their immediate families, and any research foundation with which they are affiliated have not received any financial payments or other benefits from any commercial entity related to the subject of this article.
|
2023-05-29T15:03:59.965Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "d2b623928960c61588a075d685b1a2079bebf546",
"oa_license": "CCBYNCND",
"oa_url": "http://jsesreviewsreportstech.org/article/S2666639123000482/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39a4228c8abed0e1eab2d72e7702b800f434a7d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259996413
|
pes2o/s2orc
|
v3-fos-license
|
Exploring the role of m6A modification in the great obstetrical syndromes
Abstract Background N6-methyladenosine (m6A) is one of the predominant RNA epigenetic modifications that modify RNAs reversibly and dynamically by “writers” (methyltransferase), “erasers” (demethylase), and “readers.” Objective This review aimed to provide a comprehensive understanding of the complexity of m6A regulation in the great obstetrical syndromes to understand its pathogenesis and potential therapeutic targets. Methods The terms “placenta or trophoblast” and “m6A or N6-methyladenosine” were searched in PubMed databases (June 2023). Results In this review, we discuss the regulatory role of m6A in the great obstetrical syndromes such as preeclampsia (PE), spontaneous abortion (SA), hyperglycemia in pregnancy (HIP) and fetal growth to emphasize the clinical relevance of m6A dysregulation in pregnancy. We also describe mechanisms that potentially involve the participation of m6A methylation, such as proliferation, invasion, migration, apoptosis, autophagy, endoplasmic reticulum stress, macrophage polarization, and inflammation. Conclusion We summarize the recent research progress on the role of m6A modification in the great obstetrical syndromes and placental function and provide a brief perspective on its prospective applications.
Introduction
In the 1970s, N6-methyladenosine (m6A), one of the predominant internal RNA modifications in eukaryotes, was shown to modify RNAs dynamically and reversibly to support diverse biological processes. Approximately 0.1-0.4% of adenines exhibit m6A modification, with an average of 3-5 methylated sites in each mRNA in mammals [1]. Methylation occurs at the sixth position of the nitrogen atom of adenosine at the posttranscriptional level. Currently, colorimetry [2], dot blotting [3], and liquid chromatography/mass spectrometry (LC/MS) [4] are mainly used to detect global m6A levels. Only once m6A-specific methylated RNA immunoprecipitation sequencing (MeRIP-seq) was developed was the transcriptome-wide profile of m6A localization in RNA described [5]. It has been confirmed that the m6A modification usually occurs in the motif DRACH (D = G/A/U, R = G/A, H = A/U/C). m6A is highly enriched in the 3′ untranslated region (UTR) and near the stop codon, affecting the stability and function of RNA [6].
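As an illustration of the DRACH consensus just described, candidate m6A sites can be located in an RNA sequence with a simple regular-expression scan (a toy sketch, not a methylation predictor — actual m6A calling requires experimental data such as MeRIP-seq):

```python
import re

# DRACH consensus: D = G/A/U, R = G/A, then the A-C core, H = A/U/C.
# A lookahead is used so overlapping motifs are all reported.
DRACH = re.compile(r"(?=([GAU][GA]AC[AUC]))")

def find_drach(rna):
    """Return (start, motif) pairs for every DRACH match in `rna`.

    The potentially methylated adenosine is the third base of each
    5-mer motif.
    """
    return [(m.start(), m.group(1)) for m in DRACH.finditer(rna)]

print(find_drach("AAGGACUUU"))   # [(2, 'GGACU')]
print(find_drach("UUUUUUUUU"))   # []
```

Because the consensus is degenerate, such a scan vastly overcalls sites relative to the 0.1-0.4% of adenines actually methylated, which is exactly why sequencing-based mapping was needed.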
The placenta is an active interface between the mother and fetus that is essential for maternal-fetal nutrient and gas exchange, maintenance of a healthy pregnancy, and fetal development. "Great obstetrical syndromes" is a general term for several complications of pregnancy that conceivably lead to short- and long-term maternal and fetal risks and require close health monitoring and treatment [7]. m6A provides a new perspective from which to explore the pathogenesis of placenta-related diseases. In this review, we will summarize m6A-related functional proteins and the current research progress in the great obstetrical syndromes.
RNA m6A methylation
m6A methylation is a reversible and dynamic modification of mRNA and other types of noncoding RNA (ncRNAs), including lncRNA, microRNA (miRNA), circRNA, ribosomal RNA (rRNA), transfer RNA (tRNA), and small nuclear RNA (snRNA). The installation and removal of m6A on RNAs are catalyzed by methyltransferases (writers) and demethylases (erasers), respectively. Altered methylation is recognized by readers, which exert a regulatory role in RNA stability, decay, splicing, translation, and nuclear export [1].
Unlike mRNA, which can be translated into proteins, ncRNAs also show specialized functions in the modulation of gene expression at both the transcriptional and posttranscriptional levels. m6A modification can influence the generation and maturation of miRNAs; the splicing, generation, translation, subcellular trafficking (cytoplasmic export), and degradation of circRNAs; and RNA-protein interactions in lncRNAs [8][9][10][11][12]. Interestingly, ncRNAs can also affect m6A modification by interacting with m6A regulatory proteins [13,14].
m6A Writers
The installation of m6A methylation is achieved by a highly conserved RNA methyltransferase complex. The RNA methyltransferase complex is mainly composed of methyltransferase-like 3 (METTL3), which plays a central role [2]; METTL14, which supports the METTL3 protein structure [15]; and Wilms' tumor 1-associated protein (WTAP), which regulates the recruitment of the complex to target mRNA [16]. Knockdown of METTL3 led to a decrease in total m6A levels in trophoblasts [2].
m6A Erasers
Demethylation is mediated by alkylation repair homolog protein 5 (ALKBH5) [17] and fat-mass and obesity-associated protein (FTO) [18]. Both belong to the AlkB family of nonheme Fe(II)/α-ketoglutarate (α-KG)-dependent dioxygenases, which catalyze a wide range of biological oxidations, including demethylation of m6A [12]. Silencing of either FTO or ALKBH5 increased total m6A levels in RNA, and the demonstration of their demethylase activity proved that this posttranscriptional modification of mRNAs is reversible [17,18].
m6A Modification in the great obstetrical syndromes
Normal placental development is critical for sustaining intrauterine life. Trophoblasts proliferate and differentiate into villous and extravillous trophoblasts (EVTs) after implantation. Invasion and migration of trophoblasts mediate immunotolerance and remodeling of the uterine spiral artery at the maternal-fetal interface during embryo implantation [23]. Under physiological conditions, trophoblasts maintain homeostasis through autophagy at a low basal level. Excessive autophagy can destroy normal cellular components and eventually lead to apoptosis [24]. Dysfunctional trophoblast phenotypes and abnormal placentation can lead to adverse pregnancy-related complications. The molecular mechanisms of m6A modification governing placental formation and trophoblast cell lineage specification and differentiation are an emerging field of research.
We conducted a literature search using the PubMed database ranging from 2013 to 2023; the keywords searched mainly included "trophoblast or placenta" AND "m6A or N6-methyladenosine." This review included seven studies based only on human placenta samples [3,, one study based on both human and rat placenta [29], one study based only on an in vitro trophoblast model [30], and one study based only on animal models [4]. In addition, 18 studies were based on clinical specimens and in vitro model validation [2, 8-10, 13, 14, 31-42], and three of these also conducted animal studies [40][41][42]. Furthermore, four of these studies employed MeRIP-seq [5,9,25,31]. The m6A modification-related studies in the various obstetrical syndromes are shown in Table 1, the regulation of placental function by m6A modification is shown in Figure 1, and the detailed description follows. PE: preeclampsia; loPE: late-onset preeclampsia; eoPE: early-onset preeclampsia; RSA: recurrent spontaneous abortion; SA: spontaneous abortion; GDM: gestational diabetes mellitus; IUGR: intrauterine growth restriction; OSO: oxidized soybean oil; LBW: low birth weight.
Preeclampsia (PE)
PE, defined as the development of hypertension and proteinuria after 20 weeks of gestation, is one of the main pregnancy complications contributing to preterm birth, perinatal death, maternal mortality, and intrauterine growth retardation (IUGR). Early-onset PE (eoPE, with delivery at <34 weeks of gestation) and late-onset PE (loPE, with delivery at 34+0 weeks of gestation or later) are the two types of PE [45]. Abnormal proliferation, invasion, migration, and dysfunctional syncytialization of human placental trophoblasts and a severe endoplasmic reticulum (ER) stress state are involved in the pathogenesis of PE [46]. In most studies, m6A levels were increased in PE placentas, and the expression of METTL3 and METTL14, the main methyltransferases, was upregulated in PE placentas [2,5,32,43]. METTL3 affected the expression of methylated transmembrane BAX inhibitor motif containing 6 (TMBIM6) by regulating its mRNA stability via YTHDF2 in HTR-8/SVneo cells, thereby regulating ER stress [43]. METTL14 overexpression inhibited proliferation and invasion and induced light chain 3 (LC3)-dependent autophagy in HTR-8/SVneo cells by epigenetically elevating forkhead box O3a (FOXO3a) [32]. METTL14 overexpression also augmented circPAPPA2 m6A methylation but decreased its reader IGF2BP3, leading to a decline in circPAPPA2 levels and inhibition of TEV1 cell invasion [9]. Knockdown of METTL3 in rats reduced the adverse effects of PE, including reductions in fetal weight, placental weight, and renal pathological changes [43]. These outcomes suggest that METTL3 is a potential target for the treatment of PE.
However, a few studies have proposed that global m6A levels are significantly reduced in the eoPE placenta [33] and that RNA methyltransferases, including METTL3 and WTAP, are downregulated in the PE placenta [33,34]. Knockdown of METTL3 and WTAP significantly inhibited the invasion and migration of HTR-8/SVneo cells through m6A modification of myosin light chain kinase (MYLK) [34] and high mobility group nucleosomal binding domain 3 (HMGN3) [33], respectively. It has also been shown that the expression of ALKBH5 was upregulated in PE placenta and in HTR-8/SVneo cells treated with hypoxia/reoxygenation (H/R) [32,40]. The abnormal expression of METTL3 may thus be involved in the occurrence of PE by regulating the m6A modification levels of different factors, thereby affecting trophoblast invasion and ER stress. Given the complexity of METTL3 functionality, further investigations remain to be performed. Guo et al. further demonstrated that inhibition of ALKBH5 promotes histone lysine demethylase 3B (KDM3B)-mediated activated leukocyte cell adhesion molecule (ALCAM) promoter demethylation by facilitating peroxisome proliferator-activated receptor gamma (PPARG) mRNA m6A modification, further activating the Wnt/β-catenin pathway, which promotes HTR-8/SVneo cell proliferation and migration and alleviates PE-like features in pregnant mice [40]. Therefore, ALKBH5 could also be a candidate target for the treatment of PE.
In addition, abnormal expression of m6A readers has also been observed in the PE placenta. The expression of IGF2BP2/3 was downregulated in the placentas of PE patients, and the mRNA level of IGF2BP1, a protective factor in PE, was also decreased in the blood plasma of pregnant women with PE [13,14,35]. miR-423-5p and miR-181a-5p were increased in both the plasma and placenta of patients with severe PE, and both suppressed the invasion and migration of HTR-8/SVneo cells by directly inhibiting IGF2BP1 and IGF2BP2, respectively, through a conserved binding site in the 3′-UTR [13,14]. In addition, IGF2BP2 regulates the RNA stability of Linc01116, which is related to poor uterine spiral artery remodeling, via m6A methylation in HTR8/SVneo cells [8]. IGF2BP3 significantly inhibits the invasion and migration capacities of HTR8/SVneo cells and of EVTs from first-trimester human placental villi and decreases the mRNA level of IGF2 in HTR8/SVneo cells [35]. These findings on IGF2BP function and its crosstalk with ncRNAs expand our understanding of the role of m6A in trophoblast dysfunction and PE development.
Spontaneous abortion (SA)
Spontaneous abortion (SA) is one of the most common complications of pregnancy and may be associated with inadequate trophoblast cell invasion. Recurrent spontaneous abortion (RSA), also called recurrent miscarriage (RM), is defined as the failure of two or more clinically recognized pregnancies before 20-24 weeks of gestation and includes embryonic and fetal losses [47]. However, current research findings on m6A modification in SA/RSA have not reached complete agreement.
Qiu et al. [3] suggested that FTO expression was significantly downregulated in trophoblasts of SA patients with aberrant m6A accumulation, which was also correlated with oxidative stress. The expression of genes involved in immunotolerance, immune cell infiltration, and angiogenesis at the maternal-fetal interface, including human leukocyte antigen (HLA)-G, vascular endothelial growth factor receptor (VEGFR), and matrix metalloproteinase-2 (MMP-2), which bind YTHDF2 and FTO, is also decreased in patients with SA [3]. Moreover, Qin et al. proposed that eukaryotic initiation factor (eIF) 5A can regulate METTL14 expression via interaction with its promoter, which influences the viability, proliferation, and migration of HTR8 cells; the eIF5A/METTL14 pathway was also downregulated in the villi of SA [30]. METTL3/METTL14-catalyzed m6A RNA methylation of lncRNA HZ01 (lnc-HZ01) enhanced its RNA stability, and lnc-HZ01 inhibited the proliferation and invasion of HTR-8/SVneo and Swan 71 cells via the MAX dimerization protein 1 (MXD1)/eIF4E pathway, which might be part of the mechanism of organic pollutant-related abortion [41,42]. These findings indicate tight crosstalk among these m6A modulators.
The effect of ALKBH5 on trophoblast function in RSA remains controversial. Zheng et al. reported that ALKBH5 was expressed at low levels in the EVT of RSA [37]. Knockdown of ALKBH5 in the mouse placenta suppressed fetal and placental weight and decreased the area of the placental labyrinth, which significantly led to fetal abortion in vivo [37]. ALKBH5 activated the transforming growth factor-β (TGF-β) signaling pathway by promoting the expression and phosphorylation of small mothers against decapentaplegic (SMAD)1/5 through erasure of their m6A modification, inducing the expression of matrix metalloproteinase (MMP)-9 and integrin subunit alpha 1 (ITGA1), which can consequently enhance the migration and invasion of HTR8/SVneo cells [37]. However, Li et al. revealed that global mRNA m6A methylation was significantly decreased and the expression of ALKBH5 was increased in villous tissue from patients with RSA [36]. Inhibition of ALKBH5 also significantly promoted the migration and invasive ability of HTR-8/SVneo cells and human villous explants via cysteine-rich angiogenic inducer 61 (CYR61) m6A modification [36]. Therefore, these investigations demonstrated the functional significance of ALKBH5 in the RSA placenta, as in the PE placenta. In addition, METTL3 and IGF2BP3 have also been found to be downregulated in the placenta of RSA [31,38]. METTL3-mediated m6A modification of the coding sequence (CDS) of zinc finger and BTB domain containing 4 (ZBTB4) regulates its RNA stability and expression, which further affects trophoblast invasion [31]. IGF2BP3 is involved in the regulation of inflammatory pathways in trophoblasts, such as the Hippo pathway and the interleukin (IL)-4 and IL-13 inflammatory, tumor necrosis factor (TNF)-α, and nucleotide oligomerization domain (NOD)-like receptor (NLR) signaling pathways [38]. Knockdown of IGF2BP3 in HTR8/SVneo cells decreased the expression of IL-10 by activating the nuclear factor κB (NF-κB) pathway, which further promoted M1 macrophage polarization. An imbalance in the ratio of M2/M1 macrophages ultimately induces abortion [38]. However, the specific regulatory mechanisms remain to be explored.
Hyperglycemia in pregnancy (HIP)
HIP manifests as gestational diabetes mellitus (GDM) and diabetes mellitus in pregnancy (DIP), also known as pregestational diabetes mellitus (PGDM), which is defined as the existence of type 1 (T1DM) or type 2 diabetes mellitus (T2DM) [48]. Approximately 16.7% of live-birth pregnancies are affected by HIP, and HIP is associated with a significantly increased risk of short- and long-term maternal and fetal complications [49]. Growing evidence has shown that m6A and its regulators are critical for the pathogenesis of HIP. METTL14 was downregulated, and m6A levels in total RNA were lower, in GDM placentas. The m6A RNA profile revealed that m6A levels in both the 3′-UTR and the CDS near the stop codons of placental mRNAs were strongly decreased in the GDM group, and the m6A levels of insulin receptor (INSR) and insulin receptor substrate 1 (IRS1), GDM-related genes, were also significantly reduced in GDM, which might be involved in GDM development [25]. Linc00667, which may notably contribute to the development of GDM, has been shown to directly bind YTHDF3 in HTR8/SVneo cells [11]. Gene polymorphisms of IGF2BP2 and FTO might be associated with the risk of GDM or might affect some clinical parameters of newborns in some populations [26]. However, Franzago et al. [27] found no association between placental FTO DNA methylation and GDM. RNA m6A methylation in the placenta with maternal obesity, a condition closely related to diabetes, was also decreased, along with reduced gene expression of WTAP [44]. However, the specific mRNA targets of m6A modification and their mechanisms in the HIP placenta are still unclear, and further investigations remain to be done.
Fetal growth
m6A deposition at different locations may result in specific regulatory functions. Taniguchi et al. [39] proposed that m6A modification near the stop codon and at the 5′-UTR of placental mRNA may play important roles in fetal growth and disease. m6A levels at the 5′-UTR in mRNAs of small for gestational age (SGA) placenta were increased, and m6A levels in the vicinity of stop codons were decreased in large for gestational age (LGA) placenta. In m6A-circRNA epitranscriptomic microarray analysis and MeRIP-qPCR assays, circMPP1 showed high m6A modification levels in both LGA and IUGR samples, indicating a possible correlation between circMPP1 and placental dysfunction [10]. circMPP1 promotes placental inflammation and dysfunction by activating the NF-κB and signal transducer and activator of transcription (STAT)3 pathways. YTHDC1 can suppress the expression of circMPP1 via m6A modification in JEG-3 cells and inhibit the growth and development of neonatal rats stimulated by circMPP1 knockdown [10]. As a potential marker of fetoplacental development, whether circMPP1 could be a new therapeutic target remains to be elucidated.
FTO is highly expressed in the placenta and is associated with increased placental weight, fetal weight, and fetal length [28]. A 2013 study revealed that reduced FTO gene expression in both rat and human placenta was linked with IUGR [29]. The expression of FTO was also decreased in the low birth weight (LBW) placentas of piglets, in which m6A levels were increased [4]. However, placental FTO expression was not altered in human macrosomia or in pregestational maternal high-fat diet (HFD)-induced overgrowth in rat fetuses. Maternal food restriction reduces FTO expression in the placenta of rats, and maternal HFD reduces FTO expression in the female placentas of mice [29]. In the Chinese population, the FTO promoter methylation level at a specific CpG site is negatively associated with birth weight and might influence fetal intrauterine weight gain by reducing adipocyte lipolytic activity [28]. FTO may be involved in the cellular sensing of amino acids through the mechanistic target of rapamycin (mTOR)C1 pathway, which might regulate fetal growth [50]. These studies strongly suggest that FTO-mediated demethylation of m6A may be a vital target of epigenetic regulation, instrumental in fetal growth and development, and the exact mechanism still needs to be explored.
Potential clinical application of m6A modification
Some studies have demonstrated that the expression levels of m6A-related miRNAs, such as miR-423-5p and miR-181a-5p, and of the m6A reader IGF2BP1 are altered in the blood plasma of pregnant women with PE [13,14], indicating that changes in m6A-modified target ncRNAs and m6A modification-related proteins in the circulation could serve as biomarkers for promising noninvasive predictive diagnosis of specific diseases as early as the first trimester. In addition, based on bioinformatics analysis, the m6A-related module could be used for the diagnosis of specific diseases and regarded as a key target of disease mechanism research [11]. The aforementioned studies have explored the various m6A-related mechanisms of several obstetrical syndromes from multiple perspectives; these findings have enriched our understanding of the pathogenesis of these diseases and indicated a way to develop effective treatment strategies. To date, the therapeutic effects of METTL3 and ALKBH5 knockdown have been observed in PE rats [40,43]. Moreover, m6A-associated ncRNAs, as potential targets, can also provide theoretical guidance for clinical treatment. However, the specific mechanisms of noncoding RNAs for use in targeted therapy need to be further confirmed.
In summary, the analysis of m6A modifications could provide multiple potential biotargets or biomarkers for diagnosis and treatment against these trophoblast-related adverse pregnancy outcomes, and more specific mechanisms remain to be further confirmed.
Conclusion
Although m6A modification is currently reported to be associated with the proliferation, invasion, migration, and apoptosis of trophoblastic cells, further studies are needed to elucidate the roles of m6A enzymes and of m6A modification in the biological functions of trophoblasts, which would further delineate the range of RNA epigenetic regulatory patterns in both physiological and pathological pregnancies.
Table 1 .
Summary of current reported m6A studies in the great obstetrical syndromes.
|
2023-07-22T06:17:55.875Z
|
2023-07-20T00:00:00.000
|
{
"year": 2023,
"sha1": "c7d164aab05ede2f5cb6eb865b83d32e1f17637f",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14767058.2023.2234541?needAccess=true&role=button",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "ce20dc0d86ab28909f0fd000e514bfb9ea937908",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
246683920
|
pes2o/s2orc
|
v3-fos-license
|
Metabolomics Signature and Potential Application of Serum Polyunsaturated Fatty Acids Metabolism in Patients With Vitiligo
Vitiligo is a depigmented skin disorder caused by a variety of factors, including autoimmune, metabolic disturbance or their combined effect, etc. Non-targeted metabolomic analyses have denoted that dysregulated fatty acids metabolic pathways are involved in the pathogenesis of vitiligo. However, the exact category of fatty acids that participate in vitiligo development and how they functionally affect CD8+ T cells remain undefined. We aimed to determine the difference in specific fatty acids among vitiligo patients and healthy individuals and to investigate their association with clinical features in patients with vitiligo. Serum levels of fatty acids in 48 vitiligo patients and 28 healthy individuals were quantified by performing ultra-performance liquid chromatography-tandem mass spectrometry. Univariate and multivariate analyses were carried out to evaluate the significance of differences. Moreover, flow cytometry was used to explore the effect of indicated fatty acids on the function of CD8+ T cells derived from patients with vitiligo. We demonstrated that serological level of alpha-linolenic acid (ALA) was markedly upregulated, while that of arachidonic acid (ARA), arachidic acid (AA) and behenic acid were significantly downregulated in patients with vitiligo. Moreover, ALA levels were positively associated with vitiligo area scoring index (VASI) and ARA was a probable biomarker for vitiligo. We also revealed that supplementation with ARA or nordihydroguaiaretic acid (NDGA) could suppress the function of CD8+ T cells. Our results showed that vitiligo serum has disorder-specific phenotype profiles of fatty acids described by dysregulated metabolism of polyunsaturated fatty acids. Supplementation with ARA or NDGA might promote vitiligo treatment. These findings provide novel insights into vitiligo pathogenesis that might add to therapeutic options.
INTRODUCTION
Vitiligo is an autoimmune skin disorder that presents as progressive depigmentation of the skin due to the destruction of epidermal melanocytes caused by abnormal activation of CD8+ T cells (1). Multiple factors, such as metabolic abnormality, oxidative stress and exposure to phenolic compounds, have been related to the pathogenesis of vitiligo (2)(3)(4)(5). Growing evidence suggests that fatty acid metabolism is closely associated with autoimmune diseases, such as psoriasis (6), systemic lupus erythematosus (SLE) (7) and rheumatoid arthritis (RA) (8). Besides, it is reported that the incidence of metabolic syndrome in patients with vitiligo is significantly higher than that in healthy controls (9). However, in-depth knowledge of vitiligo, in particular of the underlying metabolic dysregulation, remains lacking and needs further exploration.
Metabolomics is an emerging field that monitors alterations in the low-molecular-weight metabolites produced by cellular processes; the overall metabolic profile it delineates is widely applicable to investigating the clinical features of various diseases. Recently, metabolome studies based on miscellaneous samples, including serum, plasma, blister fluid and urine, have revealed that metabolic disturbances of amino acids and lipid mediators are involved in the physiological and pathological changes of vitiligo patients (10)(11)(12)(13). However, almost all previous studies relied on non-targeted metabolomics techniques, whose quantitative accuracy needs to be improved. Moreover, most of this research focused only on altered metabolites and dysregulated metabolic pathways, while their specific roles in the development and treatment of these disorders still await further investigation. Therefore, targeted metabolomics methods and confirmatory experiments are of great significance for quantifying altered metabolites accurately and for improving the management of vitiligo patients.
Several previous studies have reported that supplementation with EPA and DHA contributes to the recovery of autoimmune diseases such as RA and SLE (14)(15)(16). Another study disclosed that supplementation with ARA, or with the COX inhibitor aspirin, also protects healthy individuals against diabetes mellitus (17). Moreover, growing evidence supports that omega-3 polyunsaturated fatty acids help alleviate the CD8+ T cell-mediated inflammatory response in various disorders (18,19). Our previous non-targeted metabolomics study also found that fatty acid levels showed the greatest alterations among the metabolites measured. More importantly, the enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis revealed that several fatty acid metabolic pathways were significantly associated with vitiligo development (20). However, which specific fatty acids play a key role, and how they work in vitiligo, remains ill-defined.
In our study, we conducted a targeted metabolomics assay to assess serum fatty acid concentrations, demonstrated the major differentially expressed fatty acids in vitiligo, and further evaluated their correlation with clinical features. Additionally, we investigated the effect of ARA and its metabolic pathway on the activation and effector function of CD8+ T cells.
Study Subjects
Serum for fatty acid profiling was derived from a consecutive sample of 76 participants: 48 vitiligo patients and 28 age- and gender-matched healthy controls. The diagnosis of vitiligo was ascertained and VASI scores were assessed by dermatologists. Corresponding healthy volunteers were recruited from the physical examination center of Xijing Hospital to undergo the same testing as the vitiligo patients. Participants with active autoimmune diseases, malignancies or diabetes mellitus, as well as pregnant and breastfeeding women, were excluded from this study. Subjects who had taken glucocorticoids, antibiotics or other drugs, or followed dietary habits, that might affect fatty acid metabolism in the past three months were also ruled out. Epidemiological data, including gender, age, disease duration, BMI, disease activity, clinical presentation and treatments, were assessed face-to-face and recorded by dermatologists. The VASI scores were evaluated by two senior dermatologists and averaged. Laboratory measurements, including TC, TG, HDL-C and LDL-C, were measured or collected from electronic patient records in the hospital.
Chemicals and Reagents Used for Metabolomics Analysis
Detailed information of chemicals and reagents was described in the Supplementary Data.
Sample Preparation and Fatty Acid Extraction
Serum samples were obtained by centrifugation of fresh blood samples for 5 min at 2500 rpm at 4°C, immediately quenched in liquid nitrogen and stored at -80°C until further analysis. Prior to the assay, serum samples were left at -20°C for 30 minutes and then thawed in an ice bath to diminish sample degradation. All fatty acid standards were prepared as stock solutions at 5.0 mg/mL. Appropriate amounts of the individual stock solutions were mixed to obtain working standard solutions, which were used to produce the calibration curves. The fatty acid extraction followed a previously described protocol (21,22). Briefly, 20 µl of serum or of each standard solution was mixed with 120 µl of cold methanol containing the internal standards in a 96-well plate. The mixed samples were centrifuged at 4000 × g at 4°C for 30 min. An aliquot of 30 µl of the supernatant was transferred to a new 96-well plate. Subsequently, 20 µl of the derivatization reagent 3-nitrophenylhydrazine (3-NPH) was added to each well, the plate was sealed, and derivatization was performed at 30°C for 60 min. After derivatization, 400 µl of ice-cold 50% methanol solution was added, and the plate was stored at -20°C for 20 minutes, followed by centrifugation at 4000 × g at 4°C for 30 minutes. Finally, the supernatant was used for LC-MS analysis.
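As an aside, the back-calculation of concentrations from the working-standard calibration curves can be sketched as an ordinary least-squares fit; the dilution series and peak-area ratios below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def fit_calibration(concs, responses):
    """Least-squares linear fit of response vs. concentration.
    Returns (slope, intercept, r_squared)."""
    slope, intercept = np.polyfit(concs, responses, 1)
    pred = slope * np.asarray(concs) + intercept
    resp = np.asarray(responses, dtype=float)
    ss_res = float(np.sum((resp - pred) ** 2))
    ss_tot = float(np.sum((resp - resp.mean()) ** 2))
    return slope, intercept, 1.0 - ss_res / ss_tot

def quantify(response, slope, intercept):
    """Back-calculate a concentration from a measured response."""
    return (response - intercept) / slope

# Hypothetical working-standard dilution series (conc in ng/mL vs. area ratio)
concs = [10, 50, 100, 500, 1000]
areas = [0.11, 0.52, 1.05, 5.10, 10.20]
slope, intercept, r2 = fit_calibration(concs, areas)
```

In practice this fit would be repeated per analyte, and the quantification limits in Supplementary Table S1 bound the usable range of the curve.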
Analysis of Metabolites by UPLC-MS/MS
Metabolomics profiling was performed on a UPLC-MS/MS platform (Acquity UPLC-Xevo TQ-S; Waters). The samples were randomized and analyzed using an Acquity UPLC BEH C18 1.7 µm VanGuard pre-column (2.1 × 5 mm) and analytical column (2.1 × 100 mm). The column temperature and sample manager temperature were maintained at 40°C and 10°C, respectively. Water with 0.1% formic acid (solvent A) and acetonitrile/IPA (70:30) (solvent B) were used as mobile phases. The flow rate and injection volume were 400 µl/min and 5 µl, respectively. The gradient was scheduled as follows: 0-1 min (50% B), 1-3.5 min (50-78% B), 3.5-12.2 min (78-100% B), 12.2-14.2 min (100% B), 14.2-14.5 min (100-50% B), 14.5-16 min (50% B). The mass spectrometer was operated in negative ion mode with a 2.0 kV capillary voltage. The source and desolvation temperatures were 150°C and 550°C, respectively, and the desolvation gas flow was 1200 L/hr. All data were acquired in negative ion mode. Quality control (QC) samples were prepared by mixing equal volumes of each serum sample (48 vitiligo samples, 28 healthy control samples) and were injected at the beginning and end of each batch run and at regular intervals (after every 14 test samples) throughout the analytical run. Reagent blank samples consisting of high-purity solvents were randomly inserted into the sample queue to serve as an alert to systematic contamination, as well as to wash the column and remove cumulative matrix effects throughout the study.
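For illustration only, the gradient table above can be encoded as a piecewise-linear function of run time, which is a convenient way to double-check the program when transcribing it into instrument software; the segment values are taken directly from the text:

```python
def percent_b(t):
    """Mobile-phase %B at time t (min), piecewise-linear over the
    gradient segments listed in the text (16 min total run)."""
    segments = [  # (t_start, t_end, B_start, B_end)
        (0.0, 1.0, 50, 50),
        (1.0, 3.5, 50, 78),
        (3.5, 12.2, 78, 100),
        (12.2, 14.2, 100, 100),
        (14.2, 14.5, 100, 50),
        (14.5, 16.0, 50, 50),
    ]
    for t0, t1, b0, b1 in segments:
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-16 min run")
```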
Analytical Validation
The method was validated for linearity and quantification limits, reproducibility, and recovery. The detailed information is presented in the Supplementary Data and Supplementary Table S1.
Cell Separation and Culture
Peripheral blood samples were collected in EDTA vacutainer tubes. Peripheral blood mononuclear cells (PBMCs) were separated from the fresh blood samples of vitiligo patients by density-gradient centrifugation with lymphocyte separation medium. For the CCK-8 assay, CD8+ T cells were positively isolated from PBMCs with a magnetic-activated cell sorting CD8+ T Cell Isolation Kit (Miltenyi Biotec) according to the manufacturer's instructions. Cells were cultured in RPMI 1640 (Gibco) supplemented with 10% fetal bovine serum (Gibco) and 1% penicillin-streptomycin solution.
CCK-8 Assay
The effect of ARA on the survival of human naïve CD8+ T cells was measured using the CCK-8 assay according to the manufacturer's instructions. Briefly, the isolated CD8+ T cells were plated (10⁶ cells/mL) in a 96-well U-bottom plate and incubated in the presence of different concentrations (0, 5, 10, 20, 50, 100 µM) of ARA for 48 h. The viability of CD8+ T cells was detected by the CCK-8 assay, and the optical density was measured at 450 nm with a microplate reader.
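The usual CCK-8 readout, background-subtracted viability relative to the untreated control, can be sketched as follows; the OD450 values are hypothetical, and only the dose series mirrors the text:

```python
def viability_percent(od_treated, od_control, od_blank):
    """CCK-8 viability relative to the untreated control:
    (OD_treated - OD_blank) / (OD_control - OD_blank) * 100."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

# Hypothetical OD450 readings for the ARA dose series (uM) used in the text
od_blank, od_control = 0.08, 1.20
doses = [0, 5, 10, 20, 50, 100]
od450 = [1.20, 1.18, 1.15, 1.12, 1.05, 0.62]
curve = {d: round(viability_percent(od, od_control, od_blank), 1)
         for d, od in zip(doses, od450)}
```

With readings like these, viability stays high up to 50 µM and drops sharply at 100 µM, which is the pattern that would justify capping the working concentration at 50 µM.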
Flow Cytometry Analysis
The detailed procedures of flow cytometry analysis are provided in Supplementary Data.
Data Processing and Statistical Analysis
In the LC-MS/MS-based metabolomics analysis, the targeted raw data files were processed using MassLynx software (v4.1, Waters, Milford, MA, USA) to perform peak integration, calibration, and quantitation for each metabolite. R (RStudio) and MetaboAnalyst 5.0 (a web-based tool for metabolomic data analysis, https://www.metaboanalyst.ca/) were used for statistical analyses. To reduce between-sample differences in fatty acid concentration, auto-scaling was performed so that the variables were comparable to each other. For multivariate statistical analysis, PLS-DA was used to visualize the differences between the global metabolic profiles of the given groups, and the RF algorithm was applied to enhance the quality of the multivariate analyses and to avoid the risk of overfitting. For univariate analysis, discrete variables were summarized as percentages and compared between groups using the Chi-square test or Fisher's exact test. The Shapiro-Wilk test was applied to evaluate the distributions of continuous variables. Student's t-test or the Mann-Whitney U test was used to determine the statistical significance of each fatty acid, according to the distribution of the data. Pearson correlation analysis was carried out to clarify the associations between fatty acids and clinical characteristics.
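A minimal Python sketch of two of the steps described above, auto-scaling and the normality-guided choice between Student's t-test and the Mann-Whitney U test, is shown below; the study itself used R and MetaboAnalyst, and the sample values here are hypothetical:

```python
import numpy as np
from scipy import stats

def auto_scale(x):
    """Auto-scaling (unit-variance scaling): mean-center each variable
    and divide by its sample standard deviation, so variables become
    comparable to each other."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def compare_groups(a, b, alpha=0.05):
    """Pick Student's t-test vs. Mann-Whitney U based on the
    Shapiro-Wilk normality test, as described in the text."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue

# Hypothetical fatty-acid values for two groups
scaled = auto_scale([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
test_name, p_value = compare_groups([1.1, 2.0, 2.9, 3.2, 4.1],
                                    [5.0, 6.1, 7.2, 8.0, 9.1])
```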
ROC curves were generated to assess the diagnostic performance of fatty acids with MetaboAnalyst 5.0 and GraphPad Prism 8.0. A support vector machine and 10-fold cross-validation were used to avoid potential overfitting when producing the combined predictive ROC curves. Binary logistic regression was used to evaluate risk factors for vitiligo.
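The cross-validated SVM step can be approximated in Python with scikit-learn as follows; the four-feature matrix below is simulated, not the study's fatty acid data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

def cv_auc(X, y, folds=10, seed=0):
    """AUC computed from 10-fold cross-validated SVM decision scores,
    mirroring the overfitting guard described in the text."""
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    clf = SVC(kernel="linear")
    scores = cross_val_predict(clf, X, y, cv=cv,
                               method="decision_function")
    return roc_auc_score(y, scores)

# Simulated 4-"fatty-acid" features for 28 controls (0) and 48 patients (1)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (28, 4)),
               rng.normal(0.8, 1.0, (48, 4))])
y = np.array([0] * 28 + [1] * 48)
```

Because every decision score comes from a model that never saw that sample, this AUC is a more honest estimate than one fitted on the full data.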
To determine the potentially altered metabolic pathways in vitiligo, pathway analysis was carried out with MetaboAnalyst 5.0. We evaluated the altered fatty acids within each pathway and assessed pathway function through variations at its pivotal junction points.
RESULTS
The anthropometric and clinical features of the study subjects are presented in Supplementary Table S2. Samples for targeted metabolomics analysis were donated by a total of 48 vitiligo patients (24 males and 24 females; median [IQR] age 34.5 [25.75-43.75] years) and 28 healthy controls (11 males and 17 females; median [IQR] age 35 [26.5-40.75] years). The average VASI of the patients with vitiligo was 1.76. There were no significant differences in gender, age, body mass index (BMI), total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C) or low-density lipoprotein cholesterol (LDL-C) between the vitiligo group and the healthy control group.
Fatty Acid Metabolic Profiles of Vitiligo Patients Differed From Those of Healthy Controls
To clarify the connection between fatty acid metabolism and vitiligo, targeted metabolomics based on ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was applied to serum samples from 24 active vitiligo patients, 24 stable vitiligo patients and 28 healthy controls. The serological concentrations of the 19 fatty acids in the cohort are shown in Supplementary Figure S1. Multivariate statistical analysis was carried out on the fatty acid metabolic profiles to determine whether there were characteristic alterations that distinguished vitiligo patients from healthy controls. In partial least squares discriminant analysis (PLS-DA), although principal components 1-5 explained 66.5% of the variance of the LC-MS/MS data, there was no clear clustering that could distinguish vitiligo patients from healthy controls (Figure 1A). Moreover, the low R2 and Q2 values (R2 = 0.36, Q2 = 0.06), which both indicate the quality of the PLS-DA model, suggested that the model was overfitted. Subsequently, to improve the diagnostic accuracy based on the fatty acid metabolic profiles, the random forest (RF) algorithm was applied to distinguish vitiligo patients from healthy individuals, with an out-of-bag error of 0.276 for categorization (Figure 1B). The RF analysis generated a series of fatty acids ranked by their importance to the classification, and the top 15 fatty acids are shown in Figure 1C. Metabolites, especially ARA, were recognized as contributing to the separation of the groups. In summary, the fatty acids that explain the difference in metabolic profiles between vitiligo patients and their healthy counterparts were determined.
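The RF step, an out-of-bag error estimate plus an importance ranking, can be sketched with scikit-learn as below; the study used R/MetaboAnalyst, and the data here are simulated so that only the first, "ARA"-like, feature is informative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_rank_features(X, y, names, seed=0):
    """Fit a random forest with out-of-bag scoring; return the
    out-of-bag error and the feature names ranked by importance."""
    rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=seed)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return 1.0 - rf.oob_score_, [names[i] for i in order]

# Simulated data: 28 controls, 48 patients; only column 0 separates groups
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (76, 3))
X[28:, 0] += 1.5
y = np.array([0] * 28 + [1] * 48)
oob_error, ranked = rf_rank_features(X, y, ["ARA", "AA", "BA"])
```

The out-of-bag error plays the same role as the 0.276 value reported above: each tree is scored only on the samples it never saw during bootstrapping.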
To further clarify which specific fatty acids altered the metabolic profile of vitiligo patients, univariate statistical analysis was performed to identify differentially expressed fatty acids. As a result, the serological level of ALA was found to be the most significantly elevated in vitiligo patients compared with healthy controls, while the levels of ARA, AA and behenic acid (BA) tended to diminish in the vitiligo group (Figure 2 and Supplementary Table S3). Additionally, the binary logistic regression presented in Supplementary Table S4 revealed that ARA and AA were identified as protective factors, while ALA was evaluated as a risk factor. These findings further specified the exact fatty acids that might be closely associated with vitiligo.
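A hedged sketch of the binary logistic regression step, reading odds ratios as protective (OR < 1) versus risk (OR > 1) factors, with simulated data in place of the study cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def odds_ratios(X, y, names):
    """Fit binary logistic regression (large C approximates an
    unpenalized fit) and return per-variable odds ratios:
    OR < 1 suggests a protective factor, OR > 1 a risk factor."""
    model = LogisticRegression(C=1e6, max_iter=2000).fit(X, y)
    return dict(zip(names, np.exp(model.coef_[0])))

# Simulated cohort: higher "ARA" lowers disease odds, higher "ALA" raises them
rng = np.random.default_rng(2)
n = 200
ara, ala = rng.normal(size=n), rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-1.0 * ara + 1.0 * ala)))
y = (rng.random(n) < p).astype(int)
ors = odds_ratios(np.column_stack([ara, ala]), y, ["ARA", "ALA"])
```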
Furthermore, we asked whether some fatty acids could distinguish active vitiligo from stable vitiligo. Neither the PLS-DA model nor the RF algorithm showed clear differences in any of the measured fatty acids between progressive and stable vitiligo (Supplementary Figures S2A, B), indicating that there were no marked differences in fatty acids among these subgroups.
ALA Is Positively Correlated With VASI
Factors such as sex, age, BMI and TG can influence fatty acid concentrations in body fluids; thus, the associations between fatty acids and clinical characteristics should be considered. Although strong correlation coefficients were found between long-chain saturated fatty acids (LC-SFAs) and other fatty acids, notably, a strong correlation (Pearson correlation analysis, r = 0.523, P < 0.001) was discovered between VASI and ALA, as presented in Figure 3. Moreover, an even stronger correlation coefficient was found between VASI and ALA (r = 0.692, P < 0.001). These results indicated that the concentrations of some fatty acids, such as ALA, might partly reflect the severity of vitiligo.
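The Pearson correlation step can be reproduced with SciPy in a few lines; the paired ALA/VASI values below are hypothetical and chosen only to illustrate a strong positive correlation like the one reported:

```python
from scipy import stats

# Hypothetical paired serum ALA (umol/L) and VASI values for 8 patients
ala = [1.2, 1.5, 1.9, 2.1, 2.6, 3.0, 3.2, 3.8]
vasi = [0.3, 0.5, 0.9, 1.1, 1.4, 2.0, 1.8, 2.9]
r, p = stats.pearsonr(ala, vasi)  # r close to 1 indicates strong linearity
```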
ARA Is the Most Characteristic Biomarker and Arachidonic Acid Metabolism Is the Most Enriched Metabolic Pathway in Vitiligo
The diagnostic efficacy of the significantly altered fatty acids was assessed with MetaboAnalyst 5.0. The area under the curve (AUC) of ARA was 0.709 according to the receiver operating characteristic (ROC) curve, while the AUCs of the other differentially expressed fatty acids, BA, AA and ALA, were 0.657, 0.616 and 0.600, respectively (Figure 4A). To improve the diagnostic accuracy for vitiligo, a combined predictive model was set up using the four differential fatty acids, in which the AUC was 0.817 (95% confidence interval (CI): 0.719-0.915) with a sensitivity of 78.6% and a specificity of 72.9% (Figure 4B). To make the predictive model more reliable and avoid overfitting, we adopted machine learning models with 10-fold cross-validation and a support vector machine (SVM); the AUC values of these models remained 0.789 and 0.763, respectively (Figure 4B). These data indicated that fatty acid metabolic profiles, especially that of ARA, could be used to differentiate vitiligo patients from healthy individuals. In addition to metabolites, metabolic pathways are also critical in characterizing the metabolic profile of a disease. According to the KEGG metabolic database, disturbed arachidonic acid metabolism, alpha-linolenic acid metabolism and linoleic acid metabolism were markedly enriched (Supplementary Figure S3). Moreover, the FDR < 0.05 for arachidonic acid metabolism denoted that it plays a critical role in patients with vitiligo.
ARA Inhibits the Proliferation and Activation of CD8+ T Cells In Vitro
Since ARA was significantly altered in vitiligo and appeared to act as a protective factor, we speculated that ARA supplementation might ameliorate the autoimmunity in vitiligo. Therefore, we focused on the influence of ARA on CD8+ T cells from patients with vitiligo in vitro. First, the cell cytotoxicity assay indicated that there were no significant alterations in cell viability at ARA concentrations below 50 µM (Figure 5A). Thus, we chose 50 µM as the upper limit concentration of ARA for our in vitro experiments.
Our results showed that ARA inhibited the proliferation of human CD8+ T cells stimulated with CD3/CD28 monoclonal antibody-coated magnetic beads in a concentration-dependent manner, and the proliferation of CD8+ T cells was almost completely suppressed at 50 µM (Figure 5B). Furthermore, the expression of CD69 on CD8+ T cells was inhibited in a concentration-dependent manner at concentrations higher than 20 µM (Figure 5C). These results indicated that ARA could inhibit the activation and proliferation of CD8+ T cells in a concentration-dependent manner.
ARA and 5-LOX Inhibitor Suppress the Expression of Effector Molecules of CTLs
We next clarified the effect of ARA on the expression of CD8+ T cell effector molecules. The results disclosed that ARA significantly suppressed the expression of IFN-γ, granzyme B and perforin (Figures 6A-C). Thus, ARA might inhibit the effector function of CD8+ T cells.
Considering that the cyclooxygenase (COX) and lipoxygenase (LOX) pathways both participate in arachidonic acid metabolism, we further employed celecoxib, a COX-2 inhibitor, and NDGA, a 5-LOX inhibitor, to specify the signaling pathway that contributes to the ARA-mediated functional suppression of CD8+ T cells.
Our results showed that NDGA, but not celecoxib, could inhibit the expression of IFN-γ, granzyme B and perforin (Figures 6A-C and Supplementary Figures S4A-C). Furthermore, we found that the combined administration of ARA and NDGA performed better than either agent alone in inhibiting the expression of IFN-γ, granzyme B and perforin in CD8+ T cells activated by CD3/CD28 monoclonal antibody-coated magnetic beads. Collectively, these results indicated that ARA and a 5-LOX inhibitor might inhibit the expression of effector molecules in CD8+ T cells in vitro.
DISCUSSION
In the present study, we found a significant increase of ALA and a decrease of ARA in vitiligo patients, and we showed that supplementation with ARA or NDGA might suppress self-reactive CD8+ T cells, as characterized by inhibition of the proliferation and activation of CD8+ T cells and lower expression of IFN-γ, granzyme B and perforin. Circulating free fatty acids such as ALA, EPA and ARA are reportedly related to several autoimmune diseases. We evaluated the concentrations of free fatty acids and observed increased levels of ALA and decreased levels of AA and BA in the serum of patients with vitiligo (P < 0.05). Unlike our results, Siro et al. discovered that the serum level of ALA was significantly reduced in patients with active vitiligo, while the levels of AA and BA were not markedly changed (23). These contradictory results might be attributed to either the limited sample sizes or the distinct measurement methods of the different studies; the targeted metabolomics methods used in our study are more accurate than non-targeted data, even after correction for signal drift (24).
Besides, a downregulated concentration of serum ARA in vitiligo was observed in our study, which parallels previous reports (23). The main sources of ARA in the human body are synthesis from linoleic acid and release by phospholipases (PLA2, PLC and PLD), together with dietary intake, while ARA is disposed of by metabolism into ARA-derived lipid mediators, such as prostaglandins (PGs), leukotrienes and epoxyeicosatrienoic acids (EETs). The reason for the decrease in ARA is still unclear. We surmise that metabolic exhaustion might provide an explanation, considering the reportedly elevated levels of PGs (PGE2, PGD2 and PGF2α) in vitiligo lesions and of leukotrienes in urine (11,13), both of which are metabolites of ARA. It is worth noting that abnormality of phospholipases in patients with vitiligo has not been previously reported and needs further study.
In the present study, we also explored the relationship between the concentrations of serum fatty acids and vitiligo severity and activity. We observed that ALA was positively correlated with disease severity (VASI scores), suggesting the potential of serum ALA as a new biomarker for assessing vitiligo severity. Further pathway analysis disclosed that alpha-linolenic acid metabolism was critical to vitiligo pathogenesis. Previous global metabolomics profiling revealed that alpha-linolenic acid metabolism was significantly upregulated in RA, and the authors suggested that long-term inflammation and nonsteroidal anti-inflammatory drug intake may be responsible for the disturbed alpha-linolenic acid metabolism (25). Our study further suggests that alpha-linolenic acid metabolic disturbance might be associated with the chronic inflammatory status in vitiligo. Physiologically, ALA can be transformed through its metabolic pathway into bioactive mediators such as EPA and DHA. Thus, the observed positive correlation between ALA and VASI might corroborate previous reports that EPA was inversely associated with the severity of RA and of experimental autoimmune encephalomyelitis, on account of a speculated low biotransformation rate of ALA in vitiligo (26,27). Besides, several clinical experiments have disclosed that supplementation with EPA, DHA and their metabolites, resolvins, can contribute to the recovery of autoimmune diseases such as diabetes and multiple sclerosis (28)(29)(30), indicating that the role of the metabolites of ALA in vitiligo still needs further exploration. Our study also revealed that there were no differences in free fatty acids between active vitiligo and stable vitiligo. The association between fatty acid levels and vitiligo activity still needs further investigation owing to the relatively small sample sizes in our research.
Our study also found that ARA was a sensitive biomarker in the predictive model of vitiligo. Although the diagnosis of vitiligo is relatively definitive by clinical observation and Wood's lamp examination, ARA, along with other unsaturated fatty acids, could help reflect inflammation and autoimmunity status (31,32). Previous studies confirmed that ARA was strongly associated with the risk of chronic inflammatory diseases and diabetes-associated autoimmunity (33)(34)(35). Our study revealed the role of ARA in the pathogenesis and treatment of vitiligo.
Vitiligo is an autoimmune skin disorder mainly mediated by autoreactive CD8+ T cells that lead to skin depigmentation. Our results indicated that ARA and its metabolic pathway are closely associated with autoimmune disease. A preceding study reported that ARA, the terminal metabolite of linoleic acid, could suppress CD8+ T cell proliferation and activation in healthy individuals (36), which supports our findings. Additionally, several studies have shown that ARA can inhibit the production of IFN-γ in various conditions, such as metal-induced allergic diseases and neuroinflammatory disorders (37,38). In the present study, we demonstrated for the first time that ARA could inhibit the secretion of granzyme B and perforin.
The cyclooxygenase (COX) and lipoxygenase (LOX) metabolic pathways are regarded as the most important pathways regulating arachidonic acid metabolism, and metabolites of the COX-2 and 5-LOX pathways have definite immunomodulatory effects. A recent study found that celecoxib was effective in relieving swelling and inflammation, in accordance with inhibition of IFN-γ production, in metal-induced allergic disease (39). However, our results showed that celecoxib did not influence the expression of IFN-γ, granzyme B or perforin generated by CD8+ T cells from vitiligo patients. This may partly result from PGs, the metabolites of the COX-2 pathway, because low concentrations of PGE2 potentiate Th1 cell differentiation, while high concentrations act in the opposite direction (40). In contrast, NDGA, a well-known 5-LOX inhibitor, significantly inhibited the expression of IFN-γ, granzyme B and perforin in CD8+ T cells derived from vitiligo patients. In line with our findings, NDGA was
previously reported to inhibit the IFN-γ-induced inflammatory response in vitro (41). In addition, those results suggest that NDGA regulates IFN-γ-mediated inflammation through mechanisms unrelated to the inhibition of 5-LOX metabolites. Inhibition of the 5-LOX pathway can increase PGE2 generation by upregulating the COX-2 pathway (42), and higher concentrations of PGE2 can inhibit the secretion of IFN-γ, playing a protective role in vitiligo. Therefore, inhibition of the 5-LOX pathway might be therapeutically exploitable in vitiligo treatment.
Despite these findings, our study has several limitations. First, although metabolomics can provide a comprehensive understanding of the disease, it still shows some variability and inconsistency due to the limited sample size (43), and larger population studies should be carried out in the future to overcome the limited statistical power. Second, our findings were derived from in vitro experiments; hence, the effects of ARA and the 5-LOX inhibitor on CD8+ T cells, and their capability to promote repigmentation in vitiligo treatment, still need further validation in vivo.
In summary, we performed a targeted metabolomics study and provided evidence that metabolic abnormalities of fatty acids, especially ALA and ARA, are tightly associated with vitiligo. Moreover, ALA and ARA might be used for assessing vitiligo severity and predicting the disease. Finally, we found that ARA and NDGA could inhibit the function of CD8+ T cells, which might offer a novel strategy for the treatment of vitiligo.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee of Xijing Hospital. The patients/ participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
ZY, JC, PD, CL, and SL came up with the design of the experiment. All authors contributed to the performance of
Mitochondrial Mutations in Ethambutol-Induced Optic Neuropathy
Background: Ethambutol-induced optic neuropathy (EON) is a well-recognized ocular complication in patients who take ethambutol as a tuberculosis treatment. The aim of the current study was to investigate the presence of mitochondrial mutations, including OPA1 and Leber's hereditary optic neuropathy (LHON)-mitochondrial DNA (mtDNA) mutations, in patients with EON and to determine their effect on the clinical features of these patients. Methods: All 47 patients underwent clinical evaluations, including best-corrected visual acuity, fundus examination, and color fundus photography; 37 patients were then followed up over time. Molecular screening methods, including PCR-based sequencing of the OPA1 gene and of LHON-mtDNA mutations, together with targeted exome sequencing, were used to detect mutations. Results: We detected 15 OPA1 mutations in 18 patients and two LHON-mtDNA mutations in four patients, for an overall mutation detection rate of 46.8%. The mean presentation age was significantly younger in the patients with mitochondrial mutations (27.5 years) than in those without mutations (48 years). Fundus examination revealed a greater prevalence of optic disc hyperemia in the patients with mutations (70.5%) than in those without mutations (48%). Half of the patients with mutations and 91% of the patients without mutations had improved vision. After adjusting for confounders, logistic regression revealed that the patients with optic disc pallor on the first visit (p = 0.004) or with mitochondrial mutations (p < 0.001) had a poorer visual prognosis. Conclusion: Our results indicate that carriers of OPA1 mutations might be more vulnerable to the toxicity of EMB and thus to developing EON.
INTRODUCTION
Ethambutol (EMB), one of the first-line drugs used to treat tuberculosis, works by inhibiting the arabinosyl transferase of mycobacteria (Belanger et al., 1996). However, its use is associated with a well-recognized ocular complication, ethambutol-induced optic neuropathy (EON), which is characterized by blurring of vision, dyschromatopsia, and central or cecocentral scotoma (Carr and Henkind, 1962). The ocular symptoms usually appear after 7 to 8 months of treatment with EMB, and the development of EON is both time- and dose-dependent (Fraunfelder et al., 2006; Lee et al., 2008). The incidence of EON is about 1% in patients prescribed an EMB dose of 15 mg/kg/d (Santaella and Fraunfelder, 2007; Yang et al., 2016). The risk factors for the occurrence of EON include older age, low body weight, and renal dysfunction (Chen et al., 2012). About 30 to 54% of patients have varying degrees of vision recovery at 3 to 7 months after EMB discontinuation (Lee et al., 2008; Ezer et al., 2013); however, some patients suffer severe and permanent visual loss, even without the known risk factors. Therefore, other predisposing factors might exist for EON.
At present, the pathophysiology of EON still remains unclear. Several previous studies have indicated that EMB disrupts energy metabolism and the network structure of mitochondria by inducing severe vacuole formation and by decreasing membrane potential (Chung et al., 2009;Guillet et al., 2010). Dotti et al. (1998) first described the m.G11778A mitochondrial DNA (mtDNA) mutation in an EON patient, pointing to a relationship between mitochondrial mutation and the development of EON. Leber's hereditary optic neuropathy (LHON) is caused by mtDNA mutations that disrupt complex I activity of the mitochondrial respiratory chain and ATP synthesis . Autosomal dominant optic atrophy (ADOA) is another form of mitochondrial optic neuropathy that is caused by mutations of the OPA1 gene (Delettre et al., 2000). The OPA1 gene encodes a dynamin-related GTPase, which is localized at the inner membrane of mitochondria and plays a role in mitochondrial fusion (Kao et al., 2015).
Both LHON and ADOA have a common pathophysiological outcome, which is mitochondrial energy deficiency and retinal ganglion cell apoptosis (Kao et al., 2015;Zhang et al., 2018). The current literature reports 8 patients carrying LHON mtDNA mutations and 2 patients harboring OPA1 mutations who developed optic neuropathy during the use of EMB (Dotti et al., 1998;De Marinis, 2001;Hwang et al., 2003;Chowdhury et al., 2006;Ikeda et al., 2006;Guillet et al., 2010;Pradhan et al., 2010;Seo et al., 2010). Most of these studies are case reports, and the final clinical diagnosis is controversial for those patients. The relationship between the mitochondrial mutations and the clinical features of EON has not been fully studied.
In the present study, we investigated the presence of mitochondrial mutations (OPA1 and LHON-mtDNA) in a cohort of 47 patients with EON. Genetic analysis identified disease-causing gene mutations in 22 patients, and we examined the effects of these mutations on the patients' clinical features.
Patients
This study was approved by the Medical Ethics Committee of Beijing Tongren Hospital. All investigations followed the tenets of the Declaration of Helsinki. Clinical data were retrospectively collected from outpatient and hospitalized patients diagnosed with EON from 2011 to 2018 at the Department of Ophthalmology in Chinese People's Liberation Army General Hospital and at the Genetics Laboratory of the Beijing Institute of Ophthalmology at Beijing Tongren Hospital. Blood was taken at initial presentation for genetic analysis with the patients' or their parents'/guardians' consent. We recruited a total of 47 unrelated patients, and 37 patients were followed up either by revisit evaluation (two patients) or by telephone surveys (35 patients). All patients underwent ophthalmological evaluations, including the best corrected visual acuity (BCVA), slit-lamp biomicroscopy, fundus examination, and color fundus photography. Some patients had color perception tests (Lanthony 15-Hue, Farnsworth D-15, or Ishihara color plates), Octopus or Humphrey visual field tests, and spectral domain OCT examinations. We extracted genomic DNA from peripheral blood leukocytes from all patients and from available family members, following the manufacturer's instructions (Vigorous, Beijing, China).
The inclusion criteria for EON were based on previous guidelines (Fraunfelder et al., 2006;Lee et al., 2008). The diagnosis had to satisfy the major criterion and at least two of the minor criteria. The major criterion is vision loss appearing only after taking EMB and within 2 months after discontinuation of the drug. The minor criteria are: (1) abnormal results on color perception tests, and (2) central, paracentral, or cecocentral scotoma or temporal hemianopia on visual field examinations, and (3) optic disc hyperemia or pallor on color fundus photography. Exclusion criteria include optic neuritis, glaucoma, and other retinal diseases.
PCR-Based Sequencing of the OPA1 Gene and Leber's Hereditary Optic Neuropathy-mtDNA

All coding regions of the OPA1 gene and 19 primary LHON-mtDNA mutations were amplified by PCR in 36 patients. The primer sequences and the product lengths for amplification were described previously (Chen et al., 2014). The 19 primary LHON-mtDNA mutations included m. and m.14568C > T. Purified PCR products were sequenced with an ABI PRISM 3730 DNA sequencer (Applied Biosystems, Foster City, CA, United States). Sequencing data were compared with the GenBank sequence for the OPA1 gene (NM_015560) and the mtDNA sequence (AC_000021.2).
Targeted Exome Sequencing
Eleven patients were investigated by targeted exome sequencing (TES) with a capture panel including 194 known neuro-ophthalmological genes (Supplementary Table 1). The Illumina library preparation and capture experiments were performed as previously reported (Sun et al., 2018). Briefly, genomic DNA was fragmented by endonuclease digestion and used to capture the targeted genomic sequences. The amplicon-based enrichment library was sequenced on an Illumina NextSeq 500 (Illumina, Inc., San Diego, CA, United States). After removing the sequencing adapters, low-quality reads, and duplicated reads, the high-quality reads were aligned with the reference human genome (hg19) using the Burrows-Wheeler Aligner. Single nucleotide polymorphisms and insertions or deletions were called using the Genome Analysis Toolkit HaplotypeCaller.
Bioinformatics Analysis
The potential functional impacts of missense mutations were evaluated with Polyphen-2, Mutation Taster, and SIFT. The effect of splicing mutations was analyzed with NetGene2 and BDGP. The allele frequency of the variants was confirmed in the 1,000 Genomes Project and ExAC. Cosegregation analysis was performed in available family members to verify the suspected mutations. We classified the variants into pathogenic, likely pathogenic, uncertain significance, likely benign, and benign according to the guidelines published by the American College of Medical Genetics and Genomics (ACMG) (Richards et al., 2015).
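To illustrate the allele-frequency filtering step described above, here is a minimal sketch. The record layout and the `max_pop_af` field are hypothetical stand-ins for the population frequencies drawn from databases such as the 1,000 Genomes Project and ExAC, and the 0.001 cutoff is an assumed threshold, not one stated in the text:

```python
def keep_rare_variants(variants, af_cutoff=0.001):
    """Retain variants whose highest population allele frequency is at or below the cutoff."""
    # A variant absent from the population databases is treated as frequency 0.0
    return [v for v in variants if v.get("max_pop_af", 0.0) <= af_cutoff]

# Hypothetical variant calls for illustration only
calls = [
    {"id": "OPA1:c.2708_2711delTTAG", "max_pop_af": 0.0},  # absent from databases
    {"id": "common_polymorphism", "max_pop_af": 0.12},     # too common to be causal
]
rare = keep_rare_variants(calls)
print([v["id"] for v in rare])  # only the OPA1 variant survives filtering
```

Variants passing this filter would then go on to in silico prediction, cosegregation analysis, and ACMG classification as described.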
Statistical Analysis
We converted the Snellen ratios into logarithm of the minimum angle of resolution (logMAR) values for statistical purposes. LogMAR values of 0, 1.0, and 2.0 are equal to a Snellen vision of 1.0, 0.1, and counting fingers, respectively (Holladay, 1997). The Wilcoxon rank sum test and Pearson Chi-square or Fisher's exact test were used to analyze the quantitative and categorical data, respectively. The Kruskal-Wallis test or multivariate logistic regression was used to analyze correlations. We performed all statistical analyses using SPSS version 22 software (IBM Corporation, New York, United States). The statistical significance level was 5%.
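The Snellen-to-logMAR conversion described above is a base-10 logarithm; a minimal sketch, treating counting fingers as a decimal acuity of 0.01 per the equivalences given by Holladay (1997):

```python
import math

def snellen_to_logmar(decimal_acuity):
    """Convert decimal Snellen acuity to logMAR (Holladay, 1997)."""
    # logMAR = -log10(decimal acuity); a lower logMAR value means better vision
    return -math.log10(decimal_acuity)

print(snellen_to_logmar(0.1))   # 1.0  (Snellen 0.1)
print(snellen_to_logmar(0.01))  # 2.0  (counting fingers, approximated)
```

Snellen 1.0 maps to logMAR 0, matching the three anchor points quoted in the text.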
Mitochondrial Mutation Detection Rate and Related Mutations
We detected OPA1 and LHON-mtDNA mutations in 22 of the 47 patients with EON, for an overall mutation detection rate of 46.8%. Mutations in 15 patients were detected by Sanger sequencing and mutations in 7 patients were detected by TES (Supplementary Table 2). The average coverage of the TES was 99.8%. The average sequencing depth was 288X. About 99% of the data had a depth of 10X or more.
We identified 15 distinct mutations of the OPA1 gene in 18 patients (Supplementary Figure 1), for a detection rate of 38.3%. According to the ACMG guidelines, 13 mutations were defined as pathogenic and two mutations were defined as likely pathogenic (Table 1). Of these mutations, 8 were newly detected in the current study. These mutations included five (33.3%) nonsense, four (26.7%) frameshift indels, three (20%) splicing defects, two (13.3%) missense, and one (6.7%) large deletion mutation (Table 1). The most common mutation was c.2708_2711delTTAG (p.V903Gfs * 3), with an allele frequency of 22.2% (4/18); the remaining 14 mutations were detected only once. The eight novel mutations included three frameshift indels, two missense, one splicing defect, one nonsense, and one large deletion. None of these novel mutations were observed in the public databases. The two missense mutations were predicted as disease-causing by three in silico analysis programs (Table 1).
We detected the m.11778G > A mutation in two patients and the m.14484T > C mutation in two patients. The detection rate of LHON-mtDNA mutations was 8.5%.
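As a sanity check, the detection rates and the mutation-type proportions follow directly from the carrier and mutation counts stated above:

```python
total_patients = 47
opa1_carriers = 18   # 15 distinct OPA1 mutations found in 18 patients
lhon_carriers = 4    # m.11778G>A in two patients, m.14484T>C in two

print(f"OPA1 detection rate: {opa1_carriers / total_patients:.1%}")        # 38.3%
print(f"LHON-mtDNA detection rate: {lhon_carriers / total_patients:.1%}")  # 8.5%
print(f"Overall: {(opa1_carriers + lhon_carriers) / total_patients:.1%}")  # 46.8%

# Breakdown of the 15 distinct OPA1 mutations by type
mutation_types = {"nonsense": 5, "frameshift indel": 4, "splicing defect": 3,
                  "missense": 2, "large deletion": 1}
n_types = sum(mutation_types.values())
for kind, count in mutation_types.items():
    print(f"{kind}: {count / n_types:.1%}")
```

Note that 4/15 and 1/15 round to 26.7% and 6.7%, respectively.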
Demographics and Clinical Characteristics of All Patients
The 47 patients in the current study included 31 males and 16 females, for a male-to-female ratio of 1.9:1 (Supplementary Table 2). Of these patients, seven had a family history of optic neuropathy, two carrying an OPA1 mutation and four harboring an mtDNA mutation. The mean age of presentation was 39 years (range 16-74 years). The mean daily dose of EMB was 12.9 mg/kg (range 3.8-29.2 mg/kg) and the mean medication duration was 4 months (range 1-24 months). All patients had a major complaint of blurred vision, and 41 (87.2%) of them experienced the visual impairment simultaneously in both eyes. The mean course of the disease, defined as the time interval from the vision loss to the first visit, was 2 months (range 0.5-24 months). Eight patients had a complaint of ocular pain, numbness of a lower extremity, or hearing loss. Their mean BCVA was 1.0 (logMAR; range 0.5-1.3), which was not correlated with their daily EMB dose (p = 0.646), their medication duration (p = 0.099), or the course of their disease (p = 0.939).
Fundus examination revealed a symmetrical fundus appearance in 43 patients (91.5%). Of the 94 eyes, 46 (48.9%) showed optic disc hyperemia and retinal nerve fiber layer (RNFL) swelling, 15 (16.0%) had temporal disc pallor, nine (9.6%) presented with nasal disc hyperemia and temporal disc pallor, four (4.3%) showed total disc pallor, and 20 (21.3%) presented a normal optic disc appearance (Figure 1). Unilateral retinal hemorrhage was observed in four patients (Figure 2). The result of logistic regression showed that the patients with optic disc pallor had a longer course of disease compared with patients with optic disc hyperemia or with normal optic disc appearance (p < 0.001).
Of the 33 patients (61 eyes) who had a color perception test, 23 eyes (37.7%) presented with red/green color vision impairment, 22 eyes (36.1%) showed total color blindness, eight eyes (13.1%) presented with yellow/blue color vision impairment, and eight eyes (13.1%) showed normal color perception. Of the 33 patients (66 eyes) who had a visual field examination, 31 eyes (47%) presented with central scotoma, 23 eyes (34.8%) showed cecocentral scotoma, six eyes (9.1%) presented with paracentral scotoma, and six eyes (9.1%) showed temporal hemianopia. The 37 patients were followed up over time (range 1-87 months), with a mean follow-up time of 15 months. Patient E005, carrying the OPA1 mutation p.R52X, and patient A3124, without mutations, underwent revisit examinations. After 4 years of follow-up, the BCVA of patient E005 improved from 0.02 in the right eye and 0.04 in the left eye to 0.1 in both eyes. The RNFL thickness at the temporal side of both optic discs was below the normal limits (Figure 3). After 11 months of follow-up, the BCVA of patient A3124 improved from 0.05 in the right eye and 0.03 in the left eye to 0.5 and 0.3, respectively. Fundus photography showed that the optic disc hyperemia had resolved at follow-up in both eyes (Figure 3). Of the remaining patients, 26 stated their VA had improved, eight reported their VA was unchanged, and one said his VA had worsened. None of the patients whose VA improved underwent cataract surgery or another intervention that would affect the assessment of visual outcome. The mean recovery time of the VA was 4 months (range 1-12 months). After adjusting for confounders, the logistic regression revealed that the patients with optic disc pallor on the first visit (p = 0.004) and the patients with mitochondrial mutations (p < 0.001) had a poorer vision prognosis (Supplementary Table 3).
Comparison of Patients With Mitochondrial Mutations and Without Mutations
To simplify the description and statistical analysis, we divided the 47 patients into two groups: group 1 consisted of the patients carrying mitochondrial mutations and group 2 included the patients without mitochondrial mutations. We compared the differences in demographics, EMB medication (Table 2), and clinical characteristics (Table 3) between the two groups. The mean age at presentation was significantly younger in group 1 (27.5 years) than in group 2 (48 years). The percentage of patients with a family history was statistically higher in group 1 (27%) than in group 2 (4%). The percentage of eyes with optic disc hyperemia was higher in group 1 than in group 2. The visual outcome was better for group 2 than for group 1. Among the nine patients whose VA did not improve, six patients (66.7%) carried OPA1 mutations and one patient (11.1%) carried the m.11778G > A mutation.
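For 2x2 comparisons such as the family-history difference between the groups (6 of 22 mutation carriers vs. 1 of 25 patients without mutations), a two-sided Fisher's exact test can be computed directly from the hypergeometric distribution. This is a generic sketch of the test, not the SPSS procedure actually used in the study:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1 = a + b          # size of group 1
    col1 = a + c          # total number of "positive" subjects

    def p_table(k):
        # Hypergeometric probability of a table with k positives in group 1
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Family history: 6/22 in group 1 (mutation carriers) vs. 1/25 in group 2
p = fisher_exact_two_sided(6, 16, 1, 24)
print(f"p = {p:.3f}")  # significant at the 5% level
```

The small tolerance factor guards against floating-point ties when deciding which tables count as "as extreme" as the observed one.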
DISCUSSION
The current study investigated the mitochondrial mutations in 47 unrelated patients with EON and described the clinical characteristics of the patients. We identified OPA1 mutations in 18 (38.3%) patients and LHON-mtDNA mutations in four (8.5%) patients. EON is a dose-dependent form of toxic neuropathy related to several other risk factors, such as renal dysfunction and older age (Santaella and Fraunfelder, 2007; Chen et al., 2012; Yang et al., 2016). In the current cohort, 91.5% of the patients with EON had taken low-dose EMB (≤ 15 mg/kg/day), 95.7% of the patients had normal renal function, and 89.4% of the patients were younger than 60 years; therefore, the risk of developing ocular toxicity was relatively low in our cohort. Our results suggested that mitochondrial genetic variations, and especially OPA1 mutations, are major predisposing factors for the occurrence of toxic optic neuropathy.

FIGURE 1 | Five kinds of disc appearances and OCT images in five patients with EON. (A) The disc appearance and the RNFL thickness were normal. (B) The optic discs were hyperemic. The superior, temporal, and inferior RNFL was thickened. (C) The border of the nasal discs was blurred and the temporal discs were pale. The temporal RNFL thickness was thinner than normal. (D) The temporal discs were pale and the temporal RNFL became thin. (E) The optic discs were pale, with the inferior border of the disc blurred in the left eye.
Mutations of the OPA1 gene are responsible for 50-70% of ADOA (Cohn et al., 2007; Yu-Wai-Man et al., 2010), which is the most common form of inherited optic neuropathy. OPA1-related ADOA is usually a mild and slowly progressive disorder (Cohn et al., 2007). Typically, patients suffer an insidious, symmetrical, and progressive visual defect in their childhood; however, the severity of visual impairment is highly variable (Cohn et al., 2007). About 10-20% of mutation carriers are "asymptomatic," as they have a normal visual acuity or only a subtle visual disturbance (Cohn et al., 2007; Yu-Wai-Man et al., 2010). In the current cohort, we identified different kinds of mutations of the OPA1 gene, but the type and location of these mutations were similar to those reported in typical patients with ADOA (Cohn et al., 2007; Ferré et al., 2009; Yu-Wai-Man et al., 2010; Chen et al., 2014), except for the low frequency (13%, 2/15) of missense mutations. This frequency was only about half the rate observed previously in patients with ADOA (approximately 27%) (Chen et al., 2014). Another previous study indicated that vision loss was usually more severe in patients with missense mutations than in those with null mutations (Yu-Wai-Man et al., 2010). Unlike the typical patients with ADOA, the OPA1 mutation carriers in the current study all experienced a subacute visual loss while taking EMB therapy, and none of them noticed a visual abnormality before the treatment; therefore, they might be "asymptomatic" cases. In addition, 71% of the OPA1 mutation carriers presented optic disc hyperemia, which has never been observed in typical patients with ADOA. By contrast, only 20% of those carriers showed temporal disc pallor, which is a prominent optic disc appearance in patients with ADOA (41-86%) (Cohn et al., 2007; Yu-Wai-Man et al., 2010; Chen et al., 2014). Four patients harboring the most common mutation, c.2708_2711delTTAG, had different fundus appearances.
We speculate that the differences in fundus appearance may reflect the patients' different courses of vision loss. In a previous study, Pradhan et al. (2010) described a 36-year-old man who suffered a bilateral, painless visual loss during anti-tuberculosis treatment that included EMB. This patient also had optic disc hyperemia and peripapillary hemorrhage. After a series of differential diagnoses, the researchers finally identified a nonsense p.R38X OPA1 mutation in this patient and inferred that the visual loss had been exacerbated following EMB therapy. We speculate that optic disc hyperemia is an acute response to the toxic effect of EMB; consequently, a concomitant effect of OPA1 mutations and EMB toxicity renders OPA1 mutation carriers prone to optic disc hyperemia. This might be the reason why the OPA1 mutation carriers presented more optic disc hyperemia than was observed in the patients without mutations, even though both groups had a similar disease time course. The detection rate (38.3%) of OPA1 mutations in this EON cohort was higher than the rates (9.6 and 7.6%) reported in a group of Chinese patients with suspected hereditary optic neuropathy (Chen et al., 2014) and in another Han Chinese population (Zhang et al., 2017). One reason might be that patients harboring OPA1 mutations are more sensitive to the toxicity of EMB and thus present more obvious visual defects, as mentioned above. Another reason might be the small number of patients in the current cohort.

FIGURE 3 | Fundus appearance and OCT images on the first visit and at follow-up for patient E005 (A-C) and patient A3124 (D-F). In patient E005, the fundus appearance did not change; the temporal RNFL in both eyes was below the normal limits at follow-up (C). In patient A3124, the color and border of the optic discs turned normal; the RNFL was thickened in both eyes on the first visit (F).
In this cohort, we identified only four male patients carrying a LHON-mtDNA mutation, and this rate (8.5%) was much lower than the rate (38.3%) in patients harboring OPA1 mutations and the rate (33%) in Chinese patients with LHON or suspected LHON (Ji et al., 2008). Hwang et al. (2003) screened a cohort of 24 patients with EON for LHON-mtDNA mutations but were unable to identify any. Therefore, LHON-mtDNA mutations might be uncommon in patients with EON. To date, twelve patients (including our four patients) have been reported to develop optic neuropathy while taking EMB (Table 4; Dotti et al., 1998; De Marinis, 2001; Chowdhury et al., 2006; Ikeda et al., 2006; Seo et al., 2010). Ten of these patients carried the mutation m.11778G > A, which is the most common primary mutation of LHON (Chen et al., 2014). In the present study, we described two patients carrying the m.14484T > C mutation (the second most common primary mutation of LHON) who developed LHON while taking EMB (Table 4). Unlike the eight previously described cases, the four patients in our cohort had a younger onset age (21.5 years vs. 48 years). In addition, these four patients all had a family history, and three of them showed optic disc hyperemia and RNFL pseudoedema, which is a typical fundus appearance in the acute stage of LHON (Chen et al., 2014). Not all individuals who carry LHON-mtDNA mutations will develop visual symptoms, as the occurrence of LHON usually requires other risk factors, such as heavy smoking and alcohol consumption (Sadun et al., 2003). EMB might have been a trigger or an epigenetic factor for the manifestation of LHON in these four patients.
The toxicity of EMB is related to its zinc-chelating effect and its metabolites (Chung et al., 2009). At present, the exact mechanism of the toxic neuropathy induced by EMB remains unclear, but increasing evidence indicates a relationship with mitochondrial dysfunction. In an early study, Heng et al. (1999) found that EMB was specifically toxic to retinal ganglion cells and that it caused ganglion cell degeneration through a glutamate excitotoxic pathway. LHON-mtDNA and OPA1 mutations also cause damage to the small-caliber papillomacular bundle axons (Barboni et al., 2010); therefore, the clinical features of EON, LHON, and ADOA partially overlap. All three conditions show color vision abnormality and visual field defects. In the current cohort, most patients presented central or cecocentral scotoma, which can be observed in EON as well as in LHON or ADOA, whereas only three patients in group 2 showed temporal hemianopia, which is a typical visual field defect of EMB-related optic chiasmopathy (Jayanetti et al., 2017). In this cohort, the majority of patients presented with red/green color vision impairment or total color blindness, while only two patients carrying OPA1 mutations displayed yellow/blue color vision impairment, which is a typical color defect of ADOA (Cohn et al., 2007). Fundus hemorrhage is not common in LHON or ADOA, and three of the four patients with fundus hemorrhage were in group 2, indicating it to be one of the clinical features of EON. Our results showed that the mean age was significantly younger in the patients with mitochondrial mutations (27.5 years; group 1) than in the patients without mutations (48 years; group 2), further demonstrating that mitochondrial mutation is an important risk factor for the occurrence of EON, especially in young tuberculosis patients. Patients carrying mitochondrial mutations (OPA1 or mtDNA) may be more vulnerable to the toxicity of EMB.
Of the seven patients with a family history, six were mutation carriers; therefore, physicians should carefully question patients with tuberculosis, or their family members, about ophthalmic problems before EMB is prescribed. Consistent with the observation by Lee et al. (2008), we found that the visual prognosis was related to the initial fundus appearance of the patients with EON. Up to 79.8% of the patients in our cohort showed a normal optic disc appearance or optic disc hyperemia, which suggested they were in the early stage of EON. This might be one reason why our patients had a higher visual recovery rate (75.7%) than the rates described in other EON studies (23.1-47%) (Lee et al., 2008; Ezer et al., 2013). Another reason for this high rate might be that VA improvement was reported subjectively by the patients themselves in 94.6% of the follow-up cases (telephone surveys); this is one limitation of the current study. Nevertheless, we still observed that the visual outcome was better in the patients in group 2 than in group 1, suggesting that mitochondrial mutations were a critical factor affecting the visual prognosis of patients with EON. Of the eight patients carrying the mutation m.11778G > A, only one patient, who harbored a heteroplasmic mutation, showed VA improvement (De Marinis, 2001), whereas the other patients all experienced VA decreases or no change during their follow-up (Table 4). Another limitation of the current study is that we did not take into account the ocular toxicity of other anti-tuberculosis drugs, such as isoniazid. The third limitation is that, since the study was retrospective in nature, the examinations were not standardized.
In conclusion, our results indicated that carriers of OPA1 mutations might be more vulnerable to the toxicity of EMB and thus more likely to develop EON, although the exact effect of these OPA1 mutations remains to be confirmed in future functional assays. Although mitochondrial mutation screening is not feasible in all patients prior to anti-tuberculosis medication, genetic analysis should be strongly recommended for patients with a family history of optic neuropathy.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the NCBI GenBank, accession numbers NM_015560 and AC_000021.2.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Medical Ethics Committee of Beijing Tongren Hospital. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
X-HZ participated in the data collection, data analysis, and manuscript preparation. YX, KX, and KC participated in the data collection and analysis. Q-GX contributed in the study design and data collection. Z-BJ, YL, and S-HW participated in the study design and the manuscript revision. All authors contributed to the article and approved the submitted version.
Reconstructing meaning in bereavement: summary of a research program
Bereavement, in the form of the loss of a significant attachment figure, disrupts the self-narratives of survivors and typically pitches them into an unsought quest for meaning in the loss, as well as in their changed lives. A growing body of research on diverse groups (bereaved parents, young people, the elderly) suffering loss through both natural and violent death documents the link between the inability to find meaning in the experience and the intensity of complicated grief they suffer. This article reviews the literature, arguing that the processes of sense-making and benefit-finding play a crucial role in bereavement adaptation for many of the bereaved and, accordingly, that interventions that facilitate processes of meaning reconstruction can support effective psychotherapy for those struggling with intense and prolonged grief. Uniterms: Bereavement. Death. Meaning in loss.
Loss and the quest for meaning
Just as philosophers, linguists and theologians emphasize the role of meaning in human life, so too do many psychologists. In particular, both classical and contemporary constructivists (Kelly, 1955/1991; Neimeyer, 2000, 2009) focus on the processes by which people punctuate the seamless flow of life events, organizing them into meaningful episodes and discerning in them recurrent themes that give them personal significance and lead them to seek validation in their relationships with others. Viewed in narrative terms, we ultimately construct a life story that is distinctively our own, though it necessarily draws on the social discourses of our place and time. The result is a self-narrative (Neimeyer, 2004b), defined as "an overarching cognitive-affective-behavioral structure that organizes the 'micro-narratives' of everyday life into a 'macro-narrative' that consolidates our self-understanding, establishes our characteristic range of emotions and goals and guides our performance on the stage of the social world" (p. 53). From this perspective, identity can be seen as a narrative achievement, as our sense of self is established through the stories that we tell about ourselves and relevant others, the stories that others tell about us and the stories we enact in their presence. Importantly, it is this very self-narrative that is profoundly shaken by "seismic" life events such as the death of a loved one, instigating the processes of reaffirmation, repair or replacement of the basic plot and theme of one's life story (Calhoun & Tedeschi, 2006; Neimeyer, 2006).
Consider the experience of Gayle, struggling in the aftermath of the death of her son, Max, in a road accident on his way back to college. As a deeply thoughtful young man, exploring both Eastern and Western traditions of wisdom, Max had been drawn in the months before his death to the music of Cloud Cult, whose songs, like Journey of the Featherless, captured in a youthful, modern idiom the cosmic "flight" of sojourners skyward, beyond social convention, while in related tracks on the same CD, the voices of the performers intoned repeatedly, I love my mother/ I love my father/ And when it's my time to go/ I want you to know/ I love you all. When Max was the only one to die in the rollover of the SUV in which he was riding as a passenger, the singed backpack containing his reflective journal and poetry was one of the few things that escaped the flaming wreckage. As she searched desperately for some meaning in the seemingly senseless death of her son, Gayle took heart in the Cloud Cult music found in Max's CD player in his bedroom, in the philosophic tone of the poetry and prose in his miraculously salvaged journal, and in the survival of Max's girlfriend in the same accident, as the young woman herself was moved to a deep search for significance in the months that followed the tragedy. Together, she and Gayle sought, and found, some sense in the death through an eclectic spiritual narrative centering on their mutual "soul contracting" with Max, between incarnations, to undergo this trial together in their present lives, so that each might learn what it had to teach them in their respective journeys.
Reinforced by a series of memorial services, rituals, and consultations with mediums and various spiritual guides, the new narrative of the meaning of Max's life and death consolidated into a stable resource for not only the two women, but also for an entire community of relevant others, who joined in spontaneous "strike force philanthropy" in honor of Max, thereby extending the story beyond one of consolation to one fostering social action to mitigate suffering in the world, including a massive medical aid effort to survivors of the earthquake in Haiti.
In the aftermath of life-altering loss, the bereaved are commonly precipitated into a search for meaning at levels that range from the practical (How did my loved one die?) through the relational (Who am I, now that I am no longer a spouse?) to the spiritual or existential (Why did God allow this to happen?). How, and whether, we engage these questions and resolve or simply stop asking them shapes how we accommodate the loss itself and who we become in light of it. In Gayle's case, anguished and intermittent questioning impelled her forward in her search, ultimately deepening and broadening her existing sense of cosmic purpose, and galvanizing her efforts to live authentically and compassionately in relation to others who shared the same objective loss, or who faced losses and struggles in their own lives. The result was a revised self-narrative that found significance in the event story of her son's death, as well as in the back story of his life, braided together intimately with her own.
A growing body of research on meaning reconstruction in the wake of loss supports the broad outline of this model and is beginning to add clinically useful detail to our understanding of how the bereaved negotiate the unwelcome change introduced into their lives by the loss, both for better and for worse, and how we as professional helpers might best support their search for significance. It is worth bearing in mind at the outset, however, that loss does not inevitably decimate survivors' self-narratives and mandate a revision or reappraisal of life meanings, as many will find consolation in systems of secular and spiritual beliefs and practices that have served them well in the past (Attig, 2000). Indeed, especially when the deaths of loved ones are relatively normative and anticipated, only a minority of the bereaved reports searching for meaning in the experience, and the absence of such a search is one predictor of a positive bereavement outcome (Coleman & Neimeyer, 2010; Davis, Wortman, Lehman & Silver, 2000). Even in the case of normative losses such as late-in-life widowhood, however, evidence suggests that a significant minority of survivors struggles to find meaning in their loss over an extended period of time (Bonanno, Wortman & Nesse, 2004). Moreover, in this same prospective, longitudinal study of widows and widowers, those who reported a more intense search for meaning in the loss, 6 and 18 months after the death, evidenced a more painful and prolonged reaction of grief throughout 4 years of bereavement (Coleman & Neimeyer, 2010). Indeed, research on complicated, prolonged grief disorder documents that a struggle with meaninglessness is a cardinal marker of debilitating bereavement reactions across many populations (Prigerson et al., 2009). In a large cohort of bereaved young adults suffering a variety of losses, for example, inability to "make sense" of the death was associated with marked and preoccupying separation distress throughout the first two years of adaptation (Holland, Currier & Neimeyer, 2006).
When losses are more objectively traumatic, data suggest that a search for sense or significance in the loss is more common, characterizing the majority of those bereaved by the sudden death of a family member, or parents who lose a child (Davis et al., 2000). Evidence demonstrates that a crisis of meaning is especially acute for those bereaved by suicide, homicide or fatal accident, who report a far more intense struggle to make sense of the loss than do those whose loved ones died of natural causes. Moreover, the role of sense-making, a key form of meaning-making, is so prominent in accounting for the complex symptoms of grief experienced by the former group that it functions as a nearly perfect mediator of the impact of violent death, accounting for virtually all of the difference between those bereaved by the traumatic as opposed to natural death of their loved ones (Currier, Holland & Neimeyer, 2006).
Research on bereaved parents reinforces the powerful role of meaning-making in predicting bereavement outcome. Studying a large group of mothers and fathers whose children had died anywhere from a few months to many years earlier, Keesee, Currier and Neimeyer (2008) found that the passage of time, the gender of the parent and even whether the child died a natural or violent death accounted for little of their subsequent adaptation, whether assessed in terms of normative grief symptoms (e.g., sadness and missing the child) or complicated grief (e.g., an ongoing inability to care about other people and long-term disruption of functioning in work and family contexts). In contrast, their degree of sense-making proved to be a potent predictor of concurrent, complicated grief symptoms, accounting for 15 times more distress experienced by these parents than any of the above-mentioned objective factors (Keesee et al., 2008). A further analysis of qualitative responses to questions about the kinds of meanings made by these parents also proved enlightening. Fully 45% of the parents confessed that they were unable to make sense of their child's death even 6 years later, on average, and over 20% could identify no unsought benefits (e.g., greater personal strength) to mitigate the great pain of the tragedy. Overall, parents discussed 32 distinct approaches to finding meaning in their child's death, 14 of which involved sense-making and 18 of which involved unsought benefits or a "silver lining" in the loss, each representing a means of finding meaning in a tragic experience. The most common sense-making themes involved religious beliefs (such as the conviction that the child's death was part of a divine plan or a belief in reunion in an afterlife), and the most common benefit-finding themes entailed an increase in the desire to help and compassion for others' suffering. Parents who invoked specific sense-making themes, including attributing the death to God's will or a belief that the child was no longer suffering, as well as those who reported benefits such as reordered life priorities, experienced fewer maladaptive grief symptoms (Lichtenthal, Currier, Neimeyer & Keesee, 2010).
On the other hand, nothing guarantees that spirituality will serve as a defense against the challenges of bereavement; indeed, one's spiritual orientation may itself suffer as a function of the assault on meaning posed by tragic loss. Burke, Neimeyer, McDevitt-Murphy, Ippolito and Roberts (2011) recently reported, for example, that African Americans bereaved by the homicide of a loved one frequently struggled with complicated grief, and that this form of bereavement distress predicted subsequent struggles with feeling abandoned by God and the faith community some six months later.
Finally, it is worth underscoring that adaptation to bereavement entails more than simply surmounting painful symptoms of grief and depression, insofar as significant numbers of people report resilience or even personal growth after loss, outcomes that are no less important to assess and facilitate (Neimeyer, Hogan & Laurie, 2008). Here too, it seems likely that meaning-making contributes to adaptive outcomes, as longitudinal research on widowhood demonstrates that sense-making in the first 6 months of loss predicts higher levels of positive affect and well-being a full 4 years after the death of a spouse (Coleman & Neimeyer, 2010). Fostering reconstruction of a world of meaning would therefore seem to be a therapeutic priority for many bereaved clients, one that could bring benefits not only in alleviating complicated symptoms of grief, but also in renewing a sense of hope and self-efficacy in their changed lives. The recent development of a carefully validated, multidimensional measure of the extent to which a survivor can integrate his or her loss into a fuller system of personal meaning should advance this work in both clinical and research contexts (Holland et al., 2010).
How might such meaning reconstruction be facilitated in the context of support groups or psychotherapy? Research on bereavement professionals indicates that they routinely draw on a host of strategies to advance this goal, beginning with fostering a sense of presence to the needs of the grieving client, progressing to a delicate attention to the process of therapy and finding ultimate expression in a great variety of specific therapeutic procedures (Currier, Holland & Neimeyer, 2008). Presence, in the view of these practitioners, chiefly entails cultivating a safe and supportive relationship, one characterized by deep and empathic listening. Process goals involve psychoeducation about loss, promoting the client's telling of his or her story, exploration of spiritual and existential concerns, processing and regulation of emotions, and utilization of existing strengths and resources. Finally, concrete therapeutic procedures include a wide range of narrative, ritual, expressive and pastoral methods for helping clients make sense of the loss and their changed lives, which are beginning to receive support as evidence-based treatments in randomized, controlled trials (Lichtenthal & Cruess, 2010; Wagner, Knaevelsrud & Maercker, 2006). Accordingly, a good deal of attention has been paid, within a framework of meaning reconstruction, to explaining and exemplifying these methods in such diverse media as books (Neimeyer, 2001b; 2009), chapters (Neimeyer, 2006; Neimeyer & Arvay, 2004; Neimeyer, van Dyke & Pennebaker, 2009), journal articles (Neimeyer, 2001a; Neimeyer, Burke, Mackay & Stringer, 2010), training videos (Neimeyer, 2004a; 2008) and online continuing education programs (Neimeyer, 2010) for grief professionals, as well as self-help resources for bereaved clients (Neimeyer, 2002).
In summary, a constructivist focus on the role of meaning-making in bereavement has received increasing attention in both the research and clinical literature, as evidence increasingly documents the important role of reaffirming or reorganizing a world of meaning that has been challenged by loss. I hope that this brief introduction to this work encourages investigators and practitioners to attend to the significance of bereavement as well as its attendant symptomatology, and to shed further light on the efforts of many of the bereaved to reconstruct their life narratives in the wake of loss.
|
2017-10-16T17:41:45.852Z
|
2011-01-01T00:00:00.000
|
{
"year": 2011,
"sha1": "ab89ac7b797d05aec60ce74daf5cbb1a5e9d0e52",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/estpsi/a/vSSrpKGDMq7thkxgC5rzPYj/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ab89ac7b797d05aec60ce74daf5cbb1a5e9d0e52",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
233232504
|
pes2o/s2orc
|
v3-fos-license
|
The role of the isolation of the marginal seas during the Pleistocene in the genetic structure of black sea bream Acanthopagrus schlegelii (Bleeker, 1854) in the coastal waters of Japan
The black sea bream Acanthopagrus schlegelii (Bleeker, 1854) is a commercially important species in Japanese waters. Assessing its population structure is essential to ensure its sustainability. In the Northwestern Pacific, historical glacial and interglacial periods during the Pleistocene have shaped the population structure of many coastal marine fishes. However, whether these events affected the population of black sea bream remains unknown. To test this hypothesis and to assess the population structure of black sea bream, we used 1,046 sequences of the mitochondrial control region from individuals collected throughout almost the entire Japanese coastal waters and combined them with 118 sequences from populations distributed in other marginal seas of the Northwestern Pacific Ocean. As in other coastal marine fish with a similar distribution, we also found evidence that the glacial refugia in the marginal seas prompted the formation of three lineages in black sea bream. These lineages present signatures of population growth that coincided with the interglacial periods of the Pleistocene. While the origin of Lineages B and C remains unclear, the higher relative frequency of Lineage A in the southernmost location suggests its origin in the South China Sea. The non-significant pairwise ΦST and AMOVA of Japanese populations and the presence of these three lineages mixed in Japanese waters strongly suggest that these lineages are homogenized in both the Sea of Japan and the Pacific Ocean. Our results indicate that the black sea bream should be managed as a single stock in Japan until the strength of connectivity in contemporary populations is further addressed using non-coding nuclear markers.
INTRODUCTION
Figure 1: Collection sites of A. schlegelii specimens used in this study. Full details of populations are described in Table 1.
Sample collection
A total of 1,046 individuals of black sea bream were collected at 13 different locations throughout almost the entire coastal waters of Japan. We also collected 30 individuals from the Miaoli County coast in Taiwan (Fig. 1, Table 1). For each individual, a piece of the pectoral or caudal fin was cut and preserved in 99% ethanol for DNA isolation.
DNA isolation and sequencing
Genomic DNA was isolated using the TNES-urea buffer (Asahida et al., 1996) followed by a standard phenol-chloroform protocol. Fragments longer than 686 base pairs of the mitochondrial DNA (mtDNA) control region were amplified by polymerase chain reaction (PCR) using the forward primer HDDloopF-54 (5′-CCTATTGCTCAGAGAAAAGGGATT-3′) and the reverse primer HDDloopR-43 (5′-CCTGAAGTAACCAGATG-3′), designed by Shi et al. (2015). For each individual, the PCR reaction was carried out in a total volume of 10 µL containing 6.75 µL of ultra-pure water, 1.0 µL of 10X Taq Buffer, 0.8 µL of dNTP, 0.05 µL of TaKaRa ExTaq DNA polymerase (TaKaRa, Shiga, Japan), 0.2 µL of each primer and 1 µL of template DNA. PCR was performed in a Mastercycler Gradient 96-well system (Eppendorf, Hamburg, Germany) with initial denaturation at 95 °C for 5 min, followed by 32 cycles of 94 °C for 30 s, 56 °C for 30 s and 72 °C for 1 min 30 s, and a final extension at 72 °C for 10 min. PCR products were treated with ExoSAP-IT (Affymetrix/USB Corporation, Cleveland, OH) and then sequenced on a Genetic Analyzer (ABI 3130xl, Applied Biosystems) using the BigDye v3.1 Terminator Sequencing Kit (Applied Biosystems) and the forward primer.
Data analyses
We pooled our sequences with an additional fifty-nine sequences of the same DNA region reported in Shi et al. (2015) that correspond to three different locations on the coast of China (the East and South China Sea) (GenBank accession numbers: KJ586516-KJ586574). Sequences were aligned in Clustal X (Thompson, Higgins & Gibson, 1994) and collapsed into haplotypes after trimming. Pairwise genetic divergence was calculated with the fixation index ΦST implemented in Arlequin v3.5 (Excoffier & Lischer, 2010) using the Kimura 2P substitution model (Kimura, 1980), with significance tested by 10,000 permutations and Benjamini and Yekutieli's FDR approach (Benjamini & Yekutieli, 2001) for non-independent tests. Additionally, hierarchical population structure was evaluated using the analysis of molecular variance (AMOVA) implemented in Arlequin. For this purpose, populations were grouped considering: (1) populations in Japanese waters; (2) populations in the marginal seas of the Sea of Japan, the East China Sea, and the South China Sea, and on the Pacific Ocean side; and (3) mitochondrial clusters (see BAPS methods and results).
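The Benjamini-Yekutieli correction used above controls the false discovery rate under arbitrary dependence between tests, which is why it suits non-independent pairwise comparisons. A minimal sketch of the step-up procedure (the function name and example p-values are ours, not from the paper):

```python
def benjamini_yekutieli(pvals, alpha=0.05):
    """Step-up Benjamini-Yekutieli FDR procedure.

    Returns a list of booleans (reject/keep) in the input order.
    """
    m = len(pvals)
    # c(m) = sum_{i=1}^{m} 1/i penalizes for arbitrary dependence
    c_m = sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k whose ordered p-value clears k * alpha / (m * c(m))
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / (m * c_m):
            k_max = rank
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

# Example: the two smallest p-values survive, the third does not.
flags = benjamini_yekutieli([0.0001, 0.0004, 0.03], alpha=0.05)
```

Note that the BY threshold is stricter than the plain Benjamini-Hochberg one by the factor c(m), so with many pairwise ΦST tests only quite small p-values remain significant.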
To display the relationships between mitochondrial haplotypes, we constructed a TCS parsimony network using the program PopART ver. 1.7 (Leigh & Bryant, 2015). We also constructed a maximum likelihood phylogenetic tree using the IQ-TREE software (Nguyen et al., 2015) with the ultrafast bootstrap approximation and the best model selected by ModelFinder (Kalyaanamoorthy et al., 2017). As the outgroup, we used Acanthopagrus berda (GenBank accession number: AM992253.1). In addition, we assessed the number of clusters using a Bayesian approach implemented in BAPS v5.2 (Corander, Marttinen & Mäntyniemi, 2006) with the linked loci model (Corander & Tang, 2007) and ten independent runs for K = 1:7.
The time of demographic expansion (T) was calculated from the relationship T = τ/2u, where u = 2µk, with µ being the mutation rate and k the alignment length (686 bp here). We considered an average generation time of four years (Gonzalez, Umino & Nagasawa, 2008) and a mutation rate of 3.6 × 10⁻⁸ per site per year (3.6%/Myr) reported for the mtDNA control region of teleosts (Donaldson & Wilson Jr, 1999).
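As a worked example of this conversion (the τ value below is illustrative only; the actual estimates come from Table 4, which is not reproduced here):

```python
mu = 3.6e-8          # mutation rate per site per year
k = 686              # alignment length in base pairs
u = 2 * mu * k       # per-sequence mutation rate per year (u = 2*mu*k)

tau = 14.0           # illustrative tau; real values are taken from Table 4
T = tau / (2 * u)    # expansion time in years: T = tau / 2u

# With these inputs T is roughly 1.4e5 years, i.e., on the order of the
# ~120-140 kya expansion reported for Lineages B and C.
```

Plugging in a larger τ of roughly 30 would land near the ~300 kya estimate quoted for Lineage A, so the reported expansion times are internally consistent with this rate and alignment length.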
Past dynamics of the female effective population size were observed through Bayesian Skyline Plots (BSPs) calculated in BEAST 2.4.8 (Bouckaert et al., 2014). We used the strict molecular clock and the HKY substitution model. BSPs were run with three independent runs of 5 × 10⁷ generations sampled every 5,000 iterations, and with the first 25% discarded as burn-in. All runs yielded an effective sample size (ESS) of more than 200 for the parameter of interest after burn-in. The skyline plots were generated in Tracer v.1.7.1 (http://tree.bio.ed.ac.uk/software/tracer/).
Genetic structure
Our alignment matrix of 686 base pairs yielded 597 different haplotypes with 205 variable sites. The pairwise ΦST between the 13 Japanese populations was only significant between the Nagasaki (NG) and Hiroshima (HH) populations (ΦST = 0.023, P = 0.006). When Japanese populations were compared with the Chinese and Taiwanese populations, the Zhenjian (DJ) population from the northern East China Sea was only significantly different from the Hiroshima (HH) population (ΦST = 0.041, P = 0.005). Of the remaining 45 pairwise comparisons, 31 were significant and their ΦST ranged from 0.036 to 0.121. While some of the pairwise ΦST comparisons were significant, all of the values were still very low (Table 2).
Hierarchical AMOVA supported genetic structure among groups based on marginal seas (ΦCT = 0.00761, P = 0.0039); however, 98.5% of the variation was observed within populations. When only Japanese groups were considered, AMOVA indicated the absence of population genetic structure among Japanese populations (ΦST = 0.002, P > 0.05), with 99.73% of the variation explained within populations. Similarly, the ΦST values were significant but very low (Table 3).
The haplotype network did not display signs of geographical structuring, supporting the high connectivity and low ΦST statistics between geographic populations and among the three lineages identified by BAPS (Fig. 2A). However, our phylogenetic tree did not support these lineages nor show clustering of haplotypes (Fig. S1). In the TCS network, the three lineages provided by BAPS displayed many star-like shapes with a center haplotype, which suggested a population expansion. These shapes were clearer in Lineages B and C, and some haplotypes from Lineage A showed a closer relationship with haplotypes from Lineage B. Pairwise ΦST of sequences grouped by these lineages was high and strongly significant (ΦST of 0.211, 0.411, and 0.540 with P = 0 for all comparisons), and the AMOVA among them accumulated a variation of 41.29% with a ΦST of 0.412 at P = 0 (Table 3). The TCS network of haplotypes and the distribution of these lineages are shown in Fig. 2B.
Demographic history
The genetic diversity indices of each lineage are shown in Table 4. The haplotype diversity of each lineage was high and similar (h ranged from 0.957 to 0.995), and the nucleotide diversity was overall low but higher in Lineage A (π = 0.0097) than in Lineages B (π = 0.0048) and C (π = 0.0043). Supporting the shape of each lineage within the haplotype network, we observed strongly significant values for all the neutrality tests and R2 statistics, suggesting events of past demographic expansion (Table 4). The mismatch distribution also fits the model of demographic growth rather than the model of constant population size (Fig. 3A). The expansion times calculated based on tau (τ) (Table 4) were similar to those calculated from the Bayesian skyline plot (Fig. 3B). Overall, the expansion of Lineage A seems to have occurred between 300-340 kya, and that of Lineages B and C between 120-140 kya.
DISCUSSION
Population genetic structure of marine fishes is assumed to be very low or nonexistent because of the absence of physical barriers and the wide-range dispersal of larvae by ocean currents (Ward, Woodwark & Skibinski, 1994). In the Northwestern Pacific, however, mitochondrial sequences of marine fishes show signatures of moderate to strong genetic differentiation depending on the life-history traits of the species (Hirase & Ikeda, 2014; Liu et al., 2007; Shen et al., 2011; Shui et al., 2009; Song et al., 2017; Wang et al., 2008; Yu, Kai & Kim, 2016). These signatures are largely attributed to the isolation of the marginal seas, which produced topographic variation, sea level fluctuation, and passive dispersal by ocean currents. These isolations occurred during the Pleistocene epoch (Song et al., 2017; Voris, 2000; Wang, 1999; Yan et al., 2015).
For black sea bream, BAPS Bayesian clustering detected the presence of three different lineages, which explained 41.29% of the variation in the AMOVA analyses. The presence of these lineages within each of our geographical locations (Fig. 2A) also produced non-significant pairwise ΦST and AMOVA (Tables 2 and 3) for black sea breams inhabiting Japanese waters. In the Northwestern Pacific, glacial periods during the Pleistocene also played a key role in the formation of three lineages in the coastal scaled sardine Sardinella zunasi (Wang et al., 2008), based on the mtDNA control region; the flathead mullet Mugil cephalus (Shen et al., 2011), based on the mtDNA COI; and the Manila clam Ruditapes philippinarum (Mao et al., 2011), based on the mtDNA COI. These results support the idea that Pleistocene glaciation might also have shaped the divergence of black sea bream populations into three lineages and that interglacial periods homogenized these lineages around their current distributions.
The South China Sea appeared to be composed mostly of individuals of Lineage A, suggesting the origin of this lineage in this southernmost location of our collections, very likely due to the enclosure of this marginal sea during glacial periods of the Pleistocene (Wang, 1999). The relative frequencies of Lineages B and C, however, are similar on the Pacific Ocean side, in the Sea of Japan, and in the East China Sea, making it impossible to assign their correct origin and suggesting that biological characteristics of this fish and physical factors of these waters might have homogenized them during Pleistocene interglaciations (Kimura, 1996; Kitamura et al., 2001). Supporting the formation of two lineages driven by Pleistocene glaciations in regions similar to those of Lineages B and C here, Bae et al. (2020) also found two lineages of the grass puffer Takifugu niphobles in the Japan Sea, East China Sea, Yellow Sea and Pacific Ocean region, using the mtDNA COI. The frequency of the COI haplotypes in their samples allowed Bae et al. (2020) to assign these lineages to their correct origin. Thus, inferring the origin of Lineages B and C would perhaps require the sequencing of other genes with a slower mutation rate than the mtDNA control region, as neither the Bayesian approach (Fig. 2A) nor the maximum likelihood analyses (Fig. S1) provide enough information to assign them to a unique marginal sea. The expansion times calculated from the female effective population sizes of the black sea bream lineages coincide with the rise of sea levels during interglacial periods around 300,000 and 150,000 years ago (Table 4 and Fig. 3B), which might have led this species to homogenize more intensively in the Northwestern Pacific.
Further studies of population genetics in black sea bream will benefit from the use of more polymorphic loci from the nuclear genome. These loci will be useful to assess the contemporary population connectivity of this fish. However, studies such as that of Bae et al. (2020) suggest that the mitochondrial and nuclear patterns (with microsatellite loci) of coastal marine fishes in the Northwestern Pacific are similar.
CONCLUSION
In the present study, we assessed the population genetic structure of black sea bream and tested whether the glacial-interglacial periods of the Northwestern Pacific Ocean played a role in the divergence and distribution of its populations. To test this hypothesis, we used 1,046 sequences of the mitochondrial control region from black sea breams distributed along almost the entire coast of Japanese waters and combined them with 118 sequences from other marginal regions. We observed the presence of three different lineages of black sea bream with signatures of population expansion during interglacial periods of the Pleistocene. The sequences of these lineages are carried by black sea bream populations inhabiting Japanese waters, suggesting that, although separated in the past, they have already homogenized. Our results highlight the role of historical climate fluctuations in the genetic structure of coastal marine fishes of the Northwestern Pacific and provide useful information for the fisheries management of black sea bream. Based on our results, we recommend managing black sea bream as a single stock in Japan until more studies using more polymorphic nuclear DNA markers clarify the absence or presence of any contemporary barrier.
|
2021-04-15T05:16:08.940Z
|
2021-04-02T00:00:00.000
|
{
"year": 2021,
"sha1": "efce8213c558b4022a915293987f55620848374e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.11001",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d3e72bf5d1ef7e3cba76933ad10165a568fc35ab",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
227054412
|
pes2o/s2orc
|
v3-fos-license
|
Data Representing Ground-Truth Explanations to Evaluate XAI Methods
Explainable artificial intelligence (XAI) methods are currently evaluated with approaches that mostly originated in interpretable machine learning (IML) research and focus on understanding models, such as comparison against existing attribution approaches, sensitivity analyses, gold sets of features, axioms, or demonstration on images. There are problems with these methods, such as that they do not indicate where current XAI approaches fail, and thus cannot guide investigations towards consistent progress of the field. They do not measure accuracy in support of accountable decisions, and it is practically impossible to determine whether one XAI method is better than another or what the weaknesses of existing models are, leaving researchers without guidance on which research questions will advance the field. Other fields usually utilize ground-truth data and create benchmarks. Data representing ground-truth explanations is not typically used in XAI or IML. One reason is that explanations are subjective, in the sense that an explanation that satisfies one user may not satisfy another. To overcome these problems, we propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods. The contributions of this paper include a methodology to create synthetic data representing ground-truth explanations, three data sets, an evaluation of LIME using these data sets, and a preliminary analysis of the challenges and potential benefits in using these data to evaluate existing XAI approaches. Evaluation methods based on human-centric studies are outside the scope of this paper.
Introduction
Research in interpretable machine learning (IML) has explored means to understand how machine learning (ML) models work since the nineties (e.g., Towell and Shavlik 1992). Popular methods to help understand ML models are referred to as attribution methods (Yeh et al. 2018; Koh and Liang 2017); they identify features or instances responsible for a classification.
With the exception of human-centered studies (Hoffman et al. 2018), the evaluation methods being used in XAI and IML include comparison to existing methods, metrics and axioms, sensitivity analyses, gold features, and demonstration of image classification (details and references in Section Background and Related Works).

Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

The problems with these methods include that they do not indicate where current XAI approaches fail, thereby preventing consistent progress of the field. They do not measure accuracy as a way to validate correctness or to produce accountable agents (e.g., Diakopoulos 2014, Kroll et al. 2016, Doshi-Velez and Kim 2017), and it is practically impossible to determine whether one XAI method is better than another or what the weaknesses of existing methods are, leaving researchers without guidance on which research questions will advance the field.
The intended purpose of this paper is to address these limitations of current XAI evaluation methods by proposing the use of data representing ground-truth explanations (GTE). In a variety of computer science tasks, it is a standard practice to treat some representation of data as ground truth. Ground-truth data, in essence, is data that is verifiable and considered the most accurate reference against which a new system is tested (Wikipedia contributors 2020). Various authors agree that the lack of ground truth for evaluating explanations is a limitation (Tomsett et al. 2019; Hooker et al. 2019; Yang, Du, and Hu 2019; Montavon 2019). Consequently, we investigate the challenges in creating data representing GTEs. Our goal is to promote consistent and methodical progress of the XAI field. The scope of this paper is limited to neural networks (NN) for classification tasks.
The next section presents related methods, metrics and axioms to evaluate XAI methods. Then, we introduce how to generate three data sets representing GTEs. Next, we use the generated data to train NN models to submit to LIME (Ribeiro, Singh, and Guestrin 2016) and produce explanations while converting the GTEs into LIME's explanation format. We evaluate LIME and analyze the evaluation, seeking support for our conclusions as a means to validate the evaluation. We conclude with a discussion of issues and benefits, and future work.

Background and Related Works

Doshi-Velez and Kim (2017) categorize IML evaluations as application-, human-, and functionally-grounded. The authors propose that any method should be evaluated along those three categories, one informing the other. Yang, Du, and Hu (2019) are the only authors who actually present a reason against using ground truth to benchmark explanation methods, which is that explanation quality is user-dependent. These authors propose three metrics for IML, namely, generalizability, fidelity, and persuasibility. Their fidelity metric aims to measure explanation relevance in the applied context. Gunning and Aha (2019) propose that XAI approaches be evaluated along five categories, namely, explanation goodness, explanation satisfaction, mental model understanding, user task performance, and appropriate trust and reliance. Considering that human-centered studies entail a lot of subjectivity, of those, only explanation goodness seems an objective category of explanation quality. All other categories are evaluated by humans or an external task. Tomsett et al. (2019) conducted a meta-evaluation of saliency methods by analyzing metrics previously proposed in the literature to evaluate saliency methods. To do this, they adopted psychometric tests to verify local saliency metric reliability and the consistency of the saliency metrics that rely on perturbations. They conclude that saliency metrics can produce unreliable and inconsistent results, even if not always.
Generate Data Representing Ground-Truth Explanations (GTE)

Figure 1 gives an overview of the entire approach from generating the data up to evaluation. We start by describing how to generate the data. We propose to generate data sets from existing processes, either natural or artificial, and identify classes from said processes. We propose to represent classes in a data set via mathematical equality or inequality constraints as a minimal canonical representation from which explanations can be aligned with the format of explanations produced by various XAI methods. We define the classes and the intervals to populate feature values to create instances. The intervals where instance feature values can be populated will determine noise and commonsense. The nature of values allowed for each feature in the generated equations will determine whether classes will remain disjoint. Overlapping classes will impact evaluations, producing noise. Another consideration when defining intervals to populate feature values is commonsense. If an explanation indicates that the value of a feature is 0.3 × 10⁻⁶, then the feature should not represent someone's age. Next, we describe the generation of three data sets.

Figure 1: In green, this diagram shows the steps to generate data representing GTEs and to use the data to train models. In orange are the processes for aligning GTE to LIME, and to send predictions for LIME to explain. In yellow, the evaluation compares the two orange processes.
Generate Data Set Loan
The process we chose is loan underwriting. This is a small data set, consisting of 54 instances, two classes (accept and reject), and three input features characterizing three well-known factors considered in loan underwriting, namely, job condition, credit score, and debt-to-income ratio. We created this data set manually to characterize realistic instances. The instance space is given by the arrangement of the three features and their four possible values, given by 4 × 4 × 4 = 64. We eliminated 10 instances from the data because they were not realistic. The data are generated with a system of two equations. As stated above, we considered class overlap and commonsense when defining the allowable values for the three features. The first feature, x1, corresponds to the job condition of the applicant. This feature can be populated with integer values in the interval [2, 5], where 2 represents lack of a job, and values 3, 4, and 5 represent, respectively, that the applicant has had a job for less than one year, less than 3 years, or more than 3 years. The second feature, x2, refers to credit score, which assumes integer values in the interval [0, 3], distributed in ranges of less than 580, 650, 750, and more than 750. The third and last feature, x3, refers to the ratio of debt payments to monthly income, which assumes integer values in the interval [0, 3], distributed in ranges of less than 25%, 50%, 75%, and more than 75%.
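The enumeration of the 4 × 4 × 4 = 64 instance space can be sketched as follows. The labelling rule shown is a placeholder assumption, since the paper's two underwriting equations are not reproduced in this excerpt:

```python
from itertools import product

# Feature domains as described in the text:
# x1 (job condition) in [2, 5], x2 (credit score) in [0, 3],
# x3 (debt-to-income ratio) in [0, 3], all integer-valued.
instances = list(product(range(2, 6), range(0, 4), range(0, 4)))
assert len(instances) == 64  # full instance space before pruning

# Placeholder class rule (our assumption, not the authors' equations):
# accept applicants with a job, decent credit, and moderate debt.
def label(x1, x2, x3):
    return "accept" if x1 >= 3 and x2 >= 2 and x3 <= 1 else "reject"

data = [(x, label(*x)) for x in instances]
```

In the paper, 10 unrealistic combinations were then removed by hand, leaving the 54 instances of Data Set Loan.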
Generate Data Set Distance
We adopt the equation used to calculate travel energy consumption based on travel distance. The Data Set Distance has a total of 2,600,000 instances, described through five features, and 10 classes. The five variables are Trip Frequency (TF), Trip Distance (TD), Transportation Occupancy (TO), Energy Intensity (EI), and Travel Mode (m). The Data Set Distance is generated using Equation 4 for travel energy consumption based on travel distance.
Using the base equation, we created 10 unique variations with the following goals: the variations should be kept realistic, and each variation is a set of operations (such as raising to an exponent or multiplying by a scalar) performed on one or more variables. Afterward, we generate the data for the base equation by creating every permutation of the four equation variables within a specified range using a truncated normal distribution. The four variables are used as features along with a fifth variable, travel mode. For each of the 10 variations, we use the set of operations on the base equation data to generate the equivalent rows for that variation.
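This permutation step can be sketched as below. The variable ranges, grid sizes, and the base-equation form (energy = frequency × distance × intensity / occupancy) are our assumptions for illustration; the paper's Equation 4 and its ten variations are not reproduced in this excerpt:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def truncated_normal(lo, hi, loc, scale, size):
    """Sample a normal distribution, resampling any draw outside [lo, hi]."""
    out = rng.normal(loc, scale, size)
    bad = (out < lo) | (out > hi)
    while bad.any():
        out[bad] = rng.normal(loc, scale, bad.sum())
        bad = (out < lo) | (out > hi)
    return out

# Five grid points per variable; bounds and units are illustrative.
TF = truncated_normal(1, 10, 4, 2, 5)    # trip frequency (trips/week)
TD = truncated_normal(1, 50, 15, 10, 5)  # trip distance (km)
TO = truncated_normal(1, 5, 2, 1, 5)     # occupancy (persons/vehicle)
EI = truncated_normal(1, 4, 2, 0.5, 5)   # energy intensity (MJ/vehicle-km)

# Every permutation of the four variables; energy per the assumed base form.
base = [(tf, td, to, ei, tf * td * ei / to)
        for tf, td, to, ei in product(TF, TD, TO, EI)]

# One example "variation" defining another class: scale trip distance.
variant = [(tf, 1.2 * td, to, ei, tf * 1.2 * td * ei / to)
           for tf, td, to, ei, _ in base]
```

Each variation applied to the base rows yields the rows for one class, so the full data set scales as (grid size)⁴ × number of variations.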
Generate Data Set Time
For Data Set Time and Distance, we used processes from the field of energy consumption analysis that describe various realistic processes with different focuses (e.g., distance or time) and include equations with a variety of features that can receive multiple values. These characteristics facilitate the generation of large data sets, so we can create conditions similar to those faced by XAI methods in the real world.
In this paper, we generate data from transportation energy consumption, which can be used to calculate travel time and travel distances related to household energy consumption.
Equation 3 is the basic equation to calculate energy (E) as a function of time. The four variables are Travel Time (TT), Speed, Fuel Economy (FE) and Travel Mode (m). The Data Set Time has a total of 504,000 instances and seven classes. Each class is defined by a small tweak to the equation. Using the base equation, we create seven unique variations with the same goals and process as we did for the Distance data set.
Train NN Models
The number of models, the types of models, and how they vary depend on the metrics selected in the previous step. Consider, for example, that the selected metric is implementation invariance (Sundararajan, Taly, and Yan 2017). This metric requires multiple types of models. In this paper, we trained two models for the Loan and Time data sets and one model for the Distance data set (detailed architectures are given in the GitHub link at the end of this paper). Not all models reached 100% accuracy; this is due to class-overlap noise that occurred during data set generation.
LIME Explains Predictions
As depicted in the diagram in Figure 1, after training the models, the next steps can be concurrent. This section describes the step where LIME explains the predictions from the models. First, let us briefly review how LIME works. Local Interpretable Model-agnostic Explanations (LIME) is a feature attribution method formally described in Ribeiro, Singh, and Guestrin (2016). LIME assumes that the behavior of a sample point (e.g., an instance) can be explained by fitting a linear regression with the point (to be explained) and the point's close neighbors. LIME perturbs the values of the point to be explained and submits the perturbations to the model to obtain their predictions, thus creating its own data set. Next, LIME measures the cosine similarity between the point to be explained and the points generated from its perturbations to select a region of points around it. LIME then utilizes a hyperparameter, number of samples (num sample), to select the number of points it will use in the final step, which is the fitting of a linear regression. The hyperparameter num sample determines how many of the perturbed points will be used with the point to be explained to fit a linear regression. This last step produces the coefficients of the line that expresses LIME's explanation. For Data Set Loan, we submit to LIME all 54 instances to be explained, together with models NN1 and NN2. Note that all these instances will have correct predictions with both NN models because both models reached 100% accuracy. The number of samples was set to 25 for the first evaluations, but we also created GTEs for 5 and 50 number of samples. The output we receive from LIME is two sets of 54 × 3 coefficients, one coefficient for each of the three features, one set for NN1 and one set for NN2.
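The pipeline just described can be sketched in a few dozen lines. This is a simplified, dependency-free illustration of the LIME-style procedure (perturb, select neighbors by cosine similarity, fit a regularized linear model), not the actual LIME implementation: real LIME also weights samples with a proximity kernel and fits an intercept, both omitted here for brevity.

```python
import math
import random

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lime_sketch(point, predict, num_samples=25, pool=200, alpha=1.0, seed=0):
    """Explain predict(point): perturb the point, keep the num_samples
    perturbations most cosine-similar to it, and fit a ridge regression.
    Returns one coefficient per feature (no intercept, for brevity)."""
    rng = random.Random(seed)
    d = len(point)
    perturbed = [[v + rng.gauss(0.0, 0.5) for v in point] for _ in range(pool)]
    perturbed.sort(key=lambda p: cosine_sim(point, p), reverse=True)
    X = [list(point)] + perturbed[:num_samples]
    y = [predict(p) for p in X]
    # Ridge normal equations: (X^T X + alpha * I) w = X^T y
    A = [[sum(row[i] * row[j] for row in X) + (alpha if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(row[i] * yk for row, yk in zip(X, y)) for i in range(d)]
    return solve(A, b)
```

For a model that is itself linear, e.g. `predict = lambda p: 2 * p[0] + 0.5 * p[1]`, the recovered coefficients rank the first feature above the second, mirroring the feature-importance reading of LIME's coefficients.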
The Data Set Time has 100,800 instances, but the models did not reach 100% accuracy. Consequently, we randomly selected 10,000 instances that both models NN1 and NN2 predicted correctly and submitted only those 10,000 to LIME together with the two models. We made sure to select instances from those correctly predicted because sending instances incorrectly predicted by the models would mislead LIME into producing bad explanations, and we had to make sure we could provide LIME with the best data for a fair assessment. The output produced by LIME is two sets of 10,000 × 4 coefficients, accounting for the four features in this data set, one for each model NN1 and NN2. We set the hyperparameter number of samples to 1,000.
The accuracy reached by the Distance model NN1 was 82%. The number of testing instances was 520,000, so we randomly selected 50,000 instances from this data set from those correctly predicted. The data produced by LIME for NN1 is a 50,000 by 5 matrix, given the five features in this set. For number of samples, we used 5,000.
GTE Data Aligns with LIME Explanation Format
This step is represented in Figure 1, in orange, and is concurrent with the step "LIME Explains Predictions". As already noted, the data we produce representing GTEs are a specific way to represent explanations. We can only use them to evaluate a target XAI method after the data representing GTEs are in the same format, or have been processed under the same conditions, as the method's explanations. As a general rule of thumb, this conversion may imply taking the ground-truth data and executing the last steps of the target method.
For LIME, an explanation consists of a line fitted from a target point whose prediction we want to explain and the points (their number given by the hyperparameter number of samples, num sample) that are closest to the target based on cosine similarity. Ultimately, this means that evaluating LIME amounts to determining how realistic the num sample perturbed points closest to the target under cosine similarity are. Consequently, we take the points from the data representing GTEs and execute this same process: we measure the cosine similarity to each target point to be explained and then fit linear functions using the same Ridge regression method, the same regression parameters used in LIME, and the same number of samples as established by the num sample hyperparameter. The result is that, for each data set, we have matrices with the same number of coefficients (one per feature per instance) as produced by each model.
Evaluation Measures Compare LIME Explanations Against GTE

Propose Evaluation Measures
Euclidean distance (ED) We adopt the Euclidean distance (ED), an obvious choice to measure how far apart two points are in n-dimensional spaces. For this reason, we compute, for each instance of each data set and NN model, the ED between the point described in the GTE data and the point described through LIME's explanation coefficients. The range of the ED is [0, +∞); however, we normalize the ED using the maximum and minimum values obtained for each data set and parameters. The goal is to keep the ED's results within the interval [0, 1] for better visualization. The purpose of computing the ED between the GTE data and LIME's explanation coefficients is to measure accuracy as a measure of explanation goodness. Because the ED is a distance, it produces results in the opposite direction of quality. For this reason, later we will compute the complement of the ED, denoted C-of-ED, as its mathematical complement.
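The computation just described, ED followed by min-max normalization and its complement, can be sketched as follows (a minimal version; in the paper the normalization is done per data set and parameter setting):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two coefficient vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def complement_of_normalized_ed(gte_coefs, lime_coefs):
    """Per-instance ED between GTE and LIME coefficient vectors, min-max
    normalized to [0, 1]; C-of-ED = 1 - normalized ED, so higher is better."""
    eds = [euclidean(g, l) for g, l in zip(gte_coefs, lime_coefs)]
    lo, hi = min(eds), max(eds)
    span = hi - lo or 1.0  # avoid division by zero when all EDs are equal
    return [1.0 - (e - lo) / span for e in eds]
```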
Implementation Invariance Sundararajan, Taly, and Yan (2017) proposed that explanation methods should produce identical attributions for networks that produce the same outputs while receiving the same inputs, which are referred to as functionally equivalent. This is why we created models with different architectures for two of our data sets.
Measures of Order
We propose to use the order of the explanation coefficients as measures of accuracy or explanation goodness. In LIME (Ribeiro, Singh, and Guestrin 2016), the explanation coefficients assign importance to each feature in the sense that the feature assigned the highest coefficient is the most important feature in the explanation. This is related to the use of gold features to evaluate explanations, as proposed by Ribeiro, Singh, and Guestrin (2016). When gold features are used, the evaluation often targets whether or not a feature is included in an explanation. In the studies in this paper, we do not discuss the inclusion of features because our data sets have only three, four, and five features each. At these small numbers of features, LIME includes all of them; this way, we do not have to evaluate whether a feature is present, but how important it is considered. Note that the order of features is proposed in particular to evaluate LIME given the format in which LIME presents its explanations, although this is an important aspect to consider when evaluating any explanation method. We define two evaluation measures of order: Second Correct and All Correct. Second Correct indicates whether the second feature in the descending order of importance of an explanation's coefficients is correct, in the sense that it is the same feature ordered as second most important in the GTE data. All Correct indicates that all the features are in the same order as the features in the GTE data. The values for these measures are counted as 1 or 0 for each instance. The comparisons include results for 100 runs, hence the values represent percentages.

Comparing GTEs to LIME Explanations

NN1 vs. NN2

We start by comparing ED across the two different NN architectures, NN1 and NN2, for data sets Loan and Time to assess Implementation Invariance. Both were executed for 100 runs, and thus we compute the average and standard deviation across the 100 runs for each instance.
We use the parametric statistical t-test to measure whether the values differ significantly across the two samples NN1 and NN2. To conduct the t-test, we pose the hypothesis that the true difference between NN1 and NN2 is zero. For p-values greater than 0.1, we cannot reject the hypothesis that the difference between the samples is zero. The p-values computed for the Loan and Time data sets are, respectively, 0.979 and 0.661. These p-values show that, for both Loan and Time, the differences between NN1 and NN2 are not statistically significant. Sundararajan, Taly, and Yan (2017) suggest that explanation methods should satisfy Implementation Invariance for functionally equivalent NNs, meaning that their explanations ought to be the same. As far as the t-test shows, the explanation coefficients are not significantly different, so at this level of specificity they satisfy Implementation Invariance. Given these results, we will use only NN1 for the remainder of the studies.
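The paired t statistic underlying this test can be sketched with the standard library alone (the p-value requires the t distribution's CDF, which is not in the stdlib; in practice one would use a statistics package for that step):

```python
import math
import statistics

def paired_t_statistic(a, b):
    """t statistic for the paired t-test of H0: mean(a - b) = 0,
    to be compared against the t distribution with len(a) - 1 degrees
    of freedom to obtain a p-value."""
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))
```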
Comparing C-of-ED against Second Correct and All Correct

Now we compare the measures of order All Correct and Second Correct against the complement of the ED, C-of-ED. We use the complement because the measures of order run in the opposite direction of the ED. Figure 2 shows the three measures for Loan, Time, and Distance.
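Under the assumption that feature importance is the magnitude of a coefficient (the paper does not state whether signs are discarded before ranking), the two order measures can be sketched as:

```python
def importance_order(coefs):
    # Feature indices sorted by descending importance (coefficient magnitude).
    return sorted(range(len(coefs)), key=lambda i: abs(coefs[i]), reverse=True)

def second_correct(explanation_coefs, gte_coefs):
    # 1 if the second most important feature matches the GTE ranking, else 0.
    return int(importance_order(explanation_coefs)[1] == importance_order(gte_coefs)[1])

def all_correct(explanation_coefs, gte_coefs):
    # 1 if every feature is ranked exactly as in the GTE data, else 0.
    return int(importance_order(explanation_coefs) == importance_order(gte_coefs))
```

Averaging these 0/1 values over instances and runs yields the percentages plotted in Figure 2.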
Visual inspection suggests several observations. First, the red line for C-of-ED shows that the quality of LIME's explanations seems to increase with larger values of the number of samples hyperparameter. Note that the charts for Time and Distance show only 100 samples because showing all 10,000 and 50,000 would make them indecipherable. For this reason, we include Table 2 with the averages over all instances to help the interpretation of the charts. The averages for Loan, Time, and Distance are, respectively, 0.47, 0.76, and 0.82. Recall that the number of samples submitted to LIME for these data sets was, respectively, 25, 1,000, and 5,000. This seems reasonable, as a larger number of samples gives LIME more chances to populate the region of the instance to be explained, thus increasing its chances of success.
Second, the measure All Correct (black line) in Figure 2 represents the number of times an instance has all its feature coefficients in the same order as the GTE's coefficients. It is not surprising that this is the lowest line along the y-axis, as it is more demanding than Second Correct (in green).
Third, even with the limited samples shown in the charts for Time and Distance, we see that the quality of LIME's explanations varies. This deserves a more detailed analysis. With the small data set Loan, we can see, for example, that all three measures agree that Instance 50 is lower in quality than Instance 49. But before we examine the numbers and explore potential reasons for LIME having more difficulty explaining some instances over others, let us scrutinize these measures.
Validating Evaluation Measures
In this section, we investigate whether we have any evidence to support these results by further analyzing what these measures above can tell us about LIME explanations. To do this, we now focus on the Data Set Loan because its small scale allows us to conduct a detailed and comprehensive analysis. Above when we described the experimental design for the Loan data, we mentioned we selected the parameter number of samples to be 25. We now expand the results for two more number of sample values, 5 and 50. Figure 3 shows the measures C-of-ED, Second correct, and All correct for the different hyperparameter number of samples used with the Loan data. We kept the colors we used in earlier charts, making lighter hues for number of samples 5, darker for 50, and kept an intermediary tone for 25. For C-of-ED, the average at 5 number of samples is the highest, 0.60 against 0.47 and 0.41 for 25, and 50. For Second correct, the highest average is 0.35, obtained with 5 and 50 number of samples, against 0.32 with 25. For All correct, the highest is again at 5 number of samples with 0.22 against 0.18 and 0.16 for 25 and 50.
The first observation is that these results do not match the conclusion above that higher numbers of samples lead to more accurate explanations, although that conclusion makes sense technically. Thorough examination of the results for every instance reveals that, at 5 number of samples, the data representing GTEs have a very high proportion of coefficients that are zero. The exact numbers are 18 zeros for coefficient x1, 23 for x2, and 27 for x3, corresponding to 33%, 43%, and 50% of the 54 instances. These high numbers of zeros can be explained by the low number of samples, which makes it hard to fit the linear regression and thus returns zeros. We then examined the number of zeros in the coefficients produced by LIME and noted that, in the 100 runs of 54 instances, the total numbers of zeros are 26, 27, and 29, respectively, for x1, x2, and x3, representing averages of 2.6, 2.7, and 2.9 over all 54 instances (around 5%). Consequently, given that LIME coefficients do not have such an abundance of zeros, the ED will artificially show better results at 5 number of samples because these distances will be shorter: the distance between a nonzero number, which can be positive or negative, and zero tends to be shorter than the distance between two nonzero numbers that may have opposite signs. The zeros also cause problems in computing the measures of order.
Two observations can be made from the identification of these high volumes of zeros. First, the evaluations at 5 number of samples for the Loan data are artificial: the measures reveal them as good, but the numbers are artificial and do not originate from better explanations. Consequently, we have no reason to question that higher numbers of samples lead to better-quality explanations.
Second, these artificially produced numbers do indicate better quality, and all the proposed measures reflected them. This supports the quality of the proposed measures.
Finally, these studies suggest that the best-quality explanations from LIME for the Loan data should be obtained when using 50 number of samples, but the measures do not show this consistently. Consider that 50 number of samples is almost as much as the total number of instances in Data Set Loan. With both the data representing GTEs and LIME using 50 number of samples, what would be the cause of the difference in the coefficients? If we could tell LIME the range and precision of the allowable values for the data to use in the perturbations, with only three features and a NN with 100% accuracy, LIME would only generate perturbations that matched the actual data set; and given that we used the same cosine similarity and the same Ridge regression with the same parameters, LIME's perturbations would all be actual instances. When using 50, it would be 50 out of 54, exactly like the data representing GTEs. Consequently, the only piece of information separating LIME from better (i.e., more accurate) explanations is not knowing the range and precision of allowable values. In practice, in a real-world model that needs explanation, there is nothing preventing us from asking for the actual values allowed in the data to create more accurate perturbations. This demonstrates how the use of data representing ground-truth explanations can lead to analyses that will improve existing XAI methods.
Discussion and Conclusions
The methodology we describe to generate data representing ground-truth explanations (GTEs) poses many challenges. It requires the identification of a data-generation process and needs equations to define classes. The possibility of class overlap, their benefits and limitations, and methods to avoid noise are questions for future work.
The need to align data representing GTEs with the method targeted to evaluate may pose challenges such as the one we faced when setting a low value to a hyperparameter that produced artificially good results. This suggests this approach may be far from being fully automated.
The proposing authors of implementation invariance (Sundararajan, Taly, and Yan 2017) suggest that explanation methods should satisfy it, which means producing the same explanation as long as the NNs are functionally equivalent. If we envisage an explanation in support of accountability reports, then we want to have methods that can distinguish when a different architecture leads to a different explanation. Furthermore, when computing implementation invariance, we face the question of at which level of specificity to compare the explanations from these models. This raises the question of what it means for two explanations to be the same. Consider that the answer will differ depending on how the XAI method formats explanations.
We analyzed the results of our evaluation of LIME and showed how that analysis led us to conclusions about how LIME could be improved. Although not explicitly shown, our proposed method is measurable and verifiable, allowing the comparison of two explanation approaches. Further work examining why a method performs better on a certain type of instance, such as outliers vs. non-outlier instances, can help direct how to improve said methods. Finally, the proposed approach sheds light on how to demonstrate accountability, create benchmarks, and contribute to the progress of the field.
All data and code necessary for reproducibility are available at https://github.com/Rosinaweber/DataRepresentingGroundTruthExplanations/tree/master.
ATF3 Prevents Stress-Induced Hematopoietic Stem Cell Exhaustion
Protection of hematopoietic stem cells (HSCs) from exhaustion and effective regeneration of the HSC pool after bone marrow transplantation or irradiation therapy is an urgent clinical need. Here, we investigated the role of activating transcription factor 3 (ATF3) in steady-state and stress hematopoiesis using conditional knockout mice (Atf3fl/flVav1Cre mice). Deficiency of ATF3 in the hematopoietic system displayed no noticeable effects on hematopoiesis under steady-state conditions. Expression of ATF3 was significantly down-regulated in long-term HSCs (LT-HSCs) after exposure to stresses such as 5-fluorouracil challenge or irradiation. Atf3fl/flVav1Cre mice displayed enhanced proliferation and expansion of LT-HSCs upon short-term chemotherapy or irradiation compared with those in Atf3fl/fl littermate controls; however, the long-term reconstitution capability of LT-HSCs from Atf3fl/flVav1Cre mice was dramatically impaired after a series of bone marrow transplantations. These observations suggest that ATF3 plays an important role in preventing stress-induced exhaustion of HSCs.
INTRODUCTION
Hematopoietic stem cells (HSCs) possess the capability to self-renew and produce mature cells to replenish the blood system for lifelong hematopoiesis (Arai et al., 2004; Yoshihara et al., 2007; Orkin and Zon, 2008). Under steady-state conditions, HSCs predominantly exist in a quiescent state (Wilson et al., 2008). Under hematopoietic stress conditions such as infections, chemotherapy, or transplantation, HSCs exit quiescence, rapidly proliferate, and differentiate into mature cells to replenish blood cell numbers through hematopoiesis (Essers et al., 2009; Baldridge et al., 2010). A defect in the capability of HSCs to exit quiescence results in insufficient production of blood cells, whereas failure to re-enter quiescence after stress causes functional exhaustion of HSCs (Hou et al., 2015). Therefore, the homeostasis of HSCs depends on maintaining the balance between quiescence and activation. The mechanism by which HSCs orchestrate this balance remains unclear in the field of stem cell biology. Some transcription factors that participate in metabolism, the cell cycle, and epigenetic modifications have been demonstrated to regulate HSC function (Cheung and Rando, 2013). In addition to cell-intrinsic mechanisms, HSCs are tightly regulated by extrinsic mechanisms, such as signaling provided by mesenchymal stem cells, non-myelinating Schwann cells, and megakaryocytes residing in the bone marrow microenvironment (Mendelson and Frenette, 2014; Zhao et al., 2014). Activating transcription factor 3 (ATF3) is a basic-region leucine zipper transcription factor belonging to the cyclic AMP response element binding family. ATF3 plays an important role in cellular responses to stresses by dictating the expression of genes involved in the cell cycle, DNA repair, or cell survival (Thompson et al., 2009). Its importance in defending against invading pathogens and suppressing inflammatory responses has been well documented (Hai et al., 2010).
ATF3 has been identified as a negative regulator of Toll-like receptor signaling in macrophages (Hoetzenecker et al., 2011). Mice with ATF3 deficiency displayed enhanced susceptibility to sepsis induced by endotoxin shock (Lai et al., 2013). ATF3-deficient mice showed aggravated allergic airway inflammation due to higher Th2 responses in the lung (Gilchrist et al., 2008). In the intestine, induction of ATF3 can prevent the development of colitis by facilitating the development of follicular helper T cells and microbiota homeostasis (Cao et al., 2019, 2020). These studies indicate that the induction of ATF3 through environmental stimuli represents a protective mechanism in the host to avoid inflammation. However, the role of ATF3 in stress hematopoiesis remains largely unknown. Here, we showed that ATF3 was dispensable for normal hematopoiesis, and that stress induced dramatic down-regulation of ATF3 in HSCs. ATF3 deficiency in the hematopoietic system enhanced the proliferation of LT-HSCs after short-term challenges with 5-fluorouracil (5-FU) or irradiation, which led to functional exhaustion of HSCs after long-term bone marrow transplantation. These observations indicate that ATF3 plays a protective role in stress hematopoiesis to maintain HSC self-renewal.
Animals
Atf3 flox/flox mice were generated at the Nanjing Biomedical Research Institute of Nanjing University (Jiangsu, China) as described previously (Cao et al., 2019), and then crossed with Vav1-Cre transgenic mice (Jackson Laboratory, Bar Harbor, ME, United States) to produce conditional knockout mice (Atf3 fl/fl Vav1Cre). Their littermates (Atf3 fl/fl) were used as controls. C57BL/6SJL (CD45.1+) mice were kindly provided by Dr. Haikun Wang (Chinese Academy of Sciences, Shanghai, China). The recipient mice used in bone marrow transplantation assays were CD45.1/45.2+ heterozygotes, obtained by crossing wild-type (WT) mice (C57BL/6 background, CD45.2+) with C57BL/6SJL (CD45.1+) mice. All mice were 8-10 weeks old and maintained in a specific pathogen-free animal facility. All animal experiments in this research were conducted according to protocols approved by The Animal Care and Ethics Committee of Tianjin Medical University.
5-FU Treatment and Sublethal Irradiation
Atf3 fl/fl and Atf3 fl/fl Vav1Cre mice were intraperitoneally injected (i.p.) with a single dose of 5-FU (150 mg/kg; Sigma-Aldrich, St. Louis, MO, United States). To administer a sublethal dose of radiation, the mice were irradiated with 5 Gy total body radiation. Blood cells were counted, and hematopoietic stem and progenitor cells (HSPCs) were analyzed by flow cytometry.
LPS Treatment
Mice were injected i.p. with phosphate-buffered saline or 0.25 mg/kg lipopolysaccharide (LPS; Escherichia coli 0111:B4; Sigma Aldrich), and the dynamic expression of ATF3 in LSKs or LT-HSCs was determined by flow cytometry or quantitative reverse-transcription polymerase chain reaction (qRT-PCR).
Cell Cycle Analysis and BrdU Incorporation Assay
The cell cycle was analyzed by Ki67 (1:100 in BD Perm/Wash buffer for 30 min) and DAPI (2 mg/mL for 10 min) staining using Cytofix/Cytoperm Fixation/Permeabilization Kit (BD Biosciences). For the BrdU incorporation assay (Hu et al., 2018), the mice were injected i.p. with 100 mg/kg BrdU (BD Biosciences) at 12 h before harvesting the BM. BrdU Flow kit (BD Biosciences) was used according to the manufacturer's protocol.
Complete Blood Cell Count
A hematology system (Hemavet 950FS; Drew Scientific, Miami Lakes, FL, United States) was used to analyze the complete blood cell count from the total blood collected from the mice.
qRT-PCR
Total RNA was isolated using TRIzol (Invitrogen, Carlsbad, CA, United States) and reverse-transcribed with a synthesis kit (Takara, Shiga, Japan). qRT-PCR was performed with SYBR Green (TaKaRa). The primer sequences are listed in Supplementary Table 1.
Colony-Forming Unit (CFU) Assay
A total of 1,000 LSKs were sorted and plated in 1 mL of mouse MethoCult 3434 (Stem Cell Technologies, Vancouver, Canada) and then cultured at 37 °C. Colonies were counted with an inverted microscope on day 14.
Statistical Analysis
Data are presented as the means ± standard deviation from two independent experiments. Differences between two groups were determined by unpaired Student's t tests, and multiple groups were assessed by one-way analysis of variance with Tukey-Kramer post hoc analysis. Mann-Whitney U tests were used when the criteria for normal distribution were not satisfied. The survival curve was assessed by a log-rank (Mantel-Cox) test. All statistical analyses were performed with GraphPad Prism 8.0 software (GraphPad Software, Inc., CA, United States). Statistical significance is indicated by *P < 0.05; **P < 0.01; and ***P < 0.001.
ATF3 Is Dispensable for Steady-State Hematopoiesis
To explore the role of ATF3 in hematopoiesis, ATF3 in the hematopoietic system was specifically deleted by cross-breeding Atf3 flox/flox mice with Vav1-Cre mice to generate conditional knockout mice, named Atf3 fl/fl Vav1Cre (Supplementary Figure 1A). The frequencies of HSPCs in Atf3 fl/fl Vav1Cre mice were determined by flow cytometry. The frequencies of distinct HSPCs, including LSKs (Lin−c-Kit+Sca-1+), long-term HSCs (LT-HSCs), short-term HSCs (ST-HSCs), and MPPs, were comparable between Atf3 fl/fl Vav1Cre mice and Atf3 fl/fl littermate controls (Figure 1A and Supplementary Figure 1B). Cell cycle analysis revealed no changes in the proliferation of LT-HSCs under steady-state conditions (Figure 1B). The frequencies of lineage-determined progenitors, including common myeloid progenitors (CMPs), granulocyte monocyte progenitors (GMPs), megakaryocyte erythroid progenitors (MEPs), and common lymphoid progenitors (CLPs), were also comparable between Atf3 fl/fl Vav1Cre and Atf3 fl/fl mice (Figure 1C). The complete blood cell counts, including white blood cells (WBC), red blood cells (RBC), platelets (Plt), and hemoglobin (Hb), did not differ between Atf3 fl/fl Vav1Cre and Atf3 fl/fl littermates (Figure 1D). These results indicate that ATF3 is dispensable for maintaining steady-state hematopoiesis.
ATF3 Is Down-Regulated in HSCs Upon Stress
To determine whether ATF3 plays a role in stress-induced hematopoiesis, we evaluated its expression in HSPCs upon hematopoietic stress. WT mice were injected with a single dose of the cytotoxic reagent 5-FU (150 mg/kg body weight), and the dynamic expression of ATF3 was evaluated over the following 7 days. ATF3 was substantially down-regulated in both LSKs and LT-HSCs at day 2 after 5-FU injection and gradually recovered thereafter at both the mRNA (Figure 2A) and protein (Figure 2B) levels. This dynamic expression pattern of ATF3 was not observed in lineage-committed myeloid progenitors (MPs) (Supplementary Figure 2). Administration of lipopolysaccharide, another cytotoxic reagent, induced similar effects (Figures 2C,D). For further confirmation, WT mice were irradiated with a sublethal dose of 5 Gy, and LSKs or LT-HSCs were isolated after 24 h. The mRNA expression of Atf3 was lower in irradiated mice than in non-irradiated controls (Figure 2E). Furthermore, 1 × 10 3 LT-HSCs from WT mice (CD45.2+) were mixed with WT BM cells (1 × 10 6, CD45.1+), followed by transplantation into lethally irradiated congenic recipients (CD45.1/2+). LSKs or LT-HSCs were isolated after 24 h, and Atf3 was found to be substantially down-regulated in cells from transplanted recipients (Figure 2F). These observations suggest that stress-induced proliferation of HSCs is accompanied by down-regulation of ATF3 in HSCs.
Loss of ATF3 Facilitates HSC Proliferation and Expansion in Response to 5-FU Treatment
We next examined whether a functional link exists between the decreased expression of ATF3 and HSC functionality upon hematopoietic stress. It is known that 5-FU injury causes rapid loss of cycling hematopoietic cells initially, followed by recovery of hematopoietic homeostasis (Xu et al., 2017). As expected, depletion of LSKs and LT-HSCs before day 3 post 5-FU challenge was observed, followed by recovery from day 6 that peaked around day 12 in both Atf3 fl/fl Vav1Cre and Atf3 fl/fl mice (Figure 3A; gating strategy shown in Supplementary Figure 3A). However, the absolute numbers of LSKs and LT-HSCs from Atf3 fl/fl Vav1Cre mice were significantly higher than those from Atf3 fl/fl littermates during the 5-FU challenge (Figure 3A). According to Ki67 and DAPI staining, LT-HSCs from Atf3 fl/fl Vav1Cre mice displayed much lower levels of quiescence (G0 phase) than Atf3 fl/fl controls; in contrast, HSCs in S/G2/M phase (proliferation) were significantly more frequent in Atf3 fl/fl Vav1Cre mice (Figure 3B). The BrdU incorporation assay confirmed the higher proliferative capability of LT-HSCs in Atf3 fl/fl Vav1Cre mice (Figure 3C). This appreciable proliferative potential of LT-HSCs led to increased levels of WBC in Atf3 fl/fl Vav1Cre mice during recovery after 5-FU treatment (Figure 3D). Alternatively, we used sublethal irradiation as a non-pharmacological toxic insult. In agreement with the observations from 5-FU treatment, Atf3 fl/fl Vav1Cre mice displayed higher frequencies of LT-HSCs with enhanced proliferation (Supplementary Figures 3B-D), which led to faster recovery of hematopoiesis upon irradiation (Supplementary Figure 3E). Furthermore, BM cells from Atf3 fl/fl or Atf3 fl/fl Vav1Cre mice were cultured in medium containing stem cell factor (SCF), Flk-2/Flt3 ligand (FLT3L), thrombopoietin (TPO), and IL-6. This combination of four cytokines was reported to promote the proliferation of multipotent hematopoietic progenitor cells in vitro (Zhang and Lodish, 2008).
It was found that deficiency of ATF3 resulted in a significant increase in the percentages and proliferation of HSPCs (Supplementary Figures 3F,G).

[Figure legend, panels (E,F)] Atf3 mRNA level determined by qRT-PCR in WT LSKs and LT-HSCs isolated 24 h after irradiation (IR, E, n = 6) and transplantation (Trans, F, n = 3). For irradiation, mice were irradiated at a sublethal dose of 5 Gy; control mice were not irradiated. For transplantation, 1 × 10³ LT-HSCs from WT mice (CD45.2+) were mixed with WT BM cells (1 × 10⁶, CD45.1+) and transplanted into lethally irradiated congenic recipients (CD45.1/2+); control mice received no donor cells. Data are representative of two independent experiments. Error bars show the mean ± SD. P values were determined using two-sided Student's t tests (E,F) and one-way analysis of variance followed by Tukey-Kramer multiple-comparisons tests (A-D). *P < 0.05; **P < 0.01.

These observations
indicate that ATF3 deficiency facilitates the proliferation of HSCs under short-term stress.
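The Ki67/DAPI analysis above distinguishes quiescent (G0) HSCs from cycling ones by combining a proliferation marker with DNA content. As a rough illustration of that gating logic, the sketch below classifies events from (Ki67, DAPI) intensity pairs; the thresholds and toy events are hypothetical, not values from this study's cytometry data.

```python
# Hypothetical Ki67/DAPI gating sketch: thresholds and events are illustrative.

def classify_cell(ki67, dapi, ki67_cutoff=1000.0, dapi_2n=50000.0):
    """Assign a cell-cycle phase from Ki67 and DAPI intensities.

    G0:      Ki67-negative with 2N DNA content (quiescent)
    G1:      Ki67-positive with 2N DNA content
    S/G2/M:  Ki67-positive with >2N DNA content (proliferating)
    """
    if dapi > dapi_2n * 1.5:                 # >2N DNA content
        return "S/G2/M"
    return "G0" if ki67 < ki67_cutoff else "G1"

def phase_fractions(events):
    """Return the fraction of events falling in each phase."""
    counts = {"G0": 0, "G1": 0, "S/G2/M": 0}
    for ki67, dapi in events:
        counts[classify_cell(ki67, dapi)] += 1
    n = len(events)
    return {phase: c / n for phase, c in counts.items()}

# Toy events: (Ki67 intensity, DAPI intensity)
events = [(200, 48000), (300, 52000), (5000, 50000), (4000, 90000)]
print(phase_fractions(events))
```

Real gates are of course drawn on the full bivariate distribution rather than with fixed cutoffs; the sketch only mirrors the G0 / G1 / S-G2-M partition used in the text.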
Loss of ATF3 Impairs Long-Term Self-Renewal of HSCs
Enhanced proliferation of HSCs enables fast recovery during short-term stress hematopoiesis but can cause exhaustion of the HSC pool after long-term exposure (Sato et al., 2009; Walter et al., 2015). We next evaluated the self-renewal of HSCs following serial competitive BM transplantation (Figure 4A). A total of 1 × 10³ LT-HSCs from Atf3 fl/fl Vav1Cre or Atf3 fl/fl animals (CD45.2+) were mixed with WT BM cells (1 × 10⁶, CD45.1+), followed by transplantation into lethally irradiated congenic recipients (CD45.1/2+). Secondary and tertiary transplantations were performed 16 weeks after the previous transplantation (1 × 10⁶ BM cells). Evaluation of the peripheral blood revealed a significant reduction in the chimerism of donor cells from Atf3 fl/fl Vav1Cre mice when compared with those from Atf3 fl/fl controls during serial transplantation (Figure 4A). Analysis of distinct lineages showed that ATF3 deletion did not affect the overall proportions between myeloid and lymphoid lineages (Figure 4B). BM analysis showed that the absolute numbers of LT-HSCs derived from Atf3 fl/fl Vav1Cre donor mice were markedly decreased when compared to those from Atf3 fl/fl donor controls, most pronouncedly during tertiary transplantation (Figure 4C). Cell cycle and BrdU staining analysis showed that LT-HSCs from Atf3 fl/fl Vav1Cre mice displayed a higher proliferative rate than their Atf3 fl/fl controls after the tertiary transplantation (Figures 4D,E). Meanwhile, Kaplan-Meier survival analysis revealed that recipients transplanted with Atf3 fl/fl Vav1Cre LT-HSCs displayed a significantly lower survival rate than those transplanted with Atf3 fl/fl LT-HSCs (Figure 4F). These results indicate that Atf3 fl/fl Vav1Cre HSCs tend to become exhausted after stress and therefore cannot sustain hematopoiesis after BM transplantation. Furthermore, LSKs from Atf3 fl/fl Vav1Cre mice displayed homing ability similar to Atf3 fl/fl controls when analyzed 16 h after transplantation (Figure 4G), suggesting that ATF3 does not affect the migration and homing of HSCs. In addition, we performed in vitro CFU experiments by sorting LSKs from donors (Atf3 fl/fl or Atf3 fl/fl Vav1Cre) after secondary BM transplantation. The in vitro reconstitution ability of ATF3-deficient HSCs was significantly reduced after long-term stress hematopoiesis (Figure 4H). Taken together, these observations demonstrate that loss of ATF3 impairs the long-term self-renewal of HSCs.

FIGURE 3 | Loss of ATF3 leads to increased HSC proliferation and expansion in response to 5-FU treatment. (A) Representative FACS plots of LSKs and LT-HSCs from Atf3 fl/fl and Atf3 fl/fl Vav1Cre mice at 13 days after 5-FU treatment (left). The absolute numbers of LSKs and LT-HSCs at the indicated time points after 5-FU injection were determined by flow cytometry (right, n = 6). (B) Representative FACS analysis of the cell cycle of LT-HSCs (CD150+CD48− LSK) from Atf3 fl/fl and Atf3 fl/fl Vav1Cre mice at day 9 after 5-FU treatment (n = 6). (C) The percentages of BrdU+ cells in LT-HSCs (CD150+CD48− LSK) at the indicated time points after 5-FU injection were determined by flow cytometry (n = 6-7). (D) Complete blood cell counts were analyzed with a Hemavet 950FS hematology system after 5-FU injection (n = 5-10). Data are representative of two independent experiments. Error bars show mean ± SD. P values were determined using two-sided Student's t tests. *P < 0.05; **P < 0.01.
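In the competitive transplantation scheme above, donor contribution is read out as the fraction of CD45.2+ donor cells among donor plus competitor (CD45.1+) cells in peripheral blood. A minimal sketch of that chimerism calculation, with hypothetical counts:

```python
def donor_chimerism(donor_count, competitor_count):
    """Percent donor-derived (CD45.2+) cells among donor (CD45.2+) plus
    competitor (CD45.1+) events; recipient (CD45.1/2+) cells are assumed
    excluded upstream by gating."""
    total = donor_count + competitor_count
    return 100.0 * donor_count / total if total else 0.0

# Hypothetical flow-cytometry counts from one peripheral-blood sample
print(donor_chimerism(3500, 6500))  # → 35.0
```

Tracking this percentage at serial bleeds during each round of transplantation yields the chimerism curves shown in Figure 4A.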
DISCUSSION
The maintenance of HSC homeostasis and function is crucial for replenishing somatic cells that are lost under physiological or stress conditions (Ito and Suda, 2014). ATF3 is a stress-responsive gene, and preliminary work suggested ATF3 as a novel regulator of HSC development (McKinney-Freeman et al., 2012). However, the physiological role of ATF3 in HSC biology remained unknown. Here, using mice with conditional knockout of ATF3 in the hematopoietic system (Atf3 fl/fl Vav1Cre), we demonstrate that ATF3 is dispensable for hematopoiesis under steady-state conditions but is required for maintaining HSC quiescence and self-renewal under stress hematopoiesis.

FIGURE 4 | Loss of ATF3 impairs the long-term self-renewal of HSCs. (A) A total of 1 × 10³ LT-HSCs from Atf3 fl/fl and Atf3 fl/fl Vav1Cre mice (CD45.2+) was mixed with competitor BM cells (CD45.1+) and injected into lethally irradiated recipients (CD45.1/2+). Four months later, the chimeric BM cells were re-transplanted into secondary or tertiary recipients (CD45.1/2+) (1°/2°/3° denote primary, secondary, and tertiary transplantations, n = 6). Percentages of donor-derived cells in the peripheral blood (PB) were analyzed by flow cytometry at the indicated time points during serial BMT (n = 6). (B) Analysis of distinct lineages (CD3+ T cells, B220+ B cells, and Gr-1+ myeloid cells) in PB from recipients 4 months after transplantation (n = 6). (C) Absolute numbers of donor-derived LT-HSCs in the BM from recipients 4 months after transplantation (n = 6). (D) Percentages of donor-derived LT-HSCs in the BM in different cell cycle stages from recipients 4 months after 3° transplantation (n = 6). (E) The percentages of BrdU+ LT-HSCs in the BM from recipients 4 months after 3° transplantation (n = 6). (F) Survival curve of Atf3 fl/fl and Atf3 fl/fl Vav1Cre recipient mice after transplantation (n = 28). (G) Homing analysis of CFSE+ LSKs from Atf3 fl/fl and Atf3 fl/fl Vav1Cre BM cells 16 h after transplantation (n = 6). (H) At 4 months after 2° BMT, 1,000 sorted LSKs from recipients were seeded in a colony-forming unit assay (n = 6). Data are representative of two independent experiments. Error bars show the mean ± SD. P values were determined using two-sided Student's t tests. *P < 0.05; **P < 0.01.
In the present study, the absence of ATF3 did not affect homeostatic hematopoiesis or the quiescence of HSCs, but its expression was downregulated after stress stimuli such as 5-FU, LPS, or irradiation. This downregulation was mainly observed in primitive HSCs, indicating a potential role in stress hematopoiesis. Furthermore, deletion of ATF3 in the hematopoietic system significantly enhanced the proliferation of LT-HSCs, which resulted in faster recovery of hematopoiesis and provided a benefit during short-term 5-FU challenge or irradiation (Figure 3). However, it is known that the maintenance of quiescence is critical for HSC self-renewal, and defective HSC quiescence often results in HSC exhaustion during long-term regeneration (Pietras et al., 2011). The proliferative advantage of LT-HSCs from Atf3 fl/fl Vav1Cre mice caused impaired self-renewal during long-term hematopoietic stress, as shown in the competitive repopulation assays. These observations in ATF3-deficient mice are similar to results from Sirt1-knockout mice (Singh et al., 2013), as well as mice deficient in Ptch2, under long-term regeneration (Klein et al., 2016). These genes collectively play important roles in the regulation of HSC quiescence and self-renewal under stress hematopoiesis.
An important limitation of this study is the lack of mechanistic data. To elucidate the molecular consequences of ATF3 loss in HSC regulation, we plan to perform genome-wide expression analysis by RNA sequencing of purified Atf3 fl/fl and Atf3 fl/fl Vav1Cre LT-HSCs derived from short- and long-term hematopoietic stress, to further determine the role of ATF3 in the gene regulatory networks governing HSC fate during stress hematopoiesis. Meanwhile, the limited number of HSCs in a single human cord blood unit has been an obstacle for clinical applications such as HSC transplantation (Walasek et al., 2012). Our observation that loss of ATF3 decreases the regenerative function of HSCs prompted us to test whether overexpression of ATF3 could facilitate the regeneration of human cord blood HSCs. We therefore plan to analyze BM cells from recipient NSG mice 16 weeks post transplantation and to measure the functional HSC frequency after in vitro expansion by transplanting GFP+ cells sorted from human cord blood CD34+ cells infected with control or ATF3-overexpressing lentivirus.
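Functional HSC frequency from transplantation of graded cell doses is conventionally estimated with single-hit Poisson statistics (the basis of ELDA-type limiting-dilution analyses): the fraction of non-engrafted recipients at dose d satisfies P(neg) = exp(-f·d). A per-dose sketch of that estimate, with hypothetical numbers rather than data from this study:

```python
import math

def hsc_frequency(dose, n_recipients, n_negative):
    """Single-hit Poisson estimate of HSC frequency at one cell dose:
    P(no engraftment) = exp(-f * dose)  =>  f = -ln(F_neg) / dose."""
    f_neg = n_negative / n_recipients
    if not 0.0 < f_neg < 1.0:
        raise ValueError("need 0 < fraction of negative recipients < 1")
    return -math.log(f_neg) / dose

# Hypothetical readout: 10 recipients at 3,000 cells each, 6 without engraftment
freq = hsc_frequency(3000, 10, 6)
print(f"~1 functional HSC per {1 / freq:.0f} cells")
```

A full limiting-dilution experiment would fit f jointly across several doses with confidence intervals; this single-dose form only illustrates the underlying model.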
In summary, this study revealed ATF3 as an important regulator of HSC self-renewal under stress, which may have therapeutic value for hematopoietic repopulation after chemotherapy or bone marrow transplantation.
DATA AVAILABILITY STATEMENT
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
ETHICS STATEMENT
All animal experiments in this research were conducted according to protocols permitted by The Animal Care and Ethics Committee of Tianjin Medical University.
AUTHOR CONTRIBUTIONS
YL and YC designed and performed most experiments and analyzed the data. XD participated in the experiments. JZ conceptualized, supervised, interpreted the experiments, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
|
2020-10-27T13:06:10.539Z
|
2020-10-27T00:00:00.000
|
{
"year": 2020,
"sha1": "ca7957387d112bb0c373f509b99364d2599587e0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2020.585771/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca7957387d112bb0c373f509b99364d2599587e0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258488396
|
pes2o/s2orc
|
v3-fos-license
|
Role of consolidative thoracic radiation in extensive-stage small-cell lung cancer with first-line chemoimmunotherapy: a retrospective study from a single cancer center
Objective To investigate the role of consolidative thoracic radiation (TRT) in extensive-stage small-cell lung cancer (ES-SCLC) receiving first-line chemo-immunotherapy followed by immunotherapy maintenance. Patients and methods Outcomes of patients without disease progression after first-line chemotherapy were retrospectively reviewed (January 2020 to December 2021). Based on whether TRT was given, patients were allocated to a TRT group or a non-TRT group. Progression-free survival (PFS), overall survival (OS) and local recurrence-free survival (LRFS) were calculated by the Kaplan–Meier method and compared by the log-rank test. Results Of 100 patients, 47 received TRT and 53 did not. The median follow-up was 20.3 months. The median PFS and OS in the TRT group were 9.1 months and 21.8 months, versus 8.8 months (p = 0.93) and 24.3 months (p = 0.63), respectively, in the non-TRT group. The median LRFS in the TRT group was not reached, but was significantly longer than the 10.8 months in the non-TRT group (HR = 0.27, p < 0.01). Second-line chemotherapy significantly prolonged survival compared with a chemotherapy-free approach (mOS: 24.5 vs. 21.4 months, p = 0.026). The subgroup analysis showed a trend toward benefit from TRT in patients with brain metastases (21.8 versus 13.7 months, HR 0.61, p = 0.38), whereas patients with liver metastases did not benefit. Of 47 patients with TRT, only 10.6% experienced grade 3 radiation-induced pneumonitis, while no grade 4 or 5 adverse events occurred. Conclusion Consolidative TRT during immunotherapy maintenance following first-line chemo-immunotherapy did not prolong OS or PFS but was associated with improved LRFS in ES-SCLC. Supplementary Information The online version contains supplementary material available at 10.1007/s12672-023-00666-7.
Introduction
Small-cell lung cancer (SCLC) is notorious for its poor survival, and extensive-stage SCLC (ES-SCLC) carries an even worse prognosis, with a 5-year survival rate below 3.0%. In the past three decades, platinum-based chemotherapy has dominated the treatment of ES-SCLC, but a median survival of only about 10 months overshadows its high response rate of 70-90% [1,2]. How to improve survival in ES-SCLC is an urgent issue.
Immunotherapy, which generally refers to immune checkpoint inhibitors (ICIs), has broken the treatment deadlock for ES-SCLC and brought new light to practice. The IMpower133 and CASPIAN trials illustrated that ICIs added to chemotherapy significantly improved median survival to more than 1 year [3,4]. A growing number of studies have also shown that the addition of ICIs to chemotherapy prolongs survival [5,6]. However, the combined strategy has reached a bottleneck that is difficult to break through, with median survival ranging from 12.3 to 15.4 months and a 1-year survival rate of 51.9% to 60.7% [6][7][8]. Furthermore, worse median survival of 8.85 to 11.0 months in the real world makes the combination of chemotherapy and immunotherapy even less promising [9][10][11][12]. The need to prolong the survival of ES-SCLC is still far from being met.
Thoracic radiation (TRT) undoubtedly benefited local control and survival of ES-SCLC in the chemotherapy era [13][14][15]. In our retrospective study, TRT in addition to chemotherapy improved median survival from 9.3 months to 17 months (p = 0.014) [16]. Radiotherapy (RT), previously considered a purely local therapy, also acts as an immunomodulatory factor that improves the immune microenvironment and releases tumor-associated antigens. Radioimmunotherapy brings more hope, but also more open questions, for instance the toxicity of radioimmunotherapy, the timing window of RT relative to ICIs, and the fractionation of RT. Although the reported toxicity of TRT plus immunotherapy was controllable [17], real-world data are still lacking. It also remains unknown whether TRT could further enhance the benefit of ICI maintenance on the outcomes of ES-SCLC.
We sought to evaluate progression-free survival (PFS) and overall survival (OS) outcomes in ES-SCLC treated with first-line chemo-immunotherapy followed by ICI maintenance, with or without TRT during the maintenance period.
Patients
Between January 2020 and December 2021, patients with treatment-naïve ES-SCLC diagnosed by histopathology or cytology were retrospectively collected at Shandong Cancer Hospital and Institute. The medical records had to contain a whole-body systemic evaluation before treatment, including cervical (ultrasound examination was also eligible), chest and abdomen contrast-enhanced computed tomography (CT), brain contrast-enhanced MRI or CT, or positron emission tomography (PET)-CT (not routinely used in our cancer center). All patients were treated with chemotherapy concomitant with ICIs followed by ICI maintenance (with or without thoracic radiation). Patients with disease progression, per the RECIST criteria, after completing 4 cycles of chemotherapy were excluded. Depending on whether TRT was given, patients were divided into a TRT group and a non-TRT group.
All procedures performed in studies involving human participants were in accordance with the ethical standards of Shandong Cancer Hospital and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by the appropriate institutional review board, and the requirement for informed consent was waived.
Treatment approach
Platinum-based chemotherapy was used in this study. Thoracic radiation was performed using an intensity-modulated radiation therapy technique with 6-MV photons. The gross target volume included the residual thoracic disease and positive lymph nodes, and the clinical target volume included the gross target volume plus an 8-mm margin and previously involved nodal regions. Concomitant immunotherapy usually started on the first day of every treatment cycle and before chemotherapy agents. Immunotherapy maintenance was conducted every 21 days until disease progression or intolerable toxicity.
Statistical analyses
Demographics and clinical characteristics were compared between patients given TRT and non-TRT. Two-sample t-tests or Wilcoxon rank-sum tests were used to evaluate differences in continuous variables, and chi-square or Fisher's exact tests for categorical variables, between the two groups. Adverse events were evaluated according to the criteria of CTCAE v5.0. PFS (PFS2) was measured from the date of first-line (second-line) systemic therapy to disease progression or death from any cause, or to the date of censoring. OS was measured from the first day of chemotherapy to the date of death or last follow-up. Locoregional recurrence-free survival (LRFS) was defined as the duration between the date of first-line chemotherapy and the date of locoregional recurrence or death, whichever occurred first [18]. Locoregional disease generally refers to the local tumor and local-regional lymph nodes, specifically lesions within the radiation field in this study. The Kaplan-Meier method was used to evaluate PFS, OS and LRFS for the two groups, and comparisons were made with the log-rank test. P < 0.05 was considered statistically significant. Statistical analyses were conducted with SPSS 23.0 (SPSS, IBM Corp., Armonk, NY).
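The Kaplan-Meier estimates and log-rank comparisons described above can be reproduced outside SPSS. As an illustration, the pure-Python sketch below implements both directly from (time, event) pairs, where event = 1 marks progression/death and 0 marks censoring; the toy cohorts are hypothetical, not patient data from this study.

```python
import math
from itertools import groupby

def km_curve(times, events):
    """Kaplan-Meier estimate: returns [(t, S(t))] steps at event times."""
    data = sorted(zip(times, events))
    n_at_risk, surv, curve = len(data), 1.0, []
    for t, grp in groupby(data, key=lambda pair: pair[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)        # events at time t
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(grp)             # events + censorings leave the risk set
    return curve

def logrank_p(times_a, events_a, times_b, events_b):
    """Two-sample log-rank test; returns (chi-square statistic, p-value)."""
    pooled = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
             [(t, e, 1) for t, e in zip(times_b, events_b)]
    obs = exp = var = 0.0
    for t in sorted({t for t, e, _ in pooled if e}):
        n = sum(1 for ti, _, _ in pooled if ti >= t)              # at risk overall
        n1 = sum(1 for ti, _, g in pooled if ti >= t and g == 0)  # at risk, group A
        d = sum(1 for ti, e, _ in pooled if ti == t and e)        # events at t
        d1 = sum(1 for ti, e, g in pooled if ti == t and e and g == 0)
        obs += d1
        exp += d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    if var == 0:
        return 0.0, 1.0
    chi2 = (obs - exp) ** 2 / var
    return chi2, math.erfc(math.sqrt(chi2 / 2))  # chi-square survival fn, 1 df

# Toy hypothetical cohorts: (months, 1 = progression/death, 0 = censored)
a_t, a_e = [6, 7, 10, 15, 19, 25], [1, 0, 1, 1, 0, 1]
b_t, b_e = [1, 2, 3, 4, 5, 8], [1, 1, 1, 1, 1, 0]
chi2, p = logrank_p(a_t, a_e, b_t, b_e)
print(km_curve(a_t, a_e), chi2, p)
```

In practice a library such as lifelines or R's survival package would be used; the hand-rolled version only shows the arithmetic behind the p-values reported in the Results.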
Univariate and multivariate analyses on overall survival
To further explore which factors contributed most to survival, univariate and multivariate analyses were conducted. Univariate analysis showed that KPS ≥ 80 and liver metastases were correlated with OS (Table 3). These associations with N
Impact of second-line therapy on survival
Considering the potential contribution of second-line therapy to overall survival, we performed a further analysis only in patients receiving second-line therapy. The median PFS2 in the TRT group was slightly longer than that in the non-TRT group, but without a significant difference (7.8 vs. 7.0 months, p = 0.35; Supplementary Fig. 4). When stratified by the main treatment modes, survival of patients receiving second-line chemotherapy was significantly longer than that of chemotherapy-free patients (mOS: 24.5 vs. 21.4 months, p = 0.026; Supplementary Fig. 5). However, survival of patients receiving immunotherapy or anti-angiogenic therapy was not statistically different from that of patients who did not receive the corresponding treatment (not detailed).
Treatment adverse events
The grades and incidences of radiation-induced pneumonitis (RIP) are listed in Supplementary Table 1. The incidence of grade ≥ 2 RIP was 29.8%, while only 10.6% of patients underwent grade 3 RIP, and no patient experienced grade 4 or worse RIP. In addition, grade ≥ 3 hematological toxicity was worse in the TRT group than in the non-TRT group (44.7% vs. 26.4%, p = 0.04; Supplementary Table 2). No grade 5 hematological toxicity occurred. The most frequent hematological toxicity was neutropenia (36.2%), followed by thrombocytopenia (6.4%), in the TRT group, compared to 24.5% and 1.9%, respectively, in the non-TRT group. In addition, no grade ≥ 3 treatment-related cardiac events were observed in the present study.
Discussion
Our study indicated that TRT was correlated with improved LRFS compared with chemo-immunotherapy alone but failed to prolong PFS and OS. Further subgroup analyses indicated that TRT showed a trend toward survival benefit in patients with brain metastases. In addition, TRT-induced RIP was acceptable, with no grade ≥ 4 pulmonary toxicities. Despite the lack of survival benefit, TRT remains a potentially potent strategy for ES-SCLC, because the remarkable LRFS could translate into survival benefits in selected settings.
The magnitude of survival benefit seen with consolidative TRT during ICI maintenance was not significant compared with ICI maintenance without TRT in ES-SCLC, suggesting that the administration of TRT as consolidative therapy needs to be further investigated in certain subgroups. The survival benefit of TRT reported in a phase III randomized study in the era of two-dimensional radiotherapy ignited the study of TRT in ES-SCLC; however, TRT-mediated local control did not show an advantage there [19]. The CREST study indicated that hypofractionated TRT in ES-SCLC patients who responded to first-line chemotherapy improved 2-year OS (13% vs. 3%, p = 0.004) and local control (19.8% vs. 46.0%) compared with no TRT [13]. Given the potent efficacy of TRT for ES-SCLC in the chemotherapy era, it seems even more pivotal to investigate its role in the immunotherapy era. However, no survival benefit from TRT was observed in our study, although TRT-mediated LRFS was significantly prolonged. This may be because the survival benefit from TRT was weakened by the long ICI-induced survival; in addition, second-line regimens may have contributed to the prolonged survival. Third, a higher proportion of distant progression may attenuate the benefits derived from TRT. However, the subgroups that can benefit from TRT are still hard to identify. The specific metastatic organ or metastatic load seems to have an impact on survival. Fewer metastases or oligometastasis seem to contribute to better survival, but patients with liver metastases did not benefit [20,21]. The advantage of maintained immunotherapy on survival does not seem to hold for cases with liver metastases. Liver metastases (OR 5.69, p = 0.069) showed a trend as a prognostic factor associated with reaching the maintenance phase in the IMpower133 exploratory analysis [22]. In our study, liver metastases also decreased survival, regardless of whether TRT was delivered, compared with no liver metastases (mOS: 15.0 vs. 24.4 months, p = 0.004), and were a negative independent factor for OS (OR 2.58, p = 0.01).
We noticed that patients with brain metastases showed a trend toward benefit from TRT. Unlike liver metastases, which received no local RT, all patients with brain metastases were treated with cranial radiation, concomitantly with immunotherapy in 94.1% (32/34) of cases. The trend toward better outcomes with TRT in patients with brain metastases may be attributed to the treatment of local lesions; moreover, a blood-brain barrier disrupted by cranial radiation may further promote the penetration of immune agents. Previous studies demonstrated that a radioimmunotherapy-induced abscopal effect is rare [23], while multisite radiation with high and low doses to selected lesions might be a curative strategy for systemic disease control [24]. In particular, tumor immunogenicity (hot or cold) and the sites and number of metastases should be considered cautiously in the context of the numerous unsolved mysteries of radioimmunotherapy; moreover, a one-size-fits-all regimen cannot solve personalized problems [25]. Radiation acts as an immunomodulatory drug that reverses tumor immune desertification by mobilizing both adaptive and innate immunity [26]. Therefore, TRT alone may be insufficient for systemic disease control, particularly in ES-SCLC. Liver radiation plus immunotherapy promoted systemic antitumor immunity and was associated with prolonged survival in practice [24,[27][28][29]. Therefore, local treatment of extrathoracic residual disease integrated with TRT may be a potentially crucial approach to improve the survival of ES-SCLC.
Several factors beyond those mentioned above affect the survival benefit of TRT in ES-SCLC. The patterns of radiation, such as dose, hyper- or hypo-fractionation, and frequency, may affect local control and survival [30][31][32]. In addition, the optimal timing of TRT is even less clear: whether TRT should be administered after 4-6 cycles of chemotherapy, as in this study, at thoracic disease progression, or at another time remains unresolved. Furthermore, second-line treatment is very important in ES-SCLC, but optimal agents after first-line failure are lacking. In the present study, second-line chemotherapy significantly prolonged OS compared with a chemotherapy-free strategy in 61 patients with disease progression (mOS: 24.5 vs. 21.4 months, p = 0.026). However, second-line therapy based on immunotherapy or antiangiogenesis did not significantly affect survival. It should be noted that this conclusion cannot be generalized to a larger population, because it was drawn from a small subset.
Based on concerns about the increased toxicity of radiotherapy combined with ICIs, ICIs were commonly suspended during TRT in clinical practice. However, the toxicity of concurrent TRT and pembrolizumab in limited-stage SCLC was acceptable [33]. Moreover, TRT given concurrently with pembrolizumab in ES-SCLC was also well tolerated, with only 6% of patients experiencing grade 3 adverse events and no grade 4-5 toxicities observed [17]. Another retrospective study showed that only 15% of patients developed pneumonitis (three each of grade 2 and grade 3) with concurrent atezolizumab and TRT [34]. In our study, of 37 patients who received TRT and ICIs simultaneously, only 10.6% experienced grade 3 RIP, while none experienced more serious adverse events.
This study has its own merits. First, it thoroughly analyzed the impact of TRT versus non-TRT and of second-line treatment on survival in ES-SCLC patients receiving chemo-immunotherapy, as well as on local control and toxicity, verified the superiority of TRT for local control, and confirmed the feasibility of combining radiotherapy and immunotherapy. Second, this study creatively proposed that the management of distant lesions by local therapy might be a potentially curative approach for ES-SCLC; the attitude toward metastatic sites should be more aggressive in selected subsettings. However, the small sample size of this retrospective study from a single cancer center increases the inherent flaw of selection bias, which further contributes to insufficient statistical power in certain
|
2023-05-05T14:16:22.812Z
|
2023-05-04T00:00:00.000
|
{
"year": 2023,
"sha1": "e0bf1752f43ee07651b1f929b71cc41c4033d517",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "e0bf1752f43ee07651b1f929b71cc41c4033d517",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
108836131
|
pes2o/s2orc
|
v3-fos-license
|
Exergy analysis of a solar combined cycle: organic Rankine cycle and absorption cooling system
In this paper, exergy analysis is used to evaluate the performance of a combined cycle: organic Rankine cycle (ORC) and absorption cooling system (ACS) using LiBr–H2O, powered by a solar field with linear concentrators. The goal of this work is to design the cogeneration system able to supply electricity and ambient cooling of an academic building and to find solutions to improve the performance of the global system. Solar ACS is combined with the ORC system—its coefficient of performance depends on the inlet temperature of the generator which is imposed by the outlet of the ORC. Exergetic efficiency and exergy destruction ratio are calculated for the whole system according to the second law of thermodynamics. Exergy analysis of each sub-system leads to the choice of the optimum physical parameters for minimum local exergy destruction ratios. In this way, a different connection of the heat exchangers is proposed in order to assure a maximum heat recovery.
List of symbols
A	Heat exchange area (m²)
a₁, a₂	Thermal loss coefficients (W m⁻² K⁻¹)
Introduction
The use of renewable energy to power energy systems reduces pollution and, at the same time, the operating cost of the system. One of the current concerns of the modern world is to ensure comfort in office buildings.
In this paper, a system to supply electricity and ambient cooling simultaneously to an academic building is proposed. The main motivation to study and install this kind of system is to reduce the annual electricity bill and to provide ambient cooling, since at present cooling is ensured only in part of the building by several individual electrical units.
Usually, these two services are provided by the local grid and an air-conditioning system powered by electricity. In this work, a combined system, an organic Rankine cycle (ORC) and an absorption cooling system (ACS) using LiBr/H2O, is studied. Both sub-systems are connected to the same water flow, heated to 140 °C by a solar field installed on the rooftop of the building. The solar field area is about 300 m², imposed by the geometrical dimensions of the building. The goal of this study is to estimate the mechanical (electrical) power that an ORC can supply and to determine whether the cooling of the building can be ensured. The refrigerating power of the ACS is estimated at about 45.6 kW, determined by a dynamic analysis of the thermal behavior of the building. Solar energy can fuel different cooling technologies: the Stirling machine [1], and ejection and absorption machines [2][3][4][5][6]. Using thermal energy to power the absorption system makes it possible to consider the sun as the fuel for this system and to ensure cooling in summer, when this source is abundantly available [7].
An ORC system uses an organic fluid as the working fluid; in the vapor state, this fluid drives the turbine to generate electricity [8]. It is a promising system for converting low-temperature heat into mechanical power, well adapted to solar applications. The particularity of organic fluids is that they can be used at evaporation temperatures much lower than a conventional steam turbine requires, while still providing high efficiency [9][10][11]. Commonly used organic fluids include hydrofluorocarbon (HFC) refrigerants, ammonia, butane, isopentane, and toluene, which generally have high molecular masses.
Several fluids were compared in previous works, and it was found that R245fa provides interesting performance for a solar ORC system [12,13]. For this fluid, the mechanical power is the highest for the same solar conditions and ambient temperature, which implies the highest thermal efficiency (for the same input energy, or fuel).
The exit of the heat exchanger at the hot side of the ORC system is connected to the inlet of the generator of the ACS (Fig. 1). The choice of the binary solution (LiBr/H2O) for this second system is based on the following properties:

- high latent heat of evaporation
- no rectifying column or dephlegmator required
- not toxic, not inflammable, not explosive
- low pressure, implying low wall thickness.

In recent years, many experimental and numerical studies have been conducted to determine the performance of absorption cooling systems. Several scientists have studied the performance of the global system and its components at several operating points [14][15][16]. A simulation of a single-effect H2O/LiBr absorption system for cooling and heating applications was performed by Sencan et al. [17]. Their results show that the coefficient of performance of the system increases as the heat source temperature increases, while the exergetic efficiency of the system decreases. System behavior under condenser/absorber temperature variation was studied previously by Grosu et al. [18], and the exergy analysis of the absorber pointed out the particular behavior of this component. Several tests have shown that the use of a solution heat exchanger can increase the COP by up to 60% and that the solution circulation ratio has a strong effect on system performance [6,18,19]. A performance analysis developed with EES has shown that the maximum exergy destruction occurs in the generator and the absorber [18,20].
Different researchers have applied exergy analysis to refrigeration systems, such as Bejan [21,22], Benelmir et al. [23], Dobrovicescu [24] and Grosu [25]. In this paper, each sub-system and the whole combined system are analyzed using the first and second laws of thermodynamics. A numerical model was developed to analyze the impact of several physical parameters on the performance of the system. The numerical simulation, carried out with EES, uses an exergy approach to study the two systems and the behavior of each component as a function of the solar field temperature. The components with high exergy losses and an important impact on the performance of the whole system are highlighted.
System description
The combined system is composed of three sub-systems: the main flow (water flow through the solar collectors), the ORC and the ACS (Fig. 1).
The organic fluid, pressurized by the pump, is heated, evaporated and superheated in the heat exchanger in contact with the water flow (main flow) from the solar field. The superheated vapor obtained at the exit of the series of heat exchangers drives the ORC turbine, and the mechanical power is then converted into electrical power. The vapor at the exit of the turbine is directed to the condenser, where it is cooled by a cooling water flow.
The solar flow is then used to heat the LiBr/H2O solution in the generator of the ACS. The water of this solution is thus evaporated; it constitutes the refrigerant in the condenser, the throttling valve and the evaporator of the ACS. The cooling effect occurs in the evaporator, where chilled water is supplied at 7 °C to cool the atmosphere of the building.
Mathematical model
Organic Rankine cycle
Energy analysis
The starting point of this study is the geometrical and thermal analysis of the building. It is a 10-year-old building with an available rooftop area of about 300 m2. The wall composition, linear and surface losses, fresh air input and the occupancy scenario of the building's rooms were studied to calculate the total air-conditioning demand, which is about 45.6 kW. The analysis of the solar collector characteristics and the area constraint (300 m2) imposed by the geometrical dimensions of the building lead to the initial parameters of the model shown in Table 1. The mass flow rate of about 0.5 kg s-1 implies a total temperature variation of the main flow of about 60 °C, with a variation of about 30 °C across each sub-system (ORC and ACS). All the components of the system are studied first from an energetic point of view, then from an exergetic point of view.
Solar collectors
The efficiency of the solar collectors depends on the optical efficiency (η0) and the thermal loss coefficients (a1, a2), which are given by the manufacturer's data:

η_col = η0 − a1 (t_m − t_a)/G − a2 (t_m − t_a)² / G

where t_m is the mean collector temperature, t_a the ambient temperature and G the solar irradiance. The energy balances of the ORC components, with states 1-4 denoting the pump inlet, pump outlet, turbine inlet and turbine outlet, are:

Heat exchanger: Q_HE_ORC = m_ORC (h3 − h2)
Turbine: W_T_ORC = m_ORC (h3 − h4)
Condenser: Q_Cd_ORC = m_ORC (h4 − h1)
Pump: W_P_ORC = m_ORC (h2 − h1)

Thus, the thermal efficiency of the ORC is:

η_th = (W_T_ORC − W_P_ORC) / Q_HE_ORC

Some results of the simulation, corresponding to the initial parameters of Table 1, are shown in Table 2. The mechanical power, thermal efficiency and solar field efficiency are thus determined.
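As a numeric illustration of the collector-efficiency formula and the ORC energy balances, the following sketch runs through the calculation. All numeric values (collector coefficients, irradiance, mass flow rate, R245fa enthalpies) are illustrative placeholders, not the manufacturer or Table 1 data:

```python
# Illustrative run of the collector efficiency and ORC energy balances.
# All numeric values below are assumed for the sketch.

G = 800.0                          # solar irradiance, W/m^2 (assumed)
eta_0, a1, a2 = 0.75, 1.5, 0.005   # optical efficiency and loss coefficients (assumed)
t_m, t_a = 130.0, 25.0             # mean collector / ambient temperature, degC

dT = t_m - t_a
eta_col = eta_0 - a1 * dT / G - a2 * dT**2 / G   # collector efficiency

# ORC energy balances with illustrative R245fa enthalpies (kJ/kg, assumed);
# states 1-4: pump inlet, pump outlet, turbine inlet, turbine outlet.
m_orc = 0.30                                 # organic fluid mass flow, kg/s (assumed)
h1, h2, h3, h4 = 233.0, 234.5, 480.0, 462.0

Q_he = m_orc * (h3 - h2)     # heat received from the solar loop, kW
W_t  = m_orc * (h3 - h4)     # turbine power, kW
W_p  = m_orc * (h2 - h1)     # pump power, kW
Q_cd = m_orc * (h4 - h1)     # heat rejected at the condenser, kW

eta_th = (W_t - W_p) / Q_he  # ORC thermal efficiency
```

With these placeholder values the net power comes out around 5 kW, the order of magnitude reported in the paper, and the energy balance Q_he = (W_t − W_p) + Q_cd closes by construction.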
Exergy analysis
Since exergy is the maximum quantity of work received or provided by a system in reaching thermodynamic equilibrium with its environment through a sequence of reversible processes, exergy analysis assesses the irreversibility of each component of the system. The exergy of each sub-system is a measure of its distance from equilibrium. Thus, an exergy balance assesses the local destruction of exergy, which is the adequate indicator to define the performance of a component or a system [21-25].
The exergy destructions in the heat exchanger (I_HE_ORC), the turbine (I_T_ORC), the condenser (I_Cd_ORC) and the pump (I_P_ORC) are estimated by local exergy balances. The specific exergy at each state point of the cycle was obtained using the Thermoptim software.
Heat exchanger: The heat exchanger's irreversibility may be determined using an entropy balance or an exergy balance, as shown in Fig. 2.
Entropy balance: The entropy flow received by the organic fluid is:

S_ORC = m_ORC (s3 − s2)

with 2 and 3 denoting the inlet and outlet of the organic fluid in the heat exchanger. The entropy flow supplied by the hot water from the solar collectors is:

S_H = Q_HE_ORC / T_mH

where T_mH is the mean logarithmic temperature of the main flow. The entropy generation due to the temperature pinch in this heat exchanger is the difference of the two flows above:

S_gen = S_ORC − S_H

According to the Gouy-Stodola theorem, the irreversibility associated with the heat transfer in this component is:

I_HE_ORC = T0 S_gen

The same equation can be obtained by an exergy approach, the heat exchanger irreversibility representing the exergy destroyed during the heat transfer. A fuel, a dissipation and a product in terms of exergy are associated with the heat exchanger.
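The entropy-balance route can be sketched numerically as follows; the heat flow, the water temperatures and the mean temperature of the organic fluid are illustrative assumptions, not the paper's operating point:

```python
import math

# Gouy-Stodola sketch for the solar heat exchanger (illustrative values).
T0 = 298.15                  # dead-state temperature, K
Q  = 73.0                    # heat flow rate transferred, kW (assumed)

T_H, T_M = 413.15, 383.15    # solar water inlet/outlet, K (140 degC -> 110 degC)
T_hot = (T_H - T_M) / math.log(T_H / T_M)   # mean log temperature, hot side
T_cold = 370.0               # mean thermodynamic temperature of the R245fa, K (assumed)

S_received = Q / T_cold          # entropy flow received by the organic fluid, kW/K
S_supplied = Q / T_hot           # entropy flow given up by the solar water, kW/K
S_gen = S_received - S_supplied  # entropy generation due to the temperature pinch

I_he = T0 * S_gen                # Gouy-Stodola irreversibility, kW
```

S_gen is positive because the receiving side is colder than the supplying side; multiplying by T0 converts the entropy generation into destroyed exergy.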
Exergy balance: The fuel of the heat exchanger in terms of exergy is the exergy flow available on the main flow:

Ex_F,HE_ORC = Q_HE_ORC (1 − T0/T_mH)

where T_mH is the mean logarithmic temperature of the main flow (solar water):

T_mH = (T_H − T_M)/ln(T_H/T_M)

The product of this component is the exergy flow supplied to the organic fluid:

Ex_P,HE_ORC = m_ORC (ex3 − ex2)

The difference between fuel and product is dissipated, due to the temperature pinch between the two fluids:

I_HE_ORC = Ex_F,HE_ORC − Ex_P,HE_ORC

The exergy efficiency of the heat exchanger is the ratio of the product to the fuel:

η_ex,HE = Ex_P,HE_ORC / Ex_F,HE_ORC

A reduced irreversibility may be defined in order to express the impact of the local irreversibility on the system fuel:

Ir_HE_ORC = I_HE_ORC / Ex_F,HE_ORC

Condenser: Following the same method as previously, entropy and exergy flows are defined to assess the exergy efficiency of the condenser and the impact of its irreversibility on the fuel of the whole system (Fig. 3).
Entropy and exergy balances give the irreversibility of the condenser:

I_Cd_ORC = Ex_F,Cd_ORC − Ex_P,Cd_ORC

with

Ex_F,Cd_ORC = m_ORC (ex4 − ex1), Ex_P,Cd_ORC = Q_Cd_ORC (1 − T0/T_mC)

where T_mC is the mean logarithmic temperature of the cooling water and states 1 and 4 denote the condenser outlet and inlet. The exergetic efficiency of the condenser is the ratio:

η_ex,Cd = Ex_P,Cd_ORC / Ex_F,Cd_ORC

and the reduced irreversibility of the condenser is:

Ir_Cd_ORC = I_Cd_ORC / Ex_F,HE_ORC

Turbine: The exergetic fuel of the turbine is the variation of the specific exergy of the organic fluid multiplied by its mass flow rate (Fig. 4):

Ex_F,T_ORC = m_ORC (ex3 − ex4)

Its product is the mechanical power to be converted into electricity:

Ex_P,T_ORC = W_T_ORC

The local irreversibility is:

I_T_ORC = Ex_F,T_ORC − W_T_ORC

The exergetic efficiency and the reduced irreversibility follow as for the heat exchanger.

Pump: The pump receives the mechanical power W_P_ORC, which represents the fuel of this component, in other words the exergy consumed by the pump to bring the working fluid to high pressure (Fig. 5). Its role is to deliver a pressurized fluid at its exit, so its product is the exergy variation of the R245fa:

Ex_P,P_ORC = m_ORC (ex2 − ex1)

The pump irreversibility is the difference between fuel and product:

I_P_ORC = W_P_ORC − Ex_P,P_ORC

As for the components above, the exergy efficiency and the reduced irreversibility are defined in the same way.

For the whole system, the exergy efficiency takes as fuel the fuel of the heat exchanger, and as product the effective mechanical power (the mechanical power supplied by the turbine minus the mechanical power consumed by the pump):

η_ex,ORC = (W_T_ORC − W_P_ORC) / Ex_F,HE_ORC

Absorption cooling system
Energy analysis
The initial parameters for the ACS simulation, computed with the Engineering Equation Solver (EES), are shown in Table 3.
The chilled water mass flow rate in the evaporator is calculated from the cooling capacity, determined by the thermal analysis of the building.
The refrigerant temperatures in the heat exchangers of the ACS are calculated using the usual temperature pinches. The pressure in the system is imposed by the evaporation and condensation temperatures.
The mathematical model is built using mass, energy and exergy balances.

Evaporator: Q_E = m_ref (h_out,E − h_in,E); thus, the refrigerant (water) mass flow rate is:

m_ref = Q_E / (h_out,E − h_in,E)

Condenser: Q_C = m_ref (h_in,C − h_out,C)

Generator: Q_G = m_ref h_v + (f − 1) m_ref h_ss − f m_ref h_ws

Absorber: Q_A = m_ref h_out,E + (f − 1) m_ref h_ss − f m_ref h_ws

where f is the ratio between the weak-solution mass flow rate and the refrigerant mass flow rate, h_v is the enthalpy of the vapor leaving the generator, and h_ws and h_ss are the enthalpies of the weak and strong solutions at the inlet of the component considered.
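A minimal numeric sketch of these balances follows; the water enthalpies and LiBr mass fractions are assumed for illustration, and only the 45.6 kW cooling capacity comes from the text:

```python
# LiBr/H2O absorption-cycle mass balances (illustrative values except Q_evap).

Q_evap = 45.6                            # cooling capacity from the building audit, kW
h_in_E, h_out_E = 176.0, 2514.0          # water enthalpies around the evaporator, kJ/kg (assumed)

m_ref = Q_evap / (h_out_E - h_in_E)      # refrigerant (water) mass flow rate, kg/s

# Circulation ratio from a LiBr mass balance on the generator:
# weak solution in, strong solution and refrigerant vapor out.
x_weak, x_strong = 0.55, 0.60            # LiBr mass fractions (assumed)
f = x_strong / (x_strong - x_weak)       # weak-solution flow per unit refrigerant flow
m_weak = f * m_ref                       # weak-solution mass flow rate, kg/s
```

The LiBr balance m_weak·x_weak = (m_weak − m_ref)·x_strong gives f = x_strong/(x_strong − x_weak), so a small concentration spread implies a large circulation ratio.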
This energy analysis leads to the generator heat flow rate, which is in this case about 70.4 kW. Thus, the coefficient of performance of the ACS can be calculated as:

COP = Q_E / Q_G

Exergy analysis

The exergy analysis of the ACS is developed as for the ORC, in order to highlight the local exergy destructions. Each component of the system is studied separately, by associating a fuel (exergy available at the inlet of the component), a product (exergy produced by the component) and an exergy dissipation. The global model was studied previously [18]. The reference state is defined at t0 = 25 °C and p0 = 101.325 kPa. The exergy efficiency of the generator is the ratio:

η_ex,G = Ex_P,G / Ex_F,G

and its local and reduced irreversibilities are expressed as for the ORC components. After analyzing all the components as shown for the generator, the exergy efficiency of the ACS is calculated as the ratio of the exergy of the cooling effect produced in the evaporator to the exergy fuel supplied to the generator. The energy and exergy performances of the combined cycle are assessed from the same fuel-product balances.
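The COP figure can be checked directly from the two heat flows quoted above; the reversible-COP comparison uses assumed source and chilled-water temperatures to put the value in perspective:

```python
# COP check with the heat flows reported in the text.
Q_evap, Q_gen = 45.6, 70.4     # kW, from the energy analysis above
COP = Q_evap / Q_gen           # single-effect LiBr/H2O chillers are typically ~0.6-0.8

# Upper (reversible) bound for comparison; temperatures are assumed.
T0, T_gen, T_evap = 298.15, 368.15, 280.15   # K
COP_rev = (1 - T0 / T_gen) * T_evap / (T0 - T_evap)
```

The ratio COP/COP_rev gives a rough exergetic quality of the chiller, well below one, consistent with the large destructions found in the generator and absorber.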
Results
The mathematical model described in the previous section was used to simulate the ORC behavior as the temperature at the exit of the solar collectors (t_H) varies from 115 to 140 °C. The different exergy flows (fuel, product and destruction or irreversibility) are shown in Figs. 6 and 7 for two solar collector exit temperatures.
The two diagrams above lead easily and rapidly to the equation of the exergy balance. In this way, the discussion of the results is clear and leads to the choice of the optimum parameters. An increase in the hot source temperature implies an increase of the exergy fuel Ex_F,HE_ORC (exergy availability) from 13.36 to 16.6 kW. This corresponds to the maximum mechanical power that could be supplied by the system if all its components were perfect. It follows that the effective mechanical power also increases (from 4.67 to 5.24 kW).
On the other hand, the irreversibilities I_HE_ORC and I_Cd_ORC increase, and consequently the exergetic efficiency decreases. The same remarks can be made for Fig. 11: for the same heat flow rate on the heat exchanger (the same temperature variation of the solar water and the same mass flow rate), both the mechanical power and the thermal efficiency increase. On the contrary, the exergetic efficiency decreases, because of the increase of the exergy fuel (higher mean logarithmic temperature).
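The opposite trends of power and exergetic efficiency can be reproduced with the fuel expression Ex_F = Q(1 − T0/T_mH); the heat flow and the 30 K drop of the main flow are assumptions, while the two net powers are the values quoted above:

```python
import math

# Why exergetic efficiency falls while net power rises with the solar exit temperature.
T0, Q = 298.15, 73.0                 # dead state (K) and exchanger heat flow (kW, assumed)

def ex_fuel(t_h_C, dT=30.0):
    """Exergy fuel for a solar exit temperature t_h_C (degC) and a fixed 30 K drop."""
    T_H = t_h_C + 273.15
    T_M = T_H - dT
    T_mH = dT / math.log(T_H / T_M)  # mean log temperature of the main flow
    return Q * (1.0 - T0 / T_mH)

W_net_low, W_net_high = 4.67, 5.24   # net powers quoted in the text, kW
fuel_low, fuel_high = ex_fuel(115.0), ex_fuel(140.0)
eps_low, eps_high = W_net_low / fuel_low, W_net_high / fuel_high
```

The fuel grows faster than the net power, so eps_high < eps_low: the same mechanism behind the efficiency decrease discussed in the text.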
Exergy destruction in the ORC pump is neglected, being much lower than that of the other components. Thus, Table 4 and Figs. 8 and 9 show the results only for the components with high irreversibility. Figures 8 and 10 show the variation, always in opposition, of the local irreversibility and the exergy efficiency of each component versus the solar collector exit temperature.
The reduced irreversibility, Ir, is an important indicator in the performance analysis of the components and of the whole system. It highlights the irreversible character of each component as a function of the exergy availability at the beginning of the process, indicating the impact of each local irreversibility on the exergy resource (fuel) of the system.
It is interesting to note that the reduced irreversibility of the condenser, Ir_Cd_ORC, decreases slightly (Fig. 9) while its exergetic efficiency decreases (and its local irreversibility increases, Table 4). The product of this component is constant (constant heat flow rate exchanged with the cooling water), while its fuel (exergy resource) increases; thus, the local irreversibility increases. At the same time, the exergy flow received by the hot heat exchanger of the ORC increases, which implies a slight decrease of the reduced irreversibility. The ACS simulation results are shown in Table 5 for a solar collector exit temperature of about 140 °C. The performance of the whole system can thus be assessed for this temperature and for a solar water mass flow rate of about 0.5 kg s-1 through the ORC heat exchanger and the ACS generator. Under these conditions, a solar collector area of 300 m2 provides simultaneously a mechanical power of about 5.24 kW and a cooling capacity of about 45.6 kW, with a thermal efficiency of around 38 % and a global exergetic efficiency of around 26 %.
Conclusion
The goal of this work is the study of a low-temperature solar combined system providing simultaneously electricity, via an ORC using R245fa as working fluid, and the ambient cooling of an academic building, via a LiBr/H2O ACS. The main constraint of the model is the total available area on the roof of the building. The combined system was studied from energetic and exergetic points of view.
The exergetic analysis allows a judicious choice of the parameters and an optimum system design. The use of a solution heat exchanger in the ACS increases its exergy efficiency. The ORC components can be listed from the highest to the lowest exergetic efficiency as: pump, turbine, hot heat exchanger and condenser. Thus, a way to improve the performance of this system is to add a recovery heat exchanger at the inlet of the condenser, because of its high exergy dissipation. The heat recovered could be used to preheat the working fluid before it is directed to the solar heat exchanger.
The decrease of the ORC exergetic efficiency is mainly due to the increase of the exergy resource (fuel) and of the solar heat exchanger irreversibility with the solar water temperature. The most penalizing components are the high-temperature heat exchangers: the ORC solar heat exchanger, the ORC condenser, the ACS generator and the ACS absorber.
Thermal storage was not studied in this paper, but it is essential to ensure a constant temperature (140 °C) at the inlet of the ORC solar heat exchanger, the value for which the system is able to meet the overall cooling demand of the building (45.6 kW).
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Asthma and allergy crisis in Puerto Rico: a must do proposal
Allergy is the immune system's response when a person comes in contact with an environmental allergen (pollen, certain plants, animals, etc.) that may be benign to many others. The Commonwealth of Puerto Rico (PR) is a Territory of the United States of America (US) where chronic diseases such as asthma and allergy are much more prevalent than in other US states (1).
Allergy is the immune system's response when a person comes in contact with an environmental allergen (pollen, certain plants, animals, etc.) that may be benign to many others. The Commonwealth of Puerto Rico (PR) is a Territory of the United States of America (US) where chronic diseases such as asthma and allergy are much more prevalent than in other US states (1). In fact, this is not exclusive to the adult population, as children living in PR have been disproportionately affected with asthma compared with children living in the US (2). To add fuel to the fire, Puerto Ricans have less access to healthcare providers due to a medical workforce labor shortage which has been hampering the Island Territory for decades (3). Since Hurricane Maria devastated the Island in 2017, more doctors have fled PR to work in the Continental US (4). For instance, one major gap left behind is the specialist-to-patient ratio for allergy sufferers. The Puerto Rican Physician Workforce Data Book reports only 12 allergy specialists, 5 of whom are Board Certified with the American Academy of Allergy, Asthma and Immunology, in the Commonwealth of PR to manage the approximately 3.4 million population (5), a dramatic specialist-to-population ratio of about 1:310,000. Anecdotal feedback from physicians currently practicing in PR suggests the actual direct patient-to-specialist ratio may be far worse, given the population shift from commercial payors to State-based payors as a result of large employers relocating after the recent Hurricane. Without employment, families and children do not have access to the commercial insurance plans that employers provide and therefore rely on the Medicaid 'capitated' Island Government-sponsored health plan. Currently, this health plan does not thoroughly cover all specialty services. It is therefore assumed by practicing physicians that fewer than the reported 12 practicing allergists are actually treating the allergy and asthma patients in PR.
This obvious health disparity only places more pressure and further burden on primary care physicians and general pediatricians to care for this population.
It is a well-reported fact that allergy- and asthma-exacerbating factors vary with the geographical, climatic, and socioeconomic situation. Many researchers are studying the underlying causes, including genetics, vitamin D deficiency, increased allergen levels in homes (especially after the recent hurricane), poor dietary choices, air pollution, chronic stress, etc. Atopy considerably mediated the asthma association in this population: allergic rhinitis accounted for 22% to 53% of the association with asthma, and sensitization to cockroach mediated 13% to 20% of the association with abnormal forced vital capacity and 29% to 42% of the association with emergency department visits for asthma (6). Skin, being the largest organ in terms of surface area and the first protection against pathogens, also provides visible indications of various inflammatory processes, including allergy. Allergy skin testing is a quick and easily administered procedure that can be used in almost any outpatient setting. Results may be further confirmed by more specific blood testing, and in most cases desensitization treatment is the remedy.
While not giving up on Rose's epidemiological dictum (7) that "small changes in large populations are likely to be more effective than large changes in small numbers", it is evident that new methods for managing allergy in PR are called for. It is understood that it will take many years to recruit or train allergy specialists and immunologists to address the allergy-suffering population. Studies with similar allergy population constraints have shown that up to 50 percent of allergy specialist referrals could be dealt with by primary care providers (PCPs) with an interest and additional training in allergy management, allowing specialty services to focus on more complex cases and thus reducing patients' wait for first-time appointments (8). With more specific training, PCPs would be positioned to provide diagnosis and management of mild to moderate allergic conditions, while referring complex and severe allergy and asthma sufferers to specialists. This referral process will enable specialists to focus on the more complex allergy and asthma problems while addressing this important health care disparity. The Puerto Rican local government, hospital administrators, universities, public health workers, primary care physicians and pediatricians are collectively organizing a new professional society, to be established in the City of Mayaguez: the Puerto Rican Allergy and Immunotherapy Society (PRAIS), with a charter to improve the knowledge and practical skills of primary care and non-allergist medical professionals in allergy diagnosis and allergen immunotherapy. The new society will also aim to foster research in the areas of community, primary and population health, and clinical and translational science, as a model that could be replicated in other countries that unfortunately have asthma and allergy sufferers without the ideal medically trained specialist community.
The aims and commitments of this society also dovetail with broader economic and social agenda items to reduce healthcare costs and improve quality of lives throughout PR.
The Society will 'strongly' advocate for appropriate physician reimbursement, cost reduction, and improved financial support for allergic disease and asthma management of Puerto Ricans, with a strict policy that it not be a self-serving member objective. This could be achieved through evidence-based business case analysis, collective input and pilot studies throughout the Puerto Rican population.
One example of the potential economic advantages is the reported health care savings among Florida Medicaid patients who were newly diagnosed with allergic rhinitis and treated with allergen immunotherapy: they incurred 38% lower healthcare costs after 18 months of follow-up compared to non-treated control patients. Savings in this study were observed as early as the three-month stage of treatment. Allergic children treated with allergen immunotherapy incurred 42% less health care cost compared to untreated children: $5,253 versus $9,118, resulting in savings of $3,865 per child over the studied period for the Florida State Government (9).
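The quoted Florida figures are internally consistent, as a quick arithmetic check shows:

```python
# Arithmetic check of the Florida Medicaid figures quoted in the text.
cost_treated, cost_untreated = 5253, 9118    # USD per child over the study period
savings = cost_untreated - cost_treated      # dollars saved per treated child
reduction = savings / cost_untreated         # fractional cost reduction
```

savings comes out to $3,865 and reduction to about 0.42, matching the reported 42%.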
PRAIS proposes to have its first annual meeting in January 2020 and will invite scientists, medical practitioners, residents and students from the Continental US and around the world to visit the Island Territory and participate in scientific and medical exchange, collaboration and continuing medical education.
Conflicts of Interest:
The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
KAYSER-FLEISCHER RING EVALUATION IN WILSON ’ S DISEASE IN A TERTIARY EYE CARE CENTRE OF NEPAL
Wilson's disease is a hereditary disorder of copper metabolism which is characterized by neuropsychiatric and hepatic manifestations as well as the appearance of the Kayser-Fleischer ring. This is a retrospective review of the records of patients with Wilson's disease who attended the Neuro-ophthalmic clinic for identification of the Kayser-Fleischer (K-F) ring from January 2010 to June 2012. Detailed eye examination included visual acuity assessment, slit lamp biomicroscopy and intra-ocular pressure measurement. Data regarding clinical features, laboratory investigations and the status of the K-F ring were recorded. Seven cases of Wilson's disease, with an age range of 9-15 years, were included in the study. Among them, four (57.1%) had neuropsychiatric symptoms, two (28.5%) had hepatic disease and one (14.3%) was asymptomatic, diagnosed by positive family history and laboratory tests. Among the four subjects with a K-F ring, three (75%) had neuropsychiatric symptoms and one had hepato-biliary disease. Besides K-F rings, other ophthalmic findings were sunflower cataract (14.3%) and vernal keratoconjunctivitis (14.3%). The identification of the K-F ring is a simple and cost-effective screening test for the diagnosis of Wilson's disease. The K-F ring is present in the majority of patients with neurological manifestations.
INTRODUCTION
Wilson's disease (WD) is an autosomal recessive disorder of copper metabolism that results in the deposition of copper in a variety of tissues throughout the body [1-3]. The K-F ring, the ocular hallmark of WD, is characterized by deposition of copper on Descemet's membrane at the periphery of the cornea [1,2,4]. In this study, we describe the clinical presentations, laboratory findings and identification of the K-F ring among subjects with WD.
METHODS
This is a retrospective review of the medical records of patients with WD who attended the Neuro-ophthalmic clinic of the B. P. Koirala Lions Centre for Ophthalmic Studies (BPKLCOS) for evaluation of the Kayser-Fleischer ring from January 2010 to June 2012. After taking a clinical and a family history, detailed neurological and systemic examinations were carried out in all the study subjects. Anterior and posterior segment examination of the eye was performed using slit lamp biomicroscopy. Detailed examination of the cornea for the identification of the Kayser-Fleischer ring was performed by slit lamp biomicroscopy and gonioscopy.
Relevant laboratory investigations included estimation of 24-hour urinary copper, serum copper levels, serum ceruloplasmin levels and liver function tests. Abdominal ultrasound was also performed in all subjects to rule out hepato-biliary manifestations. The diagnosis of Wilson's disease was based on the clinical presentation, positive results of key laboratory tests such as a low serum ceruloplasmin level and increased urinary copper excretion, and the presence of K-F rings. Photographic documentation of the cornea and anterior segment of the eyes was performed after taking informed consent from the subjects. The patients were then prescribed medications in the neurology clinic of Tribhuvan University Teaching Hospital (TUTH) and asked to return for follow-up examinations in the Neuro-ophthalmology clinic at BPKLCOS.
RESULTS
The patient's particulars, presenting clinical features, laboratory findings and the result of K-F ring evaluation is presented in table 1.Seven cases of WD were diagnosed and had received treatment during the study period, were included in this study.All the patients belonged to the paediatric age group.Male and female ratio was 4:3.On presentation to the hospital, four subjects (57.1%) had neurological features, two subjects (28.5%) had features of hepatic disease and a subject had no symptoms but positive family history and decreased serum ceruloplasmin levels on laboratory investigations.), three of them (75%) had neurological disorders and one of them (25%) had hepatic involvement as well as vernal Keratoconjunctivitis (Figure 3).One subject with WD having K-F ring was also noted to have sunflower cataract (figure 4).There was a complete resolution of K-F ring after treatment with D-Penicillamine in two subjects (figure 5) after one year of the therapy.However, K-F ring was noted to be decreased in severity and width without complete resolution following the treatment in other two cases.Laboratory investigations of 24 hour urine for copper estimation were elevated in all subjects.A high level of clinical suspicion is necessary along with the laboratory investigation for proper diagnosis.However, a single test may not establish a diagnosis of WD.There must be a combination of clinical features, laboratory findings, and the results of mutation analysis to make proper diagnosis. 1A cost effective and simple way of screening a patient of suspected WD is to perform a detailed ophthalmic evaluation for the presence of K-F ring.K-F ring was first described by the German ophthalmologists Bernhard Kayser (1902) and Bruno Fleischer (1903) independently in a patient with multiple sclerosis. 
5Later in 1912, Fleischer recognized it as a part of Wilson's disease.Later, Harry et al (1970) described the electron microscopic appearance of the K-F ring as electron-dense deposits of copper of varying sizes lying mainly in the Descemet's membrane.
The site of earliest pigment deposition is an arc in the superior periphery of the cornea from the 10-to 2-O'clock meridian.The arc spreads slowly toward the horizontal plane and gradually broadens.Later on the progression of the ring formation, a band appears inferiorly as a crescent stretching from the 5-to 7-o'clock positions then these two arcs meet with each other. 6With treatment, the sequence of events is reversed, the copper gets reabsorbed, and a pitted or beaten silver pattern may become apparent at the previous site of the ring.This is an indication that treatment has produced a negative copper balance. 7In most patients, the neurologic and hepatic lesions remit.However, K-F ring regression does not correlate with neuropsychiatric improvement. 8 0][11] The presence of K-F ring and low serum ceruloplasmin levels are considered sufficient to establish the diagnosis of Wilson's disease. 4,12However K-F rings are rarely seen in pediatric population. 1 In the Indian subcontinent, the early age of onset of symptoms, prolonged persistence of K-F rings and progression of symptoms among siblings despite proper therapy is of great interest that can be correlated to the copper intake by the practice of using brass or copper utensils for cooking. 13The absence of K-F rings does not necessarily exclude the possibility of Wilson's disease.
In patients with predominantly neurological symptoms, K-F rings are absent in less than 2% of cases. In our study, among four children with neurological symptoms, three had a K-F ring. Moreover, the diagnosis of Wilson's disease can be more difficult in patients with liver disease [14]. Many times the presence of a K-F ring may be the only initial finding in a case of Wilson's disease; it also serves as a useful test to monitor the patient's compliance and the response to treatment [15]. Another important ocular finding in Wilson's disease is the sunflower type of cataract, which was present in one patient with neurological manifestations. A sunflower cataract consists of brilliantly multicolored deposits of copper in the lens. Other uncommon ophthalmic findings are strabismus, optic neuritis or pallor of the optic disc, and night blindness [16]. Abnormal ocular movements such as slow horizontal saccades, upward gaze restriction and impaired convergence have also been reported in Wilson's disease [17]. In our paediatric case series, four children (57.1%) presented with neurological symptoms, two children (28.5%) presented with hepatic features and one case (14.2%) was an asymptomatic sibling diagnosed on the basis of positive family history and laboratory findings. This result is similar to the report of Shakya (2004) [18] and different from the study of Manolaki et al. (2009) [1] among a paediatric population, in which hepatic manifestation was more common. All 7 patients in our series had high urinary copper excretion, whereas the serum ceruloplasmin level was found to be lower than 25 mg/dl in 6 patients (85.7%). These findings are similar to other previous reports [18,19]. In 50% of cases there was complete resolution of the K-F ring after treatment with D-Penicillamine during two years of follow-up. In two cases there was evidence of a decrease in the width of the K-F ring after 6 months, but complete resolution was not observed at the last follow-up. This is similar to the results observed in the Indian study, where a large number of patients showed prolonged persistence of the K-F ring [13].

CONCLUSION

The identification of the Kayser-Fleischer ring is a simple and cost-effective screening test for the diagnosis of Wilson's disease. The Kayser-Fleischer ring is present in the majority of patients with neurological manifestations. The ring may disappear after a variable time following D-Penicillamine therapy.

Table 1: Distribution of subjects based on clinical features, K-F ring findings and laboratory investigations in Wilson's disease. Abn = abnormal, N = normal, VKC = vernal keratoconjunctivitis, LFT = liver function test, CLD = chronic liver disease, Lab = laboratory.

Sharma et al., Journal of Chitwan Medical College 2014; 4(9)
Size sensors in bacteria, cell cycle control, and size control
Bacteria proliferate by repetitive cycles of cellular growth and division. Progression through the cell cycle is generally accepted to be under the control of cell size. However, the molecular basis of this regulation is still unclear. Here I will discuss which mechanisms could couple growth and division by sensing size and transmitting this information to the division machinery. Size sensors could act at different stages of the cell cycle. During septum formation, mechanisms controlling the formation of the Z ring, such as MinCD inhibition or Nucleoid Occlusion (NO), could participate in the size-dependence of the division process. In addition, or alternatively, the coupling of growth and division may occur indirectly through the control of DNA replication initiation. The relative importance of these different size-sensing mechanisms could depend on the environmental and genetic context. The recent demonstration of an incremental strategy of size control in bacteria suggests that DnaA-dependent control of replication initiation could be the major size control mechanism limiting cell size variation.
Introduction
All dividing cells have to coordinate different steps of the cell cycle, such as DNA replication, chromosome segregation, and cytokinesis. In eukaryotes, cell cycle control has been widely studied and the overall logic is well understood (Murray and Hunt, 1993). The orderly progression through the eukaryotic cycle relies on a biochemical engine composed of cyclins and cyclin-dependent kinases (CDKs). The periodic activity of cyclin-CDK complexes regulates the cell cycle transitions such as the initiation of DNA replication or the entry into mitosis (Murray and Hunt, 1993). In addition, specific control mechanisms, often called checkpoints, ensure that late events do not occur before the completion of earlier events (Hartwell and Weinert, 1989). In contrast, the logic of the bacterial cell cycle, even though intensively studied for more than half a century, remains unclear. Bacteria do not possess any cyclins or CDKs. Numerous regulatory mechanisms have been discovered and characterized in detail, such as the control of DNA replication initiation (Katayama et al., 2010) or the SOS response, which inhibits division in case of DNA damage or replication arrest (Simmons et al., 2008). However, how these different mechanisms are organized and connected to ensure an orderly progression through the cycle remains unclear (Haeusser and Levin, 2008).
In many organisms, progression through the cycle is coupled with cellular growth. In the budding yeast Saccharomyces cerevisiae, the duration of the G1 phase depends on the cell size at birth, smaller cells spending a longer time in G1 (Johnston et al., 1977; Di Talia et al., 2007; Turner et al., 2012). Likewise, in the fission yeast Schizosaccharomyces pombe, mitotic entry is delayed in smaller cells (Fantes, 1977; Sveiczer et al., 1996). In bacteria, cell cycle progression was likewise assumed to be under size control. In 1968, building on the seminal physiological studies of Schaechter et al. (1958) and Cooper and Helmstetter (1968) on Salmonella typhimurium and Escherichia coli, Donachie showed that at the population level, the average initiation mass, i.e., the ratio of cell mass to the number of replication origins, is constant regardless of the growth rate and culture medium (Donachie, 1968). This led him to propose a size control mechanism at the single cell level, in which the cell initiates DNA replication when it reaches a critical size and divides a constant time after initiation. Although widely accepted for decades, this model became controversial in recent years (Voorn et al., 1993; Wold et al., 1994; Boye et al., 1996; Bates and Kleckner, 2005; Chien et al., 2012; Hill et al., 2012). Nevertheless, definitive evidence for size control in bacteria was provided by recent studies using quantitative data on single cell growth and division in E. coli, Bacillus subtilis, and Caulobacter crescentus (Campos et al., 2014; Osella et al., 2014; Robert et al., 2014; Soifer et al., 2014; Taheri-Araghi et al., 2015).
However, no size-sensing module has been identified so far, and the molecular basis of size control remains unclear. Cells could sense their length, volume, or mass. Here I use cell size as a catch-all descriptor and describe how cell size could be sensed through different pathways involved in the control of cytokinesis, chromosome segregation, or replication initiation. Importantly, the potential size-sensing mechanisms are not mutually exclusive, and size control could be implemented redundantly at several cell cycle stages. Such redundancy has been evidenced in the fission yeast S. pombe. In addition to the major size control acting at mitotic entry (Fantes and Nurse, 1977), a second size control mechanism acts at the G1/S transition (Fantes and Nurse, 1978). This second size control is usually invisible, since the G2/M control produces cells whose size at birth already exceeds the requirement of the G1/S control. Nevertheless, it can be revealed when the primary G2/M control is perturbed, such as in the wee1 mutant (Fantes and Nurse, 1978). S. cerevisiae has also been proposed to exhibit a usually invisible size control acting at the G2/M transition (Murray and Hunt, 1993; Turner et al., 2012). Several size checkpoints could likewise exist in bacteria, where cytokinesis, chromosome segregation, and the initiation of replication might all depend on cell size. As in yeast, the most stringent of these size control mechanisms would be responsible for limiting cell size variation in bacteria. Interestingly, recent experimental results on size control in bacteria argue against the current critical size paradigm and suggest an incremental strategy: division does not occur at a critical size but rather when a constant size has been added to the size at birth (Campos et al., 2014; Soifer et al., 2014; Taheri-Araghi et al., 2015). Such a phenomenological description sheds new light on the mechanisms limiting cell size variations.
Therefore, after a description of all the potential size-sensing mechanisms involved in the control of cytokinesis, chromosome segregation and replication initiation, I will discuss which of these mechanisms could be responsible for the limitation of cell size variations, in light of this incremental principle.
The Min System: a Geometric Size-Sensor Controlling Cytokinesis?
A critical event in the bacterial division process is the polymerization of the tubulin-like protein FtsZ into an annular structure called the Z ring, which locates the division site and recruits the numerous proteins required to carry out cytokinesis (Bi and Lutkenhaus, 1991; Addinall and Holland, 2002; Margolin, 2005; Harry et al., 2006). In E. coli and B. subtilis, the positioning of the Z ring has been widely studied. It is generally assumed to rely on two inhibitory systems: Nucleoid Occlusion (NO) and the Min system (Lutkenhaus, 2007; Wu and Errington, 2011). The latter is based on the protein complex MinCD, an inhibitor of FtsZ polymerization that concentrates at the cell ends and is therefore essential to prevent polar divisions (Adler et al., 1967; De Boer et al., 1989; Lutkenhaus, 2007). In E. coli, the localization pattern of MinCD emerges from remarkable oscillations of the complex from pole to pole, which create over time a concentration gradient with a minimum around mid-cell (Hu and Lutkenhaus, 1999; Raskin and de Boer, 1999; Hale et al., 2001). In contrast, in B. subtilis MinCD does not oscillate and the gradient is static (Lutkenhaus, 2007). In both organisms, the concentration of the MinCD complex along the cell axis may vary with cell length, and MinCD has therefore been proposed to serve as a size ruler (Raskin and de Boer, 1999). Nevertheless, there is as yet no experimental evidence to confirm or invalidate this hypothesis.
The function and localization of MinCD are very similar to those of the Pom1 kinase in the fission yeast S. pombe (Padte et al., 2006; Huang et al., 2007). This kinase exhibits a concentration gradient in the cell, with maxima at the cell ends and a minimum around mid-cell. This localization allows Pom1 to prevent the assembly of the division septum at the cell ends. It was also recently suggested to serve as a size-sensing device responsible for size control at mitotic entry. The work of two different teams published in 2009 established that Pom1 is a dose-dependent inhibitor of mitosis, whose concentration at the cell middle decreases when the cells elongate (Martin and Berthelot-Grosjean, 2009; Moseley et al., 2009). Pom1 was shown to regulate the G2/M transition by inhibiting mitotic activators localized at the middle of the cell. The following model of size-dependent G2/M transition was proposed: when the cells are short, the high concentration of Pom1 at mid-cell, where it can interact with mitotic regulators, inhibits mitotic entry. When the cell grows, this inhibition progressively weakens as the concentration of Pom1 at mid-cell decreases. Pom1 was therefore proposed to be a dedicated size sensor involved in cell cycle control. This discovery appeared promising for understanding the molecular basis of size control. Nevertheless, a few years later, one of the teams involved in this discovery presented evidence that Pom1 is in fact not the size sensor but rather a modulator that affects the link between the measured size and the division probability, thus changing the size threshold at mitosis (Novák, 2013; Wood and Nurse, 2013). This conclusion was supported by the behavior of the pom1 mutant, which is shorter but exhibits a wild-type correction for cell size fluctuations, compressing or extending the duration of the cycle according to the cell's size at birth (Wood and Nurse, 2013).
Bacteria possess some geometric sensors similar to Pom1. In E. coli and B. subtilis, the potential Pom1-like geometric sensor is the MinCD complex. In C. crescentus, which has no MinCD orthologs, a gradient of the inhibitor of FtsZ polymerization MipZ emanates from the cell poles after the segregation of the replication origins (Goley et al., 2007). The mechanism that was proposed for size-sensing through Pom1 could in principle apply to its bacterial counterparts: cell elongation could lead to a decrease of MinCD/MipZ concentration in the cell middle, thus triggering the formation of the Z ring. Alternatively, these regulators could be, like Pom1, modulators of cell size at division.
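The geometric reading of a polar gradient can be sketched numerically. In this minimal model (all parameters are arbitrary assumptions, not measured values), an inhibitor such as MinCD, MipZ, or Pom1 forms two exponential gradients emanating from the poles; the concentration at mid-cell then decays exponentially with cell length, so elongation alone weakens the inhibition of Z ring formation at the division site:

```python
import math

def midcell_inhibitor(L, decay_length=1.0, c_pole=1.0):
    """Concentration at mid-cell (x = L/2) of a polar inhibitor, modeled as
    the sum of two exponential gradients of the given decay length emanating
    from the two poles of a cell of length L (arbitrary units)."""
    return c_pole * (math.exp(-(L / 2) / decay_length)
                     + math.exp(-(L / 2) / decay_length))

# Mid-cell inhibition weakens as the cell elongates:
for L in (2.0, 3.0, 4.0):
    print(f"L = {L}: mid-cell concentration = {midcell_inhibitor(L):.3f}")
```

In such a scheme, a fixed sensitivity threshold of the division machinery translates directly into a critical length; this is the sense in which a polar gradient acts as a ruler for the instantaneous cell size rather than for a size increment.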
FtsZ, a Size Sensor?
In the fission yeast, very recent results suggest that the Cdr2 mitotic activator regulated by Pom1 could be the size sensor controlling mitotic entry (Pan et al., 2014). Cdr2 is localized in a band of cortical nodes at mid-cell, where it accumulates when the cell elongates. Pan et al. therefore proposed that a critical Cdr2 concentration is attained when the cell reaches a critical size, triggering mitotic entry. The bacterial FtsZ protein, the target of MinCD regulation, has also been proposed to act as a size sensor (Chien et al., 2012). In this model, division would occur when the amount of FtsZ inside the cell reaches a threshold. In support of this hypothesis, FtsZ levels are known to modulate the size at division in E. coli (Chien et al., 2012). In C. crescentus, FtsZ expression and degradation are regulated during the cell cycle. As a consequence, its concentration oscillates during the cycle with a maximum at the onset of constriction, after which FtsZ is degraded (Quardokus et al., 1996; Kelly et al., 1998). These dynamics are in agreement with division being triggered by a critical level of FtsZ. In contrast, in E. coli and B. subtilis, FtsZ is constitutively expressed and its concentration is constant throughout the cycle. Its total amount is therefore proportional to cell size.
The capacity of the yeast protein Cdr2 to serve as a size sensor mainly relies on its localization, so that the local concentration at mid-cell increases when cells elongate. Similarly, specific localization of FtsZ could be responsible for a size-dependent local concentration at mid-cell. In addition to ring structures, FtsZ can assemble into helices in E. coli and B. subtilis (Ben-Yehuda and Losick, 2002; Thanedar and Margolin, 2004; Harry et al., 2006). FtsZ has also been shown to oscillate from one pole to the other, out of phase with the Min system (Thanedar and Margolin, 2004; Bisicchia et al., 2013). The Z ring may therefore form from the reorganization of moving helices. However, helical structures of FtsZ have still to be confirmed. In particular, such helical structures can be artifacts of fluorescent fusions, as demonstrated for the MreB protein (Domínguez-Escobar et al., 2011; Van Teeffelen et al., 2011; Swulius and Jensen, 2012). Therefore, the localization of FtsZ is still unclear, but some size-dependence of the local concentration at mid-cell is possible, leading to a critical size for Z ring formation. Importantly, in E. coli and B. subtilis the ring is known to form well before constriction (Den Blaauwen et al., 1999). In addition, recent data suggest that at the single cell level, the timing of its formation is independent of the timing of constriction and of the cell length at birth (Tsukanov et al., 2011). It is therefore possible that once the ring is formed, the amount of FtsZ it contains increases up to a threshold at which the ring becomes able to recruit all the downstream components of the divisome and constrict. In this case, division timing could depend on cell size through FtsZ accumulation inside the ring.
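The threshold idea discussed above can be made concrete with a toy calculation (units and parameter values are arbitrary assumptions, not measurements): if FtsZ is expressed constitutively, its total amount is concentration times volume, so a fixed amount threshold is reached at a fixed critical volume, independently of the growth rate.

```python
import math

FTSZ_CONC = 1.0   # constant FtsZ concentration (assumed, arbitrary units)
A_STAR = 2.5      # FtsZ amount licensing constriction (assumed)

def volume_when_threshold_reached(v0=1.0, rate=math.log(2), dt=0.001):
    """Grow a cell exponentially from volume v0 until the total FtsZ amount
    (FTSZ_CONC * volume) reaches A_STAR, and return the volume at that point."""
    v = v0
    while FTSZ_CONC * v < A_STAR:
        v *= math.exp(rate * dt)
    return v

# Division is triggered near the critical volume A_STAR / FTSZ_CONC,
# regardless of growth rate:
print(round(volume_when_threshold_reached(rate=math.log(2)), 2))
print(round(volume_when_threshold_reached(rate=2 * math.log(2)), 2))
```

Note that such an accumulating quantity tracks the instantaneous size; in C. crescentus, where FtsZ is degraded at constriction, the analogous quantity would instead be reset each cycle and could track the increment since the last reset.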
Chromosome Segregation: a Size-Dependent Event?
Chromosome segregation occurs in three steps: separation of the newly replicated origins, bulk chromosome segregation, and separation of the replicated termini (Wang et al., 2013). Bulk chromosome segregation appears as an important transition, leading to the appearance of a DNA-free space at mid-cell and relief of the division inhibition mediated by Nucleoid Occlusion (NO). NO prevents the formation of the Z ring in the vicinity of the chromosome (Mulder and Woldringh, 1989; Wu and Errington, 2011). The mechanism of NO is not yet fully elucidated, but some molecular basis has been provided by the discovery of the Noc protein of B. subtilis and the SlmA protein of E. coli, which associate with DNA and inhibit FtsZ polymerization (Wu and Errington, 2004; Bernhardt and De Boer, 2005). Interestingly, these NO proteins bind to specific DNA sequences that decorate a large portion of the chromosome but are absent from the Ter region (Wu and Errington, 2011). Therefore, NO proteins are largely removed from mid-cell following bulk chromosome segregation.
The mechanisms underlying bulk chromosome segregation are still unclear, but entropic forces have been proposed to either drive or facilitate segregation (Jun and Wright, 2010; Di Ventura et al., 2013). Such forces can in theory segregate mixed chains of polymers in a situation of confinement. A single E. coli chromosome is more than 1 mm in length and has to be compacted more than 1000-fold to fit inside the cell (Wang et al., 2013). Replicated chromosomes are thus confined inside the cell, whose shape and size determine the intensity of the entropic forces. Importantly, if segregation is entropy-driven, it should be easier in longer cells. During cell elongation, the intensity of the entropic forces increases, which could lead to size-dependent segregation. In agreement with this hypothesis, experimental results on cell cycle progression in single cells suggest that the timing of nucleoid splitting depends on cell growth rather than on replication progression (Bates and Kleckner, 2005). Therefore, if driven by entropic forces, the very act of segregation could be a size-dependent signal, transmitted to the division machinery through the relief of NO.
Interestingly, the MinCD gradient has recently been shown to be involved in chromosome segregation (Di Ventura et al., 2013). MinD was shown to bind DNA and tether it to the cell membrane. The Min system therefore provides a gradient of membrane sites for DNA attachment. Computer simulations showed that entropic forces alone could not ensure full chromosome segregation. In contrast, full segregation would be possible if chromosomes repeatedly bind to membrane-associated sites forming a gradient emanating from the pole, such as provided by MinD. The concentration of MinD along the cell axis is likely to depend on cell length, as was demonstrated for Pom1 in the fission yeast (Martin and Berthelot-Grosjean, 2009;Moseley et al., 2009). Its involvement in segregation suggests another potential size-dependence for chromosome segregation.
The Initiation of Replication: Size-Sensing Through DnaA?
In the long-standing model of bacterial cell cycle initially proposed by Donachie, the coordination between growth, replication and division is mainly achieved through the control of replication initiation : the cell grows until reaching a critical size at which point replication is initiated and division occurs at a fixed time after initiation (Donachie, 1968). Donachie proposed that replication initiation could be controlled by a positive regulator, which accumulates proportionally to cell size and triggers initiation when reaching a critical amount. This critical amount would therefore correspond to a critical size. Later, DnaA was identified as the initiator protein in bacteria and has therefore been the natural candidate for Donachie's positive regulator (Løbner-Olesen et al., 1989).
I will focus here on E. coli, where replication initiation has been most studied and best characterized. In E. coli, the initiator DnaA is active when bound to ATP and inactive when bound to ADP. Active DnaA binds to specific DNA sequences in the replication origin region, leading to unwinding of the DNA sister strands in a neighboring AT-rich region and loading of the replisome (Katayama et al., 2010). This initiation event is tightly controlled in order to occur once and only once in each cell cycle. In particular, several mechanisms ensure the inactivation of DnaA following initiation, in order to prevent immediate reinitiation. These mechanisms include SeqA-dependent sequestration of OriC and transcriptional repression of the dnaA gene (Campbell and Kleckner, 1990), titration of DnaA to the datA chromosomal locus (Kitagawa et al., 1996), and DnaA inactivation through RIDA, a replication-coupled mechanism involving a complex of Hda, ADP, and the DNA polymerase sliding clamp (Kato and Katayama, 2001; Katayama et al., 2010). All these mechanisms ensure that replication initiation occurs only once in the cell cycle. After this wave of inactivation, the newly produced DnaA, which is rapidly converted to ATP-DnaA (Messer, 2002), accumulates in the cell until the next initiation event. In addition, some reactivation mechanisms convert ADP-DnaA to ATP-DnaA (Katayama et al., 2010). A mechanism involving cardiolipin, a membrane phospholipid, has been suggested (Sekimizu and Kornberg, 1988), and two specific DNA sequences, DARS1 and DARS2, have been demonstrated to promote the conversion of ADP-DnaA to ATP-DnaA (Fujimitsu et al., 2009).
As an outcome, the cellular level of ATP-DnaA oscillates during the cell cycle, with a maximum at the time of initiation (Kurokawa et al., 1999). Initiation has therefore been proposed to occur when the amount of ATP-DnaA reaches a threshold.
Interestingly, when DnaA is overexpressed several folds, the initiation timing is only slightly perturbed (Atlung et al., 1987;Kurokawa et al., 1999). In these conditions, it was shown that the proportion of the ATP and ADP-bound forms is unchanged (Kurokawa et al., 1999). Since these two forms might compete for binding DNA, initiation could be triggered when the ratio of ATP-DnaA to ADP-DnaA reaches a threshold (Donachie and Blakely, 2003). Thus, a simple model of initiation control, developed in Donachie and Blakely (2003), postulates that DnaA is synthesized constitutively, so that its total amount is proportional to cell mass, and immediately binds ATP (Messer, 2002). The ratio of ATP to ADP bound DnaA therefore would increase in parallel with cell size, until it reaches a threshold and triggers initiation of replication, leading to the inactivation of DnaA. In this model, the constitutive expression of DnaA and its immediate binding to ATP leads to an amount of active DnaA and a ratio of the active to inactive form both increasing linearly with cell size after replication initiation. It is still unclear how this linear relation could be affected by the reactivation of DnaA during the cycle, which is not taken into account in this simple model.
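A minimal numerical sketch of this threshold picture follows (all rates and thresholds are illustrative assumptions, not measured values): if the active-DnaA signal rises in proportion to added mass and is reset by inactivation at each initiation, successive initiations are separated by a constant added mass K/alpha rather than occurring at one fixed critical mass.

```python
import math

K_INIT = 1.0   # active-DnaA threshold triggering initiation (assumed)
ALPHA = 0.5    # active DnaA produced per unit of added mass (assumed)

def initiation_masses(m0=1.0, rate=math.log(2), dt=1e-3, n_events=4):
    """Simulate exponential mass growth with a DnaA-like signal that rises
    with added mass and is reset (inactivated) at each initiation event.
    Returns the cell mass at successive initiations."""
    m, signal, events = m0, 0.0, []
    while len(events) < n_events:
        dm = m * rate * dt          # exponential mass growth
        signal += ALPHA * dm        # constitutive synthesis, immediate ATP binding
        m += dm
        if signal >= K_INIT:        # threshold reached: initiate and reset
            events.append(m)
            signal = 0.0
    return events

masses = initiation_masses()
print([round(m, 2) for m in masses])
# Added mass between initiations is approximately constant (= K_INIT / ALPHA):
print([round(b - a, 2) for a, b in zip(masses, masses[1:])])
```

With this reset, the signal encodes the mass added since the last initiation, which is exactly the kind of increment measure discussed below in the context of the incremental model.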
Recent Revision of the Critical Size Paradigm of Size Control: the Incremental Model
In both yeast and bacteria, substantial correlations are observed at the single cell level between size at birth and size at division (Campos et al., 2014;Soifer et al., 2014;Taheri-Araghi et al., 2015). Such correlations argue against the classical model of cell size control in which division occurs at a critical size, up to some noise, with no memory of the size at birth. Indeed both yeasts and bacteria show some memory between one generation and the next: the cell's size at division is correlated to its size at birth, i.e., to the size at division of its mother.
Such a memory could in principle be the result of epigenetic inheritance of some molecules involved in the determination of the critical size. Gene expression naturally undergoes random fluctuations whose time scale is usually larger than a generation time. Such slow fluctuations can therefore generate correlations between the abundance of a protein in a cell and in its daughter (Rosenfeld et al., 2005;Longo and Hasty, 2006). When considering a molecular pathway involved in size measurement and control of cell cycle progression, such simple epigenetic memory would be likely to introduce some correlations throughout generations, such as between the size at division of a cell and its daughter's. The simple critical size model can be modified to take such memory into account. For instance an additional structuring variable can be added in the equations of the so-called sloppy size control model, using a division rate depending not only on the cell size but also on its size at birth (Osella et al., 2014). Using another mathematical framework, size at division can be described using an autoregressive model (Amir, 2014).
Strikingly, in organisms as different as E. coli, B. subtilis, C. crescentus, and S. cerevisiae, the dependence of size at division on size at birth is the same: a linear relationship with slope one (Campos et al., 2014; Soifer et al., 2014; Taheri-Araghi et al., 2015). It is worth noting that the case of C. crescentus is currently debated, since another dependence has been suggested by Iyer-Biswas et al., who proposed that the size at division is a multiple (≈ 1.8) of the size at birth, indicating a timer mechanism of division control (Iyer-Biswas et al., 2014). Nevertheless, the data obtained by Iyer-Biswas et al. were then analyzed independently by Jun et al. in a way similar to the analysis performed in E. coli, B. subtilis, and S. cerevisiae, giving a linear relationship with slope ≈ 1.2 (Jun and Taheri-Araghi, 2015). The linear relationship with slope one is precisely the prediction of the so-called incremental model, in which a cell tries to add a constant volume between birth and division (Sompayrac and Maaløe, 1973; Voorn et al., 1993; Amir, 2014). Therefore, even though some memory can be caused by slow fluctuations in gene expression, the value of the memory exhibited by the size at division through successive generations, and its conservation among several very divergent organisms, suggest a more profound revision of the classical critical size model: the cell does not divide at a critical size but tries to add a constant size to its size at birth.
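The slope-one prediction, and the way the incremental rule limits size variation, can be checked with a few lines of simulation (the noise amplitude and the target increment are arbitrary choices):

```python
import random

random.seed(0)

def incremental_lineage(n=20000, delta=1.0, sigma=0.1):
    """Simulate a lineage under the incremental rule: the cell divides at
    v_d = v_b + delta + noise, and each daughter is born at v_d / 2.
    Returns the list of (birth size, division size) pairs."""
    v_b, pairs = delta, []
    for _ in range(n):
        v_d = v_b + delta + random.gauss(0.0, sigma)
        pairs.append((v_b, v_d))
        v_b = v_d / 2.0
    return pairs

pairs = incremental_lineage()
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x in xs))
print(f"regression slope of v_d on v_b: {slope:.2f}")  # close to 1
print(f"mean birth size: {mx:.2f}")                    # converges to delta
```

The same simulation also shows why birth sizes stay bounded: v_b follows an AR(1)-type recursion with coefficient 1/2 around the fixed point delta, so fluctuations decay over generations instead of accumulating, consistent with the autoregressive description of size at division mentioned above.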
Control of Cytokinesis and Chromosome Segregation
Although the interpretation of the experimental data in terms of the incremental model does not by itself identify the underlying molecular mechanisms, it offers a phenomenological description and sheds new light on the possible mechanisms limiting cell size variations. As detailed in the previous sections, several mechanisms may be responsible for size-sensing and the size-dependent control of cytokinesis, chromosome segregation, and DNA replication initiation. Among these mechanisms, one may lead to the limitation of cell size variations; this mechanism should implement an incremental strategy.
Potential geometric sensors such as MinCD are unlikely to implement an incremental strategy: they could link the instantaneous division probability with the current cell size but could hardly measure a size increment since birth. Importantly, Campos et al. showed that a minC mutant of E. coli also exhibits an incremental strategy of size control, therefore demonstrating that the Min system does not play a crucial role in sensing the size increment (Campos et al., 2014).
Likewise, the process of chromosome segregation may be size-dependent but is unlikely to be related to a size increment. Entropic forces potentially driving or facilitating segregation could depend on the instantaneous cell size and geometry but hardly on the difference between the instantaneous size and the size at a previous cell cycle event. Similarly, the potential involvement of MinCD in segregation would rather create a dependence on the instantaneous cell size than on a size increment.
In E. coli and B. subtilis, the concentration of FtsZ is constant during the cell cycle. Its total amount is therefore proportional to cell size and not to the size increment since birth. FtsZ is thus unlikely to be a size increment sensor in these organisms. The situation is different for C. crescentus, where FtsZ is degraded at the onset of constriction (Quardokus et al., 1996;Kelly et al., 1998). The FtsZ-dependent size measure would therefore be reset at the end of the cycle when FtsZ is degraded. FtsZ levels could therefore in principle be correlated to the increment of size since birth in this organism.
Therefore, in E. coli and B. subtilis the limitation of cell size variations seems unlikely to result from size control at the level of cytokinesis or chromosome segregation. In contrast, in C. crescentus FtsZ could perform a measure of size increment.
Control of Replication Initiation
The incremental model postulates a constant size increment between two successive events of the cell cycle, such as between birth and division or between an initiation event and the next one. Interestingly, Campos et al. studied the possibility that the size increment could be applied at a cell cycle event other than division, such as replication initiation (Campos et al., 2014). They performed simulations of such a "phase-shifted" model and found that it was not compatible with their data on E. coli and C. crescentus. In particular, their model produced a negative correlation between the size increment (between birth and division) and the size at birth, whereas no such correlation exists in the data. Nevertheless, recent work by Ho and Amir (in this Frontiers Research Topic: The Bacterial Cell: Coupling between Growth, Nucleoid Replication, Cell Division and Shape) shows that this correlation strongly depends on the variability in the durations from replication initiation to division. If this variability is small compared to the variability in the duration of the whole cell cycle (less than 30 percent), then the correlation is close to zero, as experimentally observed. Campos et al. also report that their phase-shifted model produces abnormal cell size distributions. Nevertheless, using a different phase-shifted model, Ho and Amir show that cell size can be robustly regulated. Therefore, the results of the simulations of the "phase shifted" model of Campos et al. cannot be used to rule out the possibility of size control at the level of replication initiation.
When E. coli cells are shifted from a poor medium to a rich medium, the rate of mass increase immediately changes, whereas the rate of cell division is maintained at the pre-shift value for a lag of approximately 60 min, corresponding to the constant C + D period (Cooper, 1969). This phenomenon, called rate maintenance, supports the hypothesis of size control acting at the level of replication initiation (Cooper, 1969), such as proposed in Donachie's model (Donachie, 1968; Donachie and Blakely, 2003). Also in support of this hypothesis, cell size is exponentially dependent on growth rate, with an exponent of 60 min (i.e., the C + D period). This relation can be easily derived in the framework of the incremental model when the size increment is added between two successive replication initiation events, as shown in Amir (2014). If initiation is triggered by the size-dependent amount of active DnaA, its subsequent inactivation would reset the size measure at each initiation event. Donachie's model could therefore be revisited to account for the incremental strategy: initiation would be triggered not at a critical size but when a critical size has been added since the last initiation event (see Figure 1). Division would follow a constant time after initiation, through an as-yet unknown mechanism. As mentioned above, in addition to the ATP-DnaA formed by de novo DnaA expression, ADP-DnaA can be partly reactivated during the cycle. How the amount of ATP-DnaA varies with cell mass between two initiation events is therefore not completely clear. In addition, initiation may be triggered by other DnaA-dependent variables, such as the ratio of the active to inactive forms (Donachie and Blakely, 2003). It is still unclear what the triggering signal for initiation is and how it is linked to cell size. Nevertheless, the inactivation of DnaA following initiation is crucial in initiation control and appears as an interesting basis for the implementation of an incremental size measure.
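The growth-rate dependence quoted here follows directly from the revisited Donachie picture: if initiation occurs at a fixed size (per origin) and division follows a constant C + D period later, exponential growth during that period multiplies the size by 2^((C+D)/tau). A small sketch (the initiation size is set to 1 arbitrary unit for illustration):

```python
# Donachie-style relation: fixed initiation size per origin, fixed C + D,
# exponential growth with doubling time tau (minutes). Initiation size is
# an assumed illustrative value.
S_INIT = 1.0
C_PLUS_D = 60.0  # minutes

def size_at_division(tau_min):
    """Size at division = initiation size grown exponentially for C + D minutes."""
    return S_INIT * 2.0 ** (C_PLUS_D / tau_min)

for tau in (120.0, 60.0, 30.0):
    print(f"doubling time {tau:5.1f} min -> size at division "
          f"{size_at_division(tau):.2f}")
```

Size at division is thus exponential in growth rate (1/tau) with "exponent" C + D = 60 min, matching the classical physiological observation.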
The incremental model can satisfactorily describe E. coli, B. subtilis, and C. crescentus, as well as the budding yeast S. cerevisiae. The conservation of this size control strategy among widely divergent organisms is striking and suggests some common organizing principle. Interestingly, even though the molecular mechanisms involved in replication initiation are different among these organisms, common control principles can be found. For instance, the negative regulation of OriC after initiation is common to E. coli, B. subtilis, and C. crescentus (Katayama et al., 2010). Also, the principle of RIDA, which inactivates DnaA in E. coli in a replication-coupled manner through the action of the polymerase sliding clamp, is widely conserved (Katayama et al., 2010). The clamp-mediated inactivation of initiation proteins has also been demonstrated in B. subtilis (Soufo et al., 2008), C. crescentus (Collier and Shapiro, 2009), and in several eukaryotic organisms (Arias and Walter, 2006; Nishitani et al., 2006; Katayama et al., 2010). For all these organisms the initiation potential may therefore fluctuate in the cell in a way similar to the DnaA-related signal in E. coli. This may lead to a common size control strategy, such as described by the incremental model.

Figure 1 (Amir, 2014): For simplicity, a slowly growing E. coli cell is represented. The colored bar represents the dynamics of the size increment S through the cell cycle. After initiation of replication, S increases until division. At division, S is shared between the two daughter cells. In each daughter cell, S then increases up to the critical value, leading to replication initiation and resetting of the size increment (S = 0).
Conclusion
The identification of a size-sensing molecule has been a long-lasting quest and has proven surprisingly difficult, as evidenced by the recent controversy on the role of Pom1 in the fission yeast. This might indicate that size-sensing is generally not the function of a single molecule but is rather a systems-level property. In other words, several proteins inside a regulatory pathway may participate in the size-dependence of a cell cycle event. As an example, in E. coli the local concentrations of MinCD and FtsZ at mid-cell might both change when the cell elongates and could in concert determine the size-dependence of cytokinesis. Importantly, numerous cell cycle regulators exhibit specific localization patterns, such as CtrA and MipZ in C. crescentus, and MinCD, SlmA and DnaA in E. coli (Bernhardt and De Boer, 2005;Goley et al., 2007;Lutkenhaus, 2007;Boeneman et al., 2009;Nozaki et al., 2009). Such subcellular localization can readily result in a size-dependent local concentration. When a protein is constitutively expressed, its total amount is proportional to cell size. If such a protein is localized in a subcellular volume that does not change proportionally to the total volume when the cell grows, its local concentration is size-dependent. As a simple example, if the subcellular volume is constant, the protein concentration is proportional to the total cell volume. Several regulators in a single pathway can be localized, in particular since some events such as cytokinesis are spatially controlled, and several size-dependent signals may therefore co-exist, leading to a complex size control of the event.
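The simple example above can be made concrete with a toy calculation (all numbers are arbitrary illustrations, not measured values): for a constitutively expressed protein the total amount scales with cell volume, so if the protein is recruited to a subcellular compartment of fixed volume, its local concentration grows in proportion to cell size while the bulk concentration stays constant.

```python
# Toy illustration of how localization converts cell growth into a
# size-dependent local concentration. Assumptions (illustrative only):
# expression is constitutive, so total amount = K * cell_volume, and
# the protein is recruited to a compartment of FIXED volume.

K = 100.0         # protein amount per unit cell volume (arbitrary units)
SUB_VOLUME = 0.1  # fixed subcellular volume (e.g., a pole region)

def bulk_concentration(cell_volume, k=K):
    # total amount divided by total volume: independent of cell size
    return (k * cell_volume) / cell_volume

def local_concentration(cell_volume, k=K, sub_volume=SUB_VOLUME):
    # all molecules recruited to the fixed subvolume: scales with size
    return (k * cell_volume) / sub_volume

for v in (1.0, 2.0):
    print(f"V={v}: bulk={bulk_concentration(v)}, local={local_concentration(v)}")
```

Doubling the cell volume leaves the bulk concentration unchanged but doubles the local concentration, which is exactly the kind of readout a size-control pathway could exploit.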
Here I described several regulatory pathways that could lead to size-dependent regulation of replication initiation, chromosome segregation and cytokinesis. In principle they could all be responsible for cell size control, i.e., for the observed dependence of the interdivision time on the size at birth at the single cell level. Old physiology experiments, in particular those demonstrating the "rate maintenance" phenomenon, suggest that size control in E. coli acts primarily at the level of replication initiation (Cooper, 1969). In agreement with this hypothesis, the dynamics of DnaA activity could potentially explain the observed incremental strategy for size control. The inactivation of DnaA following initiation could reset some DnaA-dependent signal, which could therefore be linked to a size increment. In addition, the widely conserved principles underlying replication initiation control are in agreement with the common incremental strategy found in three divergent bacteria, as well as in a unicellular eukaryotic organism. In the budding yeast, size control is known to act primarily at the Start transition, i.e., the onset of replication, with a modulation of the G1 duration according to cell size at birth (Johnston et al., 1977;Turner et al., 2012). Experimental data are also compatible with size control acting slightly earlier, for instance at the level of origin licensing (Soifer et al., 2014). In contrast, in S. pombe, size control is known to act primarily at the G2/M transition (Fantes and Nurse, 1977;Turner et al., 2012). Interestingly, a recent analysis of single cell growth and division in this organism indicates that it does not follow the incremental model (Nobs and Maerkl, 2014), suggesting that the incremental strategy could be a property of the size control provided by the regulatory mechanisms of replication initiation. It would be interesting to study at the single cell level the wee1 mutant of S. pombe, which exhibits size control at the G1/S transition, and to determine whether size control follows an incremental strategy. In bacteria, DnaA-dependent initiation may provide the principal mechanism limiting cell size variations. Other cell cycle transitions may be size dependent, such as cytokinesis or chromosome segregation, leading to secondary size control mechanisms that could be revealed in specific environmental or genetic contexts, as demonstrated in yeasts.
|
2016-06-17T06:07:49.677Z
|
2015-05-29T00:00:00.000
|
{
"year": 2015,
"sha1": "44870cc7568d3f2fd29b8075d6a26d705262e155",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2015.00515/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "44870cc7568d3f2fd29b8075d6a26d705262e155",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
14322427
|
pes2o/s2orc
|
v3-fos-license
|
Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana) , common tegu ( Tupinambis merianae) and bearded dragon ( Pogona vitticeps)
Background Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing popularity of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. Results 6 adult green iguanas, 4 tegus and 3 bearded dragons, as well as the adult cadavers of 4 green iguanas, 4 tegus and 4 bearded dragons, were included in the study. For each species, 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned. These latter specimens were stored in a freezer (−20°C) until completely frozen. Transverse sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post-contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT, contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana, whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon.
Conclusions The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be useful in interpreting any imaging modality involving these species.
Background
Nowadays, reptiles are treated at veterinary practices in a context where both reptile and amphibian medical expertise is constantly improving as more advanced knowledge of these species is scientifically validated [1]. Owners now expect and demand more targeted and expert diagnostic testing for their animals, on the basis of these advances [1]. Reptile medicine has in fact stirred a certain interest in recent years, which is illustrated by the numerous publications validating diagnostic, surgical and anesthesiological techniques in reptile patients [2][3][4][5][6][7][8][9][10][11][12][13][14].
Radiographic evaluation of the skull and vertebral column is the most economical and readily available imaging modality for the exotic animal clinician [1]. Furthermore, routine access to CT is becoming increasingly common and many specialty practices have a CT scanner on site [1]. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy [15,16].
To the best of our knowledge, no comprehensive description of the radiographic and computed tomographic features of lizard heads is currently available. Therefore, the aim of this study is to describe and compare the normal radiographic and CT features of the head of some of the most popular pet lizards. Green iguanas and bearded dragons are common reptile pets [17,18]; tegus have recently become popular among collectors and breeders, although no official data exist on this topic. This study has resulted in a series of tables matching the normal anatomic, radiographic and computed tomographic features of these species.
To the best of the authors' knowledge, an updated, univocal anatomical reference for the species considered in this study is not available; nevertheless, the anatomy of closely related species has already been thoroughly described and several references describing the anatomy of the head of lizards are currently available [19][20][21][22][23][24][25][26][27][28].
The heads from adult cadavers of 4 green iguanas (3 females and 1 male, mean length 85.1 ±18.1 cm, mean weight 2824 ± 856 g), 4 tegus (2 females and 2 males, mean length 42.5 ± 15.1 cm, mean weight 1587 ± 785 g) and 4 bearded dragons (1 female and 3 males, mean length 18.1 ± 7.1 cm, mean weight 385 ± 49 g) were dissected for this study. The animals were referred to the Radiology Unit of the Department of Animal Medicine, Production and Health at the University of Padua (Italy) for specialty examination and were euthanized because of advanced clinical conditions. A complete post-mortem gross examination was performed on each lizard and revealed pneumonia in 4 cases (2 iguanas and 2 bearded dragons), egg retention in 3 cases (1 iguana and 2 tegus), diffuse abscesses in 2 cases (1 tegu and 1 iguana); no lesions were evident in 3 cases (1 tegu, 2 bearded dragons).
Additionally, 13 live adult animals referred to the above facilities for specialty examination were included in the study: 6 iguanas (4 males and 2 females, mean length 90.8 ± 20.1 cm, mean weight 3298 ± 922 g), 4 tegus (1 male and 3 females, mean length 55.2 ± 22.1 cm, mean weight 2156 ± 655 g) and 3 bearded dragons (3 females, mean length 22.1 ± 5.4 cm, mean weight 455 ± 75 g). The pathologies affecting the animals did not involve the head in any of the cases and no pathological findings were evident at clinical examination of the head. For this reason, the imaging procedures were extended to the head of the animal upon prior consent from the owner.
Imaging procedures
Complete radiographic studies of the head, including LL (left lateral), right lateral, VD (ventrodorsal) and dorsoventral projections, were performed on all specimens and live animals. Radiographs were obtained by means of a computed radiography device (Kodak Point of Care CR-360 System-Carestream Health, Inc -Rochester, USA). All radiographs were displayed with a bone-edge slight enhancement filter.
CT examination was performed on all the live subjects. These animals were sedated with sevoflurane (Sevoflurane 100%, Baxter Spa, Rome, Italy) administered through a face mask. All CT studies were performed in a cranio-caudal direction with the animals in ventral recumbency. Pre- and post-contrast CT images were obtained in transverse planes by means of a third-generation CT scanner (Tomoscan® LX, Philips Medical Systems, Amsterdam, Holland). The CT parameters were: axial acquisition mode, rotation time of 2.9 s, voltage of 120 kV, amperage of 125 mA, and slice thickness of 1.5 mm. Contrast medium (Optiray® 300 mg/ml, Covidien Spa, Italy) was injected directly into the caudal vein at the dose of 2.2 ml/kg bw. The images were then displayed in a bone tissue window (window length: 500; window width: 2,000) and a soft tissue window (window length: 40; window width: 400). Only post-contrast images have been reported.
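The contrast-medium dose reported above (2.2 ml/kg body weight) translates into the following injection volumes for the mean body weights of the live animals in this study (a simple arithmetic illustration using the weights reported earlier; individual animals would of course be dosed on their actual weight):

```python
# Contrast medium volume at the reported dose of 2.2 ml/kg body weight,
# computed for the mean body weights of the live animals in the study.

DOSE_ML_PER_KG = 2.2

mean_weight_kg = {
    "green iguana": 3.298,    # reported mean weight 3298 g
    "tegu": 2.156,            # reported mean weight 2156 g
    "bearded dragon": 0.455,  # reported mean weight 455 g
}

def contrast_volume_ml(weight_kg, dose=DOSE_ML_PER_KG):
    """Injection volume (ml) for a given body weight, rounded to 0.01 ml."""
    return round(weight_kg * dose, 2)

for species, w in mean_weight_kg.items():
    print(f"{species}: {contrast_volume_ml(w)} ml")
```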
Anatomical dissections
Stratigraphic gross anatomical dissection of the cadaver heads was performed in 2 green iguanas, 2 tegus and 2 bearded dragons. The dissections were performed within 24 h of death in each patient to minimise post-mortem changes.
Two green iguana, 2 tegu, and 2 bearded dragon cadaver heads were designated for cross-sectional studies. Immediately after death these latter specimens were placed on a plastic support in ventral recumbency with the legs adherent to the body and subsequently stored in a freezer (−20°C) until completely frozen. Cross-sectional anatomic dissection was performed by means of an electric band-saw. Contiguous 5 mm transverse slices were obtained starting at the snout and reaching the cranial portion of the lungs. The slices were cleaned with water, numbered and photographed on the cranial and caudal surfaces.
The individual anatomic structures were first identified in the anatomically dissected and cross-sectioned heads, on the basis of anatomic references, and then matched with the corresponding structures in the radiographs and CT scans.
Results
Most of the clinically relevant structures of the head were recognised both in cross-sectional and anatomic dissections (Figures 1, 2, 3, 4, 5, 6, 7, 8 and 9). Right lateral and dorsoventral radiographic projections are not shown because the radiographic images were identical to the LL and VD projections, respectively. Figures 1A, 4A and 7A show ventral views of the superficial plane of the stratigraphic dissection (only skin was removed). Figures 1B, 4B and 7B show ventral views of the stratigraphically dissected heads: 1) after removal of musculus constrictor colli and musculus intermandibularis, and 2) at a deeper dissection plane (the deeper dissection plane is marked on the figure). Two dissection planes, superficial, where only skin was removed (Figures 2A, 5A, 8A), and deep (Figures 2B, 5B, 8B), are shown in LL views of the stratigraphically dissected heads.
The bony structures were clearly defined on the radiographs of all species (Figures 1C, 2C, 4C, 5C, 7C, 8C). Some soft tissues, such as the oesophagus, the trachea and the masticatory muscles, could be also identified.
A selection of matched cross-sections and CT images is shown in Figures 3, 6 and 9. In Figures 3A, 6A and 9A, the lines superimposed on the photographs indicate the approximate level of the matched images displayed in the corresponding figures. The level of the cross-sections displayed in the images is similar for all the species considered in order to emphasise the comparative imaging features. A small amount of fluid was noticed in the oesophagus of the cross-sectioned iguanas (Figure 3H).
Discussion
The skull of lizards belonging to the infraorder Iguania is roughly triangular in dorsal view, with a short pre-orbital region. It retains all the characteristics of the ancestral lizard skull with no secondary closure of the skull openings [15]. The main difference between Iguanidae and Agamidae is in the nature of tooth implantation (acrodontal rather than pleurodontal), although most agamids have at least some pleurodont teeth in the front lower jaw [15].
Standard positioning is a prerequisite for good film quality [29]. As in snakes, the junctions between the bones composing the lower jaw are relatively loose [13]. Therefore, radiographic positioning quality should be evaluated, in our opinion, primarily through the symmetry and superimposition of the bilateral structures of the snout and neurocranium both in lateral and sagittal projections.
All the radiographs and CT images shown in the figures were obtained from live animals. It is the authors' opinion that, although a direct comparison between anatomical and diagnostic images was not possible, a significant correlation in the matched images was achieved. Moreover, the use of contrast medium enabled good visualisation of the soft tissues and overcame the lack of soft tissue detail encountered in other similar works performed only on cadavers [8].
The CT images of the iguana (Figure 3) provided good detail when displayed both in bone and soft tissue windows. The CT images of the tegu (Figure 6) displayed in a bone window provided good detail of the bony structures whereas in the images displayed in a soft tissue window only the eyes were clearly distinguishable from the remaining soft tissues. The CT images of the bearded dragon (Figure 9) were of a relatively lower quality in both the bone and soft tissue windows. This lack of detail is due to the small size of this species and to the impossibility of reducing the field of view of our CT scanner to less than 16 cm. Despite this lack of detail, most of the bony structures were identified in CT images displayed in the bone window. In the CT images displayed in the soft tissue window, only the eyes and the Harderian glands were distinguishable from the other soft tissues of the head.
The nasal cavity of all three species was clearly visible both in VD (Figures 1C, 4C, 7C) and LL (Figures 2C, 5C, 8C) radiographs. The septomaxilla and the nasal glands were very prominent in the LL radiographic projection of the iguana (Figure 2C), giving the nasal cavities an overall higher radiopacity than in the tegu (Figure 5C) and the bearded dragon (Figure 8C). The nasal cavity of the iguana appeared almost entirely occupied by the nasal glands both in cross-sections and CT scans (Figures 3B, C, D). The nasal glands in the bearded dragon (Figures 9B, C, D) were similar in appearance but less prominent in the nasal cavity. The nasal glands in the tegu were more medially located and less evident both in cross-sections and in CT images (Figures 6B, C, D).
The scleral ossicles were clearly visible both in the iguana (Figure 2C) and the tegu (Figure 5C) in LL radiographic projections, whereas they appeared less evident in the LL radiographic projection of the bearded dragon (Figure 8C). Furthermore, the scleral ossicles were evident in CT images of all the examined species (Figures 3F,G, 6F,G, 9F,G) but were hardly visible in anatomic cross-sections (Figures 3E, 6E, 9E).
The bones composing the lower jaw were not individually evident either in LL or VD radiographic projections in any of the examined species because the sutures between the bones of the lower jaw are smaller than the minimum radiologic resolution.
The oesophagus was well defined in LL radiographs of all the examined species (Figures 2C, 5C, 8C). It appeared as a U-shaped radiolucency bordering the caudal aspect of the lower jaw in the iguana (Figure 2C) and bearded dragon (Figure 8C) whereas it appeared straighter in the tegu (Figure 5C). Furthermore, the dorsal portion of the oesophagus is partially superimposed on the highly developed masticatory muscles in the iguana and bearded dragon (Figures 2A and 8A, respectively) and is grossly much larger, as can be seen in the radiographs.
The trachea was clearly identified in the LL radiographic projection, partially superimposed on the ventral aspect of the oesophagus, in the iguana (Figure 2C) and tegu (Figure 5C), while it was difficult to identify in the bearded dragon (Figure 8C). In the iguana (Figure 2C) the trachea showed a peculiar outline, making two ninety-degree turns at the inlet of the coelomic cavity. All the animals were deeply sedated during the imaging procedures; the authors are unable to determine whether this latter feature is due to muscular relaxation induced by anaesthesia or is normal even in unsedated iguanas.
The eyes were very evident in all the species examined, both in cross-sections and CT images displayed in a soft tissue window (Figures 3E, G, 6E, G, 9E, G). The lens and the vitreous were clearly delineated both in cross-sections and CT images in the iguana (Figures 3E, G) and tegu (Figures 6E, G). The lens and the vitreous were not individually identified either in cross-sections or CT images in the bearded dragon (Figures 9E, G). In the iguana the Harderian glands were hardly visible in cross-sections (Figure 3E) while they were very prominent and clearly distinguishable from the underlying sinus orbitalis in CT images (Figure 3G). A radiodense line was noticed on the ventral aspect of the sinus orbitalis in post-contrast CT images of the iguana displayed in a soft tissue window (Figure 3G). In the tegu the Harderian glands were clearly evident in cross-sections (Figure 6H) but were difficult to identify in the CT images (Figure 6L). In the bearded dragon the Harderian glands could be identified both in cross-sections (Figure 9E) and CT images (Figure 9G) but were less evident than in the iguana.
The head of lizards can be affected by several pathologies: abscesses, metabolic bone diseases, fractures, osteomyelitis and neoplasia [30][31][32][33][34]. Bites from prey, trauma, foreign bodies and pyogenic infections may lead to the formation of abscesses in the head of lizards; this is especially true in captive specimens, as they are often immunocompromised and more prone to develop inflammation [31]. Successful treatment of a reptilian abscess depends on the complete removal of the abscess cavity and the surrounding fibrous capsule [31]. Therefore, both radiographic and CT studies could be helpful to determine the extent and the number of involved structures for a good pre-surgical evaluation. Moreover, CT studies may become mandatory to achieve a correct diagnosis in deep abscesses that have no external protrusion.
Metabolic bone disease may result from different underlying pathologies in reptiles [32]. To this effect, conventional radiology and dual-energy x-ray absorptiometry (DXA) techniques of the head of lizards are obvious diagnostic tools, and may also successfully contribute to the follow up evaluation of the relative treatments [2,14,32].
Skull fractures of traumatic or pathological origin (i.e., metabolic bone disease, neoplasia) are not uncommon in reptiles [34]. Plain or conventional radiology is not considered the technique of choice to assess a skull fracture due to: 1) the relatively small size of the bones composing the skull, and 2) the high risk of border effacement caused by the superimposition of different tissues that is intrinsic to the technique. By contrast, it is the authors' opinion that CT may be useful to correctly assess skull fractures in lizards and, more generally, in reptiles.
The mandible and the maxilla are the most common sites of osteomyelitis in the head; changes in their bony structure include osteolysis and, less frequently, new bone formation [30]. Although plain radiographs may be sufficient to diagnose this pathology [30], in our opinion CT scans may provide additional information on the extent of the infectious process and aid in planning a correct therapy.
Head/skull neoplasia is considered a quite common disease affecting reptiles [33]. Since lizard patients often require sedation for diagnostic imaging procedures [30], it is the authors' opinion that CT may be the imaging technique of choice if neoplasia of the head is suspected. The major advantages of CT are the possibility to fully visualise the extent of the neoplastic process and the possibility to perform CT-guided fine needle aspirates or biopsies [35].
The skull of reptiles develops from a post-hatching cranium, the chondrocranium, to its definitive form in adults, the osteocranium [15]. Often the bones composing the chondrocranium have not reached full adult form [15]. All the tables presented in this work refer to adult animals and care should be taken when using these tables to interpret radiographs or CT studies of immature animals.
Conclusions
The radiographic and CT features described in this work will form a useful basis for interpreting imaging studies of the head of the green iguana, tegu and bearded dragon, as well as providing a more general anatomic reference for structures of the head of these species. Nevertheless, if this work is used in the interpretation of medical images from species other than those described here, the interspecific differences should be known and considered.
|
2016-05-04T20:20:58.661Z
|
2012-05-11T00:00:00.000
|
{
"year": 2012,
"sha1": "0bc2269fcd8fff82da69b33d25983ff801bbcc19",
"oa_license": "CCBY",
"oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/1746-6148-8-53",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "577c357973ecd28bc9b7ecc2ee5b1b1b2fedd770",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
135310253
|
pes2o/s2orc
|
v3-fos-license
|
Potential Explanations for Why People Are Missed in the U.S. Census
Knowing the characteristics of people most likely to be missed in the Census is not the same as knowing why they are missed. In this Chapter, information is provided on several of the leading ideas about why people are missed in the Census, along with data related to many of the ideas. The topic is first approached from a broad theoretical perspective; then more detailed reasons are examined. The chapter draws heavily on the literature in survey research methodology.
Introduction
Most of this book has focused on the characteristics of people and groups that have high census net undercount and omissions rates. But knowing the characteristics of people who are missed is different from knowing why they are missed. This Chapter explores several possible reasons why people are missed in the U.S. Census and examines statistical data related to many of the ideas. Some of the ideas examined here reflect broad factors (like the characteristics of households and living arrangements) and some reflect narrow factors (like the imputation algorithm used by the Census Bureau).
It is widely believed that there is no single reason why people are missed in the Census. With respect to differential undercounts of racial and ethnic groups, de la Puente (1993, p. 2) captured the dominant perspective more than 20 years ago: "Empirically and in the aggregate, there is no single reason why a disproportionate number of ethnic and racial minorities were not counted by the 1990 census. Rather, there are a constellation of factors that interact and contribute to the differential undercount."
The quote above focuses on undercounts by race and ethnicity, but the lack of a single predominant reason for people being missed in the Census goes beyond race and ethnic groups. With respect to the high net undercount of young children, the Census Bureau Task Force on the Undercount of Young Children (U.S. Census Bureau 2014) concluded, "The task force is convinced that there is no single cause for this undercount".
Despite several reports that address the issue of why people are missed and/or barriers to enumeration in the Census (U.S. Census Bureau 1992;de la Puente 1993;Martin and de la Puente 1993;Simpson and Middleton 1997;Schwede and Terry 2013;West and Fein 1990), there is no consensus framework for examining this issue. Given that the Decennial Census is a large and complex operation, that people are widely believed to be missed in the Census for many different reasons, and that researchers approach census omissions from many different disciplines and perspectives, it is not surprising that there is no consensus on the best framework to use for understanding this problem.
What Is an Omission?
Determining what counts as an omission in the Census is not as easy as it may seem. For example, in the 2010 Census there were 6 million whole person imputations (U.S. Census Bureau 2012a, Table 6). People are imputed into the Census count if they don't self-respond and there is good evidence they exist (for example, a housing unit that looks occupied), but the Census enumerator can never find anyone at home. If a person does not respond to the Census but data are imputed to represent them in the census count, is that an omission? Also, some households and persons are included in the Census by proxy respondents. If a Census enumerator is unable to contact residents of a household after several attempts, the enumerator may seek data on the household from someone like a neighbor or a landlord. These are referred to as proxy respondents. Should people included in the census count by proxy respondents be called omissions?
It is also important to make a distinction between people who are totally missed in the Census and those who are included but misclassified. For example, if a Black person is included in the Census, but has their race coded as White (perhaps because that characteristic was imputed) that would not impact overall omissions, but it would impact the omissions for Blacks and Whites.
Broad Ideas About Why People Are Missed in the Census
Many of the ideas or theories about why people are missed in the Census are linked to broader questions of non-responsiveness in surveys (National Research Council 2013; Groves and Couper 1998). Several leading theories for understanding omissions in surveys and censuses from a theoretical perspective, including social capital theory, leverage-saliency theory and social exchange theory, are discussed in a report by the National Research Council (National Research Council 2013, pp. 33-36).
Robinson et al. (2007)
A report by Robinson et al. (2007) lists several barriers to census enumeration, and in Appendix 1 of the report they explain how each barrier is tied to the likelihood of being missed in the Census. Table 13.1 lists the barriers offered by Robinson et al. (2007) and a short explanation of how each barrier could result in someone not being counted in the Census.
Other researchers have provided similar ideas about why people are missed in the Census (West and Fein 1990;de la Puente 1993;Schwede et al. 2015).
People Missed in the Census Due to Failure of Steps in the Data Collection Process
Another approach to understanding why people are missed in the Census is offered by researchers who examine the steps in the data collection process and try to understand which step failed and why (Biemer et al. 1991). Many researchers start by decomposing Census coverage along the lines of Olson (2009), who posits that any omission in the Census-taking process must come from a failure of one of the three steps below:
• Failure to enumerate the housing unit,
• Failure to get a complete and accurate roster of household members,
• Failure to get information for a person on the roster.
To restate these three reasons for omissions: the first means the household was not enumerated, while the second and third mean the household was enumerated but not everyone in the household who should have been included was included in the Census count (U.S. Census Bureau 2012a). It is easy to understand why someone living in an overlooked housing unit might be missed in the Census. It is unlikely that the omitted housing units received a census questionnaire, and therefore it is unlikely that occupants of those housing units were included in the 2010 Census.
Missing Households
The number of persons in the missed households is not directly available from the Census Bureau, but if the 2.2 million occupied households that were missed had the average number of persons per household in the 2010 Census (2.58) (U.S. Census Bureau 2018a, Table 3), the missed households would account for 5.7 million missed persons out of a total of almost 16 million omissions in the 2010 Census (U.S. Census Bureau 2012b, Table 2). Missed housing units appear to be a big reason for the high number of omissions in the 2010 Census. There are several reasons why housing units may be missed in the Census. In some cases, homeowners turn a basement or a garage into a separate housing unit, so what appears to be a single-family residence is a multi-unit structure and the new housing units are easy to miss. Mahler (1993) indicates landlords sometimes create illegal units in multi-unit structures and they are reluctant to reveal the existence of these units.
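The back-of-the-envelope estimate above can be reproduced with a short calculation using the figures already cited (2.2 million missed occupied households, a 2010 average household size of 2.58 persons, and roughly 16 million total omissions):

```python
# Reproducing the estimate of persons missed because their whole
# household was missed in the 2010 Census, using the figures cited
# in the text.

MISSED_HOUSEHOLDS = 2_200_000    # occupied housing units missed (2010)
PERSONS_PER_HOUSEHOLD = 2.58     # 2010 Census average household size
TOTAL_OMISSIONS = 16_000_000     # approximate total omissions (2010)

missed_persons = MISSED_HOUSEHOLDS * PERSONS_PER_HOUSEHOLD
share_of_omissions = missed_persons / TOTAL_OMISSIONS

print(f"{missed_persons:,.0f} persons, {share_of_omissions:.0%} of omissions")
```

The result, about 5.7 million persons, is roughly a third of all omissions, which is why missed housing units loom so large as an explanation.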
In many rural areas housing units may be far off the road and hidden from view, so they are not included in the Master Address File. For example, de la Puente (1993, p. 15) states: "In rural areas, unlike in urban areas, unmarked and/or hidden roads and mismatches between mail delivery address of housing units and the actual physical location of the housing unit are conditions associated with the omissions of housing units from the Census."
Also, in both urban and rural settings Census Bureau canvassers must determine if a housing unit is inhabitable. If a building is deemed uninhabitable it will not receive a Census questionnaire. Making a judgment about the inhabitability of a structure is difficult, despite guidance provided by the Census Bureau; but just as important, some structures that are deemed uninhabitable may actually have people living there. Kissam (2017) and Fein (1989) provide detailed descriptions of some of the many ways housing units are likely to be missed in the Census.
People Omitted on Census Questionnaires that Are Returned
In addition to people who are not counted because their whole household was missed, some people may be missed because they were left off Census questionnaires. People may be omitted because they are left off a questionnaire that was mailed back, missed by an enumerator during the Non-Response Followup operation or possibly missed in another way such as an incorrect proxy response, a data processing error, or incorrect imputation.
Most omissions can be classified as the result of either confusion or concealment. Confusion can lead to a missed person if the respondent is confused about whether a given person should be thought of as part of the household or family and included in the census questionnaire. Confusion can also lead to a person being missed if the respondent is unsure if a certain type of person (like a young child or noncitizen) is supposed to be included in the Census.
Other people may be left off Census questionnaires because the respondent wants to conceal a person or whole household from the authorities. There are many reasons why a person might be concealed. For example, some people fear the federal government and they see the Census Bureau as just another federal government agency. Growing fear of the federal government in many communities is likely to be a big factor in the 2020 Census.
People Omitted in the Census Because of Confusion
The "usual place of residence" is a key concept used by the Census, but Martin (1999, 2007) argues that concept is not always clear to respondents and attachment to a single household may not apply to some people. Martin talks about the concept of "residential ambiguity" which reflects uncertainty about whether an individual belongs to a housing unit or household. Moreover, most of the rules respondents usually use to determine who they think lives in their household (economic contributions, doing household chores, receiving mail at the address) are not the rules used in the Census, so respondents may be confused about the concepts of families and households used in the Census. According to West and Robinson (1999, p. 10), The Census rules of residence instruct that the person in whose name the house or apartment is owned, being bought or rented be listed as person 1 on the form. The respondent is then asked to identify members of the household in relation to person 1. This often contradicts the respondent's notion of family or household.
It is easy to understand how someone who is only marginally attached to a housing unit may be missed in the Census. Whoever is filling out the Census questionnaire for a household may think the marginally attached person does not really live in the housing unit, so they are not entered on the Census roster. It is also feasible that the person filling out the census questionnaire may think someone who does not live at a housing unit continuously, for example, children in joint custody or someone who travels on business regularly, is being counted elsewhere. In reporting on confusion on the part of respondents in the 2010 Census, Schwede and Terry (2013, p. 89) concluded, "Additionally, the situation of mobility of people cycling between housing units and trying to determine from time spent in each where they should be counted was a reason for inconsistencies".
The problems with residence rules have been noted before. According to The National Research Council (2004, p. 153), The current definition of residence rules is confusing both to field enumerators and to residents. Difficulties arise for people with multiple residences, including those with movement patterns that are primarily within a week, or those that move seasonally. Such movement patterns are typically true of retirees, those involved in joint custody arrangements, those with weekend homes in the country, students away at college during the school year, and people temporarily overseas.
One common situation related to confusion about the occupants of a housing unit is people living in complex households (Schwede 2018). For example, West and Robinson (1999, p. 9) describe one situation that may lead to a child being missed in the Census.
A child who resides in a diverse household structure and in a unique living arrangement among multiple nuclear families…Unusual living arrangements involving people that make it difficult for the respondent to roster the household correctly on the Census form, e.g. presence of multiple nuclear families, unrelated people or step people of the respondent.
Complex households often involve the presence of subfamilies in a household and that can make correct rostering of household members more complicated. One official at the Census Bureau "…noted that she was aware of instances with multiple families, for example, where the household respondent did not include people in the second family" (cited in U.S. Census Bureau 2014, p. 16). Martin (2007) contends people more remotely linked to the person filling out the census questionnaire are more likely to be missed.
A report from the U.S. Census Bureau (2016) indicates that many young mothers were not included in the 2010 American Community Survey (ACS) and therefore they argue many were probably unreported in the Census. Many of the young mothers missed in the ACS were probably single mothers with their child(ren) living with the parent(s) of the mother or with some other householder. It is feasible that the respondent believes the young mother and her children are only living with them temporarily and so does not enter them on the Census questionnaire roster. It is also easy to imagine a grandmother filling out the Census questionnaire may think of the young grandchild in her household as part of her daughter's family rather than part of her family.
A rapidly growing type of living arrangement for people (percentage-wise) is cohabiting households. Since cohabiting couples reflect living arrangements that are relatively unstable (compared to married-couple families) and the relationships among adults and children are different from those in a nuclear family, it would not be surprising if a disproportionately high share of people in cohabiting households were not being reported in the Census.
Newborns may be particularly likely to be living in complex households. A recent report from the Census Bureau (Monte and Ellis 2014, p. 2) found "more than one in five women with a birth in the past 12 months reported at the time of the survey that they were living in someone else's home". Another analysis (Gooding 2008) shows that 13% of mothers were not co-residing with their biological child under age 1, and rates are higher for Blacks and Hispanics, where the net undercount of people is also higher. This may help explain the high omissions rate for young children (O'Hare 2015).
Large and Complex Households
People living in large households may be missed for many different reasons. First, large households are often complex households and people may be left off the questionnaire for reasons described in the previous section. Martin (1999, 2007) shows that being closely related to the respondent in a survey greatly decreases ambiguity related to residential attachment and complex households are more likely to have occupants who are not closely linked to the householder. Examination of survey data taken prior to the 2010 Census shows that having more than two people in the household lowers the likelihood of respondents saying they will participate in the 2010 Census independent of other factors (Walejko et al. 2011). Data from the 2010 Census show that 84% of 2-person households mailed back their Census questionnaire compared to only 72% of 7-person households (Letourneau 2012).
Larger households are likely to contain children and child care demands may interfere with completing the Census questionnaire. This is a point made by Hillygus and colleagues (2006, p. 103) who note, Respondents who are married with children have a lower mail-back rate (83 percent) than those who are married without children (90 percent), suggesting that the time demands of child care work against taking on this particular duty.
In a poll conducted just prior to the 2010 Census (Pew Research Center 2010) about a third of those who said they were not planning to participate in the 2010 Census cited "too busy or not enough time" as the reason.
Confusion About What Types of People Should Be Included in the Census
Another reason people are missed in the Census is because respondents may believe the Census Bureau does not want some categories of people included in the Census. In a series of short surveys by the Census Bureau (Nichols et al. 2014a, b, c) respondents were asked, "What information do you think the Census typically collects every 10 years?" and were offered several choices. The percentage who thought the Census Bureau collects "Names of children living at your address" was 7-9 percentage points lower than the percentage who thought the Census Bureau collects, "Names of adults living at your address". While this question asks about names rather than about information on individuals, it suggests that some people think the Census does not request information on children.
A recent report (Vargas 2018) based on a nationwide survey of Latinos conducted by the National Association of Latino Elected Officials found that 15% of respondents who had a child under age 5 in the home would not count them in the Census or do not know if they would count them. These attitudes help explain why young children have a higher omissions rate than any other age group in the 2010 Census (O'Hare 2015). Also, in their qualitative study of 2010 Census respondents Schwede and Terry (2013) indicated many respondents do not believe the Census Bureau (or the federal government) wants children included in the Census count.
Young children are not the only group some people think are not supposed to be included in the Census. Data from the Census Barriers Attitudes and Motivators Survey done in 2008, indicates only 76% of the people interviewed think the Census Bureau wants non-citizens included in the Census (U.S. Census Bureau 2009, p. 88). In a similar study in 2018 (U.S. Census Bureau 2018b) only 55% of respondents were sure that noncitizens were supposed to be counted in the Census.
People Deliberately Concealed
There is evidence to support the idea that some people may be left off Census questionnaires on purpose. In some situations, there may be an effort to conceal an entire household, and in other situations the effort may be focused on concealing selected individual(s) within a household.
Some people may also be purposely left off census questionnaires based on rules about the housing unit where they are living. People could be left off because there are too many people living in the housing unit relative to rules about maximum capacity. If a housing unit is only supposed to have four people, by rule, but there are five people who regularly live there, one may be left off the census questionnaire because the respondent fears reporting five people in the unit might jeopardize continued occupancy.
Some people may be left off a census questionnaire because of the types of people who are allowed to live in a housing unit. For example, the report of the Census Bureau Task Force on the Undercount of People (U.S. Census Bureau 2014, p. 16) states one of the reasons people are left off forms is, "Respondents deliberately not mentioning kids for fear of some reprisals or bad outcomes from landlords, immigration agencies, social service agencies, etc". West and Robinson (1999, p. 7) also conclude, Listing some members of the households may have other negative consequences. For example, a respondent may fear that disclosure of certain members of the household will affect eligibility for social services, that people illegally in the country will be deported, or that the whereabouts of a child in hiding from a custodial parent will be detected. Pitkin and Park (2005) also mention "systematic concealment" as a potential reason children are undercounted in the Census. Concealment may be driven by lack of trust in the Census Bureau and/or the federal government. Despite assurances from the Census Bureau about the confidentiality of responses to the Census (Congressional Research Service 2018), many people believe that data given to the Census Bureau may be shared with other government agencies. A poll (Pew Research Center 2010, p. 5) conducted just prior to the 2010 Census indicated that among those who said they were not planning to participate in the 2010 Census, 18% cited distrust of government and 8% cited privacy concerns. In the 2010 Census Barriers Attitudes and Motivators study (U.S. Census Bureau 2011) researchers classified 10% of the population as cynical and 14% as suspicious of the Census Bureau.
Census Bureau staff report a growing number of respondents are refusing to cooperate because of fear (Meyers and Goerman 2018; U.S. Census Bureau 2017a, b). There is growing distrust of the federal government and for most people the Census Bureau is seen as another branch of the federal government. Abowd (2018) indicates that fear among the undocumented population is also likely to have an impact on many of the people who are in the country legally because they live in a household where undocumented people are living. One recent report concludes (Alsan and Yang 2018, p. i), "Though not at personal risk of deportation, Hispanic citizens may fear their participation could expose non-citizens in their network to immigration authorities". This report was focused on participation in safety net programs, but I think it applies to the Census as well.
Such distrust affects certain populations more than others. Immigrants are one population where concealment may be common because of fear of the federal government. The 2016 ACS (Table S0501) shows there are 43.7 million foreign-born residents in the U.S. and 22 million noncitizens. The Pew Research Center (2018) reports that there were about 11 million unauthorized immigrants in the U.S. in 2015. The estimate from the Pew Research Center is similar to the estimate for January 2014 from Homeland Security (2018) of between 10.8 million and 12.1 million. This impacts not only noncitizens and undocumented immigrants but the people living in a household with them. There are 6.4 million children living in a household with at least one undocumented immigrant. Abowd (2018, p. 6) states, "From the 2016 ACS we estimate that 9.8% of all households contain at least one noncitizen". Almost all children less than 5 years old are citizens but 20% of children in this age range live with at least one noncitizen (Population Reference Bureau 2018).
Barriers Posed by Questionnaire Design
There are a couple of aspects of the design of the Census questionnaire that may contribute to omissions in the Census. Both are related to rostering, or listing all the people in the housing unit.
The first step in the Census-taking process is getting a respondent to list all the people in the household. This is called rostering. There is a lot of evidence indicating that the way rostering is done can impact who is included in the roster. For example, West and Robinson (1999, p. 6) conclude, "Coverage errors are likely to occur because the respondent has difficulty rostering his or her household". A recent paper by Battle and Bielick (2014) suggests that the inclusion of children may be particularly sensitive to the way rostering is done. An experiment by Tourangeau and colleagues (1997) found substantial variation in persons who were deemed to live at a given address with differing rostering instructions. Other researchers (Lin et al. 2004;Waller and Jones 2014) found rostering instructions very important in determining who is included in a given household. As stated earlier, the rules for whether someone should be listed as part of the household are not always clear to respondents. One perspective on the mismatch between the Census questionnaire and changing American families is provided by Jacobsen (2017) who contends, the Census Bureau's data collection methods have not kept pace with the rapidly changing American family.
It should be recognized that alternative ways of rostering households often have tradeoffs. Some rostering methods result in higher respondent burdens and rostering methods that impose a higher burden on respondents are likely to reduce response rates.
On the Mailout/Mailback Census questionnaire that was used in the 2010 Census there was only room for complete demographic information for six people in the household. There is limited room for the names and a few characteristics of the 7th through the 12th person. If more than 12 people lived in the household, the Census Bureau had to follow up to get information for these people. People living in the largest households (13 or more people) may be missed because there is not enough room on the questionnaire for everyone in the household to be listed and followup failed.
When incorrect or incomplete information was provided on the 2010 Census questionnaire, the Census Bureau followed up with the household to get complete information for those persons. But followup was often problematic, and the Census Bureau was only able to contact a little more than half the people in the 2010 Census followup operation (U.S. Census Bureau 2012d). The fact that followup was only done by telephone (not face to face) also hampered data collection. Flaws in the followup operation may result in an omission. Heavy use of the internet for data collection in the 2020 Census may help lessen this issue because followup will be immediate on the internet.
People Missed Because of Estimation and Processing Errors
In addition to whole households being missed and people being left off census questionnaires, some people may not be reflected in the Census count because of processing errors in the Census operations. In particular, the final census counts include many people who are imputed and a large number who are included by proxy responses. Imputations take several forms. The simplest form is item imputation. If a respondent leaves a census question (an item) blank, say race, the Census Bureau uses data from the person, the household, and the neighborhood to impute a race for the person. This might impact omissions figures for certain groups. For example, if a person who is really Black had their race imputed as White, the omissions for Blacks would look higher and the omissions for Whites would look lower.
A bigger issue in terms of omissions is whole-person imputation. In the 2010 Census there were about 6 million persons imputed (U.S. Census Bureau 2012c, Table 9). This also takes several forms. If the Census Bureau enumerator has failed to find someone at home after repeated attempts, they may ask a proxy respondent such as a landlord or neighbor about the people living in the housing unit. It is not difficult to imagine a neighbor or landlord saying there are three people living in the housing unit when in fact there are four, which results in an omission. Proxy responses provide low-quality data compared to self-reporting. The U.S. Census Bureau (2012b, Table 12) reports that 93% of responses from household members were correct compared to only 70% of those from proxies. The imputation methods of the Census Bureau have been refined over many censuses, but still may result in errors.
If, after repeated visits, the Census Bureau enumerator finds no one at home, no proxy respondent can be found, and the occupancy status of a housing unit is unclear, the Census Bureau imputes an occupancy status (occupied or vacant). If the imputed occupancy status is "occupied" the Census Bureau then imputes the number and characteristics of people for the housing unit. It is easy to imagine a housing unit that really has four people living there, only gets three people imputed, and thus one person is omitted. It is also possible that several squatters may be living in a building that is imputed as vacant, so they are all missed in the census.
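The risk described here, an occupied unit imputed with too few people, can be illustrated with a toy hot-deck-style sketch. This is purely illustrative: the function name and the borrow-from-neighbors rule are stand-ins for exposition, not the Census Bureau's actual imputation procedure.

```python
from collections import Counter

def impute_household(neighbors):
    """Toy hot-deck-style sketch: borrow the most common occupancy
    status and household size from resolved neighboring units.
    Illustrative only -- not the Census Bureau's actual procedure."""
    status = Counter(n["status"] for n in neighbors).most_common(1)[0][0]
    if status == "vacant":
        return {"status": "vacant", "size": 0}
    sizes = (n["size"] for n in neighbors if n["status"] == "occupied")
    size = Counter(sizes).most_common(1)[0][0]
    return {"status": "occupied", "size": size}

# Resolved neighboring units used as imputation donors.
neighbors = [
    {"status": "occupied", "size": 3},
    {"status": "occupied", "size": 4},
    {"status": "occupied", "size": 3},
    {"status": "vacant", "size": 0},
]
imputed = impute_household(neighbors)
print(imputed)  # a unit that really houses 4 people would be imputed as 3
```

If the unresolved unit actually holds four people, the imputed count of three produces exactly one omission; if it actually holds squatters but the donors are mostly vacant, everyone in it is missed.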
Summary
Several potential explanations for census omissions were examined and statistical data were provided for most potential explanations. While there is more support for some potential explanations than for others, no single reason or theory seems completely compelling. Perhaps the most fundamental conclusion from the material reviewed in this Chapter is that there are many different reasons why people are missed in the Census.
Some people are missed because the housing unit where they live is not included in the Census and others are missed because they are not captured in the Census even though others in the housing unit where they live are. Other findings include:
• People may be missed because they are more likely to live in complex or nontraditional households where their status in the household is unclear.
• Some people are missed because respondents are confused about who should be included on their Census questionnaire.
• People may be missed because respondents want to conceal them from the government, in part because of fear of reprisals or negative outcomes.
• Some aspects of the census-taking process (like the construction of the questionnaire) result in some people being missed.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Metabonomics studies on serum and urine of patients with breast cancer using 1H-NMR spectroscopy
The aim of this study was to describe a metabolomic study of breast cancer using 1H-NMR combined with bioinformatics analysis. 1H-NMR spectroscopy combined with multivariate pattern recognition analysis was used to cluster the groups (serum and urine samples from breast cancer patients and healthy controls) and establish a breast-cancer-specific metabolite phenotype. Orthogonal partial least-squares discriminant analysis (OPLS-DA) was capable of distinguishing serum and urine samples from breast cancer patients and healthy controls and establishing a breast-cancer-specific metabolite profile. A total of 9 metabolites in serum and 3 metabolites in urine differed significantly in concentration between breast cancer patients and healthy controls. Serum samples from breast cancer patients were characterized by decreased concentrations of choline, glucose, histidine, valine, lysine, acetate, tyrosine and glutamic acid, accompanied by increased concentrations of lipid relative to healthy controls. In urine samples, the levels of phenylacetylglycine and guanidoacetate were significantly lower, while the level of citrate was significantly higher in breast cancer patients relative to healthy controls. In conclusion, this study reveals the metabolic profile of serum and urine from breast cancer patients. NMR-based metabolomics has the potential to be developed into a novel clinical tool for diagnosis or therapeutic monitoring of breast cancer. However, because of the limitations of the methods and techniques, further research and verification are needed.
INTRODUCTION
Breast cancer is one of the most common cancers and the fifth leading cause of cancer-related deaths among women worldwide [1]. The clinical diagnostic methods for breast cancer include physical examinations, mammography and histopathology. In order to avoid poor prognosis and increase long-term survival, it is important to make an accurate diagnosis as early as possible. However, a major factor that contributes to poor prognosis is the fact that diagnosis is often delayed due to limitations in the conventional diagnostic screening methods [2]. Although several tissue biomarkers have been identified, biopsy cannot be frequently repeated. Therefore, new sensitive and noninvasive biomarkers are still urgently needed to improve early detection rates of breast cancer.
Metabolomics, as the downstream of transcriptomics, genomics, and proteomics, is an emerging research field for the detection, identification and quantification of low-molecular-weight metabolites that are involved in the metabolism of an organism at a specified time under specific environmental conditions [3]. Metabonomics can provide complementary information that cannot be obtained directly from the genotype, gene-expression profiles, or even the proteome of an organism [4]. In addition, it can identify early signals/biomarkers of cellular abnormalities that occur before changes in the gross phenotype [5]. Currently, metabolomics has been widely used in biomarker detection, disease diagnosis and evaluation of treatment and prognosis [6].
Among the various techniques of metabolic profiling, nuclear magnetic resonance (NMR) spectroscopy has been widely applied in metabolite identification and quantification as a reproducible, non-targeted and non-destructive method that requires minimal sample preparation [7]. Proton nuclear magnetic resonance (1H NMR) spectroscopy is especially sensitive because protons are present in virtually all metabolites [8].
Currently, cancer metabolomics is gradually becoming a hot topic. Metabolomics methods can be used to monitor changes of specific metabolism in the process of tumor development, to predict tumor progression, to monitor tumor response to intervention, to determine a characteristic metabolic pattern for cancer patients, to identify tumor-associated biomarkers, and to provide help for early diagnosis, prognosis evaluation and efficacy analysis for cancer patients. Metabolomics has been successfully applied to biomarker screening for many cancers, such as bladder [10], colon [11], lung [12] and prostate cancers [13]. However, metabolomics studies on breast cancer are rarely reported. Hence, serum and urine metabolomic profiles from breast cancer patients and healthy controls were obtained using 1H-NMR spectroscopy coupled with pattern recognition. The aim is to identify potential biomarkers for early diagnosis of breast cancer and to enhance understanding of the pathobiology of the disease.
1H-NMR spectra of serum and urine
1H-NMR CPMG spectra of serum and urine samples of groups A and B are depicted in Figure 1. More than 30 different metabolites were identified and quantified according to their chemical shifts and signal multiplicity. The main differing peaks between the two groups are concentrated in the areas of 0.5-5.5 and 6.5-9.0 ppm for serum samples and 0.5-9.0 ppm for urine samples (Figure 1 and Figure 2). To conduct an overview of the discrimination between groups A and B, further analysis was applied.
PCA
PCA was first carried out and the score plot obtained is shown in Figure 3. As can be seen from Figure 3, the serum and urine samples of groups A and B both show a tendency to separate, and specific biological information will be analyzed further.
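As context for this unsupervised step, PCA finds the directions of maximum variance by eigendecomposition of the data's covariance matrix. Below is a minimal two-variable sketch on made-up numbers; the study's NMR spectra of course involve far more variables, for which library routines rather than closed-form 2x2 eigenvalues would be used.

```python
import math

# Made-up two-variable data; nearly collinear, so PC1 should dominate.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Eigenvalues of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc
explained = lam1 / (lam1 + lam2)   # variance explained by PC1
print(f"PC1 explains {explained:.1%} of the variance")
```

When a few components capture most of the variance, score plots like Figure 3 can meaningfully visualize group separation in two dimensions.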
PLS-DA
Supervised analysis techniques were then used, including PLS-DA and OPLS-DA. Based on the PLS-DA models for serum samples, group A and group B were discriminated with an R²X of 0.39 and a Q² of 0.75 (Figure 4), while the R²X and Q² in the PLS-DA model for urine samples of the two groups were 0.37 and 0.64, respectively (Figure 5). The models for serum and urine samples of groups A and B were both valid, indicating that there were significant differences in the metabolome of serum and urine samples between the two groups.
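For readers unfamiliar with the Q² statistic quoted here: it is a cross-validated analogue of R², Q² = 1 − PRESS/TSS, where PRESS is the prediction error sum of squares from models refit with each sample held out. The sketch below illustrates the calculation with leave-one-out cross-validation of a simple one-variable least-squares fit on made-up data; the study's actual models are multivariate PLS-DA, but the Q² formula is the same.

```python
# Made-up, nearly linear data for illustrating Q2 = 1 - PRESS/TSS.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0]

def fit_line(x, y):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

press = 0.0
for i in range(len(xs)):
    # Refit with sample i held out, then predict the held-out sample.
    xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
    a, b = fit_line(xt, yt)
    press += (ys[i] - (a + b * xs[i])) ** 2

my = sum(ys) / len(ys)
tss = sum((yi - my) ** 2 for yi in ys)   # total sum of squares
q2 = 1 - press / tss
print(f"Q2 = {q2:.3f}")
```

A Q² near 1 indicates good predictive ability under cross-validation, which is why the Q² values of 0.75 and 0.64 above support the validity of the models.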
OPLS-DA
The OPLS-DA model was constructed subsequent to the PLS-DA analysis using the first principal component and the second orthogonal component, as shown in Figure 6 and Figure 7. The quality of the models was described by the cross-validation parameters R²X and Q², which represent the total variation for the X matrix; the values are tabulated in Table 1. In the OPLS-DA score plots of serum samples, a significant biochemical distinction between groups A and B was identified with R²X = 0.39 and Q² = 0.75 (Figure 6). In addition, some degree of separation for urine samples between groups A and B could also be visualized with R²X = 0.37 and Q² = 0.58 (Figure 7).
Metabolites statistics for serum and urine
Metabolites with statistical significance were further summarized by analyzing the correlation coefficients derived from OPLS-DA. The correlation coefficient is then compared with the cut-off value table to obtain the metabolites that cause differences between groups. Nine metabolites were detected at significantly different levels in serum samples between groups A and B, as shown in Table 2. Compared with group B, the levels of choline, glucose, histidine, valine, lysine, acetate, tyrosine and glutamic acid were significantly lower, while the level of lipid was significantly higher in serum samples of group A. In urine samples, the levels of phenylacetylglycine and guanidoacetate were significantly lower, while the level of citrate was significantly higher in group A relative to group B (Table 3).
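The screening step described above, comparing a correlation coefficient against a tabulated cut-off, can be sketched with made-up intensities for a single metabolite. The eight intensity values and the 0.707 cut-off (the critical Pearson r for n = 8, df = 6, at p < 0.05) are illustrative only, not values from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up data: class labels (1 = patient, 0 = control) against signal
# intensity for one metabolite across 8 subjects (lower in patients).
labels    = [1, 1, 1, 1, 0, 0, 0, 0]
intensity = [2.1, 2.4, 2.0, 2.3, 3.1, 3.4, 3.0, 3.3]

r = pearson_r(labels, intensity)
critical = 0.707  # critical Pearson r for df = 6 at p < 0.05
print(f"|r| = {abs(r):.3f}, significant: {abs(r) > critical}")
```

A negative r exceeding the cut-off in magnitude corresponds to a metabolite significantly decreased in patients, as with the serum amino acids reported in Table 2.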
DISCUSSION
As a heterogeneous disease, every kind of cancer has its own metabolic characteristics [14]. As is well known, the metabolism of malignant tumor tissue is more active than that of normal tissue. Due to various factors inside and outside the body, the synthetic activity of DNA and RNA increases, protein anabolism and catabolism are both enhanced, and anabolism outpaces catabolism [15]. Even the decomposition products of normal tissue proteins are used to synthesize the nutrients needed by tumor tissue. Therefore, the occurrence and development of tumors are closely related to metabolic changes in the body [16]. Although metabolomics studies have been widely used in a variety of tumors, there are few reports using metabonomics to study biomarkers of breast cancer [17]. This study used an NMR-based metabonomics approach to develop a metabolic profile of patients with breast cancer. We demonstrate distinct differences in the spectra acquired from breast cancer patients and healthy controls. Based on statistical models, the technique has the potential to serve as a diagnostic tool for breast cancer and to identify metabolic features of the disease. In the present study, more than 30 metabolites were detected in the serum and urine samples of breast cancer patients and healthy controls based on the results of 1H-NMR. Nine metabolites were detected at significantly different levels in serum samples, and three metabolites in urine samples, between breast cancer patients and healthy controls.
Amino acid metabolism is complex and involves a series of metabolites. Amino acids are both the raw materials of protein synthesis and the products of catabolism in vivo, so changes in their composition and concentration can reflect the metabolic status of patients. Amino acid metabolism in cancer patients has two characteristic features [18,19]: (1) tumor cells take up amino acids faster than normal cells, depleting certain amino acids in the host; (2) to meet the needs of growth and metabolism, tumor tissue acts as a nitrogen trap, actively competing with the host for nitrogen compounds and constantly ingesting essential and non-essential amino acids for cell proliferation. In this study, the serum contents of five amino acids differed significantly between breast cancer patients and healthy controls: the levels of histidine, valine, lysine, tyrosine and glutamate were significantly lower in patients. Among these, valine and lysine are essential amino acids, while histidine is a semi-essential amino acid.
In addition, the decrease of amino acids in cancer patients is closely related to malnutrition. It is reported that 40% to 80% of cancer patients suffer from malnutrition, and that 15% of patients lose more than 10% of their body weight within 6 months of diagnosis [20]. In this study, the average BMI of the 11 breast cancer patients was 20.41 ± 1.35, lower than that of the normal population. Malnutrition decreases the tolerance of cancer patients to surgery, chemotherapy, radiotherapy and other anti-tumor treatments and increases the incidence of adverse reactions [21]. Therefore, doctors should pay attention to amino acid supplementation for tumor patients.
It has been reported that amino acid metabolism differs among cancer types [22]: serum amino acid levels in esophageal cancer, osteosarcoma, lymphoma and soft tissue sarcoma are inconsistent. In vitro experiments showed that the consumption of arginine, threonine, taurine and glutamine by liver cancer cells increased significantly [23]. Ye et al. [24] confirmed that serum concentrations of tyrosine, glycine, glutamine, alanine, valine and isoleucine in cervical cancer patients were significantly lower than those in healthy controls. Our results showed that the consumption of histidine, valine, lysine, tyrosine and glutamate was significantly increased in breast cancer patients. Tumor stage may also affect amino acid levels: it was reported that the concentrations of tyrosine, methionine and phenylalanine in patients with hepatocellular carcinoma increase with tumor stage [25]. However, owing to the small sample size of this study, we did not analyze the relation between staging and amino acid metabolism.
This study also showed that serum levels of choline and glucose in breast cancer patients were significantly lower than those in healthy controls, which we speculate also reflects a high-consumption state. Serum lipid levels in breast cancer patients were higher than in healthy controls, indicating disordered lipid metabolism. In the urine samples, the contents of phenylacetylglycine and guanidoacetate in breast cancer patients were significantly lower, while the citrate content was significantly higher, than in healthy controls. All of these findings indicate that breast cancer patients are in a high-metabolism, high-consumption state.
In conclusion, this study illustrates the successful application of 1H-NMR spectroscopy-based metabolomics to investigating the metabolic changes in serum and urine of patients with breast cancer. Our results indicate significant dysregulation of metabolic pathways in these patients; specifically, breast cancer was associated with disordered metabolism of amino acids, lipids and organic acids.
Patients
This study was approved by the Ethics Committee of Shaanxi Provincial People's Hospital, and all participants provided written informed consent before participation. Eleven patients with a pathological diagnosis of breast cancer recruited between September 2015 and November 2015 formed group A, while 11 healthy volunteers from the physical examination center of our hospital during the same period formed group B. None of the participants suffered from other tumors, diabetes or cardiovascular diseases. The clinical information of the participants is summarized in Table 1; age and BMI did not differ significantly between the two groups (both P > 0.05).
Table 3 footnote: positive and negative signs of the correlation coefficients indicate positive and negative correlation in the concentrations, respectively. |r| > 0.576 was used as the cutoff value for statistical significance at P = 0.05 and df (degrees of freedom) = 10.
Sample collection and storage
After fasting and avoiding alcohol and medicine for 12 hours, each participant provided serum and urine samples in the early morning, before undergoing any treatment. Venous blood samples were collected into plastic serum tubes (5 ml) and allowed to clot by standing the tubes vertically at room temperature for 60 min. Serum was obtained after centrifugation at 3000 rpm for 10 min, and samples were stored at −80°C until analysis. Morning urine from all participants was collected and immediately frozen at −80°C until analysis.
Specimen preparation for 1 H-NMR analysis
Serum and urine samples were thawed at room temperature and homogenized using a vortex mixer. For serum, 170 μl D2O and 30 μl phosphate buffer (PB) solution (600 mmol/L) were added to 400 μl serum. After centrifugation at 12000 rpm for 10 min at 4°C, 550 μl of the supernatant was transferred into 5-mm NMR tubes and stored at 4°C until analysis. For urine, 100 μl PB solution (600 mmol/L) containing TSP was added to 500 μl urine; after mixing, standing for 5 min at room temperature and centrifugation at 12000 rpm for 10 min at 4°C, 500 μl of the supernatant was dispensed into 5-mm NMR tubes for analysis.
1H-NMR analysis
All NMR data were recorded on a Varian Unity INOVA 600 MHz AVANCE II spectrometer equipped with a 5 mm triple-resonance inverse cryoprobe and a z-gradient system, operating at 599.92 MHz. The sample temperature was controlled at 25°C during measurement. Prior to data acquisition, tuning and matching of the probe head, followed by shimming and proton pulse calibration, were performed automatically for each sample. For each sample, a 1H Carr-Purcell-Meiboom-Gill (CPMG) sequence (80 ms spin-lock, eliminating the broad resonance lines of high-molecular-weight compounds in the serum specimens) was applied as a transverse-relaxation-weighted experiment to filter out signals belonging to proteins and other macromolecules, and one-dimensional (1D) 1H NOESY (RD-90°-t1-90°-tm-90°-ACQ) spectra were then recorded. For each serum spectrum, 96 scans were accumulated with a 2.1 s relaxation delay, a spectral width of 8000 Hz, a 100 ms total echo time and a 1.0 s acquisition time; for urine samples, 64 scans were accumulated with a 2.1 s relaxation delay, a spectral width of 8384.9 Hz, a 100 ms total echo time and a 0.9541 s acquisition time.
1H-NMR spectral data processing
To reduce the complexity of the NMR data and facilitate pattern recognition, the raw NMR data were manually Fourier transformed using MestReNova V7.0 software before further processing. The 1H-NMR spectra of all samples were phase-adjusted and baseline-corrected using Topspin software V2.1. The serum spectra were referenced to the lactate bimodal resonance at 1.33 ppm, while the urine spectra were referenced to TSP at 0.0 ppm. The spectral range from 0.5 to 9.0 ppm was then divided into 1700 integral segments of 0.005 ppm each using AMIX software V3.9.11. The region of 4.2–6.5 ppm was removed to eliminate the influence of the water and urea peaks. In addition, the integrated data were normalized to the total spectral area before pattern recognition analysis to eliminate dilution and bulk mass differences among samples.
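The bucketing-and-normalization step described above (0.005 ppm segments over 0.5–9.0 ppm, exclusion of the 4.2–6.5 ppm water/urea region, total-area scaling) can be sketched in NumPy. This is an illustrative re-implementation on a toy flat spectrum, not the authors' AMIX pipeline:

```python
import numpy as np

def bucket_spectrum(ppm, intensity, lo=0.5, hi=9.0, width=0.005,
                    exclude=(4.2, 6.5)):
    """Integrate a 1D spectrum into fixed-width ppm buckets, drop the
    water/urea region, and total-area normalize, as described above."""
    n = int(round((hi - lo) / width))          # 1700 buckets for 0.5-9.0 ppm
    edges = lo + width * np.arange(n + 1)
    idx = np.digitize(ppm, edges) - 1          # bucket index of each point
    buckets = np.zeros(n)
    ok = (idx >= 0) & (idx < n)
    np.add.at(buckets, idx[ok], intensity[ok])  # sum intensity per bucket
    centers = edges[:-1] + width / 2
    keep = ~((centers >= exclude[0]) & (centers <= exclude[1]))
    buckets, centers = buckets[keep], centers[keep]
    return centers, buckets / buckets.sum()    # total-area normalization

# toy flat "spectrum" just to exercise the function
ppm = np.linspace(0.5, 9.0, 10000)
centers, buckets = bucket_spectrum(ppm, np.ones_like(ppm))
```

With these parameters the function produces the 1700 segments mentioned in the text minus the excluded water/urea buckets, each spectrum summing to 1 after normalization.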
Multivariate statistics
The standardized data were imported into the SIMCA-P+ package for multivariate analysis, including principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). The first and second principal components were used for PCA, PLS-DA and orthogonal partial least-squares discriminant analysis (OPLS-DA). The results of PCA were displayed as score plots to observe the main clustering of samples and abnormal outliers. PCA and PLS-DA were then repeated for further verification between the comparison groups. PLS-DA data were standardized by unit-variance scaling, the results were likewise displayed as score plots, and the accuracy of the model was verified by cross-validation and permutation testing. A 20-fold cross-validation was employed to obtain Q2 and R2 values, which represent the predictive ability of the model and the explained variance, respectively. To further validate the quality of the PLS-DA model, permutation tests consisting of randomly permuting the class membership over 200 iterations were carried out. The verified model was further analyzed by OPLS-DA, displayed as score plots, from which the significantly changed metabolites were extracted. The loading plots show the significantly changed metabolites and their contributions (the correlation coefficient r represents the contribution of each metabolite). The sensitivity, specificity and classification rate (percentage of samples correctly classified) of the OPLS-DA models were then determined. Pearson correlation coefficients were used to determine the significantly changed metabolites and to give them a reasonable biological interpretation.
Figure 3: PCA score plots based on 1H NMR spectra of serum and urine samples of groups A and B. In both serum and urine, samples from groups A and B show a tendency to separate.
Figure 4: PLS-DA score plots (left panel) derived from 1H NMR spectra of serum samples from groups A and B, and cross-validation by permutation test (right panel). Note: group A: black box (■); group B: blue triangle (▲).
Figure 5: PLS-DA score plots (left panel) derived from 1H NMR spectra of urine samples from groups A and B, and cross-validation by permutation test (right panel). Note: group A: black box (■); group B: blue triangle (▲).
Figure 6: OPLS-DA score plots (left panel) and corresponding coefficient loading plots (right panel) for serum samples of groups A and B. The color map shows the significance of metabolite variations between the two classes; peaks in the positive direction indicate metabolites more abundant in the group lying in the positive direction of the first principal component. Note: group A: black box (■); group B: blue triangle (▲).
Figure 7: OPLS-DA score plots (left panel) and corresponding coefficient loading plots (right panel) for urine samples of groups A and B. Note: group A: black box (■); group B: blue triangle (▲).
Table 2: OPLS-DA coefficients derived from the NMR data of different metabolites in serum
Positive and negative signs of the correlation coefficients indicate positive and negative correlation in the concentrations, respectively. |r| > 0.602 was used as the cutoff value for statistical significance at P = 0.05 and df (degrees of freedom) = 9.
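The cutoff values quoted in the table footnotes (0.602 at df = 9 for serum, 0.576 at df = 10 for urine) follow from the standard critical value of Pearson's r at two-tailed P = 0.05, r = t / sqrt(t² + df). A short check with SciPy:

```python
from math import sqrt
from scipy.stats import t

def r_cutoff(df, alpha=0.05):
    """Critical |r| for a Pearson correlation at two-tailed alpha:
    r = t_crit / sqrt(t_crit**2 + df)."""
    t_crit = t.ppf(1 - alpha / 2, df)
    return t_crit / sqrt(t_crit ** 2 + df)

serum_cutoff = round(r_cutoff(9), 3)   # df = 9  -> 0.602 (serum table)
urine_cutoff = round(r_cutoff(10), 3)  # df = 10 -> 0.576 (urine table)
```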
Myelomonocytic Skewing In Vitro Discriminates Subgroups of Patients with Myelofibrosis with A Different Phenotype, A Different Mutational Profile and Different Prognosis
Normal hematopoietic function is maintained by a well-controlled balance of myelomonocytic, megaerythroid and lymphoid progenitor cell populations which may be skewed during pathologic conditions. Using semisolid in vitro cultures supporting the growth of myelomonocytic (CFU-GM) and erythroid (BFU-E) colonies, we investigated skewed differentiation towards the myelomonocytic over erythroid commitment in 81 patients with myelofibrosis (MF). MF patients had significantly increased numbers of circulating CFU-GM and BFU-E. Myelomonocytic skewing as indicated by a CFU-GM/BFU-E ratio ≥ 1 was found in 26/81 (32%) MF patients as compared to 1/98 (1%) in normal individuals. Patients with myelomonocytic skewing as compared to patients without skewing had higher white blood cell and blast cell counts, more frequent leukoerythroblastic features, but lower hemoglobin levels and platelet counts. The presence of myelomonocytic skewing was associated with a higher frequency of additional mutations, particularly in genes of the epigenetic and/or splicing machinery, and a significantly shorter survival (46 vs. 138 mo, p < 0.001). The results of this study show that the in vitro detection of myelomonocytic skewing can discriminate subgroups of patients with MF with a different phenotype, a different mutational profile and a different prognosis. Our findings may be important for the understanding and management of MF.
Introduction
Normal hematopoietic function is maintained by a well-controlled balance of myelomonocytic, megaerythroid and lymphoid progenitor cell populations. This balance may be skewed during pathologic conditions such as hematological malignancies, infections and autoimmunity [1][2][3][4][5][6][7]. Moreover, skewed hematopoiesis can be found in aged hematopoiesis [8]. Since semisolid in vitro cultures from peripheral blood mononuclear cells (PBMNC) of normal individuals usually contain a higher concentration of erythroid colonies (BFU-E), as compared to myelomonocytic colony-forming units (CFU-GM), this test may be useful for investigating skewed differentiation towards the myelomonocytic over erythroid commitment in patients [9]. In addition to genomic analyses, in vitro cultures may provide functional information and may help to more comprehensively characterize disturbed hematopoiesis in clonal hematologic disorders [9].
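The skewing criterion used throughout this paper (a CFU-GM/BFU-E ratio ≥ 1, i.e. myelomonocytic colonies equal to or outnumbering erythroid colonies) is simple enough to state as code. The colony counts below are hypothetical, and the handling of zero BFU-E growth is our own convention, not stated in the text:

```python
def myelomonocytic_skewing(cfu_gm, bfu_e):
    """True if myelomonocytic colonies equal or outnumber erythroid ones
    (CFU-GM/BFU-E ratio >= 1), the skewing criterion used in the text."""
    if bfu_e == 0:
        # assumption: treat absent BFU-E growth as skewed if any CFU-GM grew
        return cfu_gm > 0
    return cfu_gm / bfu_e >= 1

# hypothetical colony counts from semisolid PBMNC cultures
patients = {"pt1": (120, 40), "pt2": (30, 90), "pt3": (55, 55)}
skewed = {p: myelomonocytic_skewing(g, e) for p, (g, e) in patients.items()}
```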
Patients with myelofibrosis (MF) have aberrant hematopoiesis due to molecular aberrations in a number of genes, including JAK2, CALR and MPL [10]. Moreover, analysis of the mutational landscapes of patients has shown that additional mutations can be found in a subset of patients and some of them, such as ASXL1, SRSF2, EZH2, IDH1/2 or U2AF1, predicted an inferior outcome [11,12]. The role of myelomonocytic skewing in patients with myelofibrosis has not been studied so far. Our aim was to study the role of myelomonocytic skewing in patients with myelofibrosis.
Phenotype of MF Patients with and without Myelomonocytic Skewing
As shown in Table 1, there were pronounced differences in the phenotype of patients with and without myelomonocytic skewing. Patients with myelomonocytic skewing had higher white blood cell (WBC) and blast cell counts in PB and more frequent leukoerythroblastic features, but lower hemoglobin levels and platelet counts, as compared to patients without skewing. The absolute monocyte count (AMC) was not statistically different between the groups, and the proportion of patients with an AMC ≥10⁹/L was 11% (6/55) in patients without skewing as compared to 23% (6/26) in patients with skewing (p = 0.150). Figure 3 shows the survival curves of the 4 prognostic categories that were shown to predict survival in primary myelofibrosis by the International Working Group for Myeloproliferative Neoplasms Research and Treatment [13]. The Dynamic International Prognostic Scoring System (DIPSS) significantly discriminated four risk categories in our study, suggesting that our total patient cohort was representative regarding prognosis. When the two subgroups were analyzed separately, the proportion of patients within the intermediate-2 and high-risk groups was significantly higher in patients with myelomonocytic skewing (58%) than in patients without a CFU-GM/BFU-E ratio ≥1 (16%).
Mutational Profile of MF Patients with and without Myelomonocytic Skewing
Information on driver mutations determined by polymerase chain reaction (PCR) was available in 81 patients (Table S1). There was no significant difference between MF patients with and without myelomonocytic skewing with regard to the distribution of the MF driver mutations JAK2 (58% vs. 58%), CALR (19% vs. 27%) and MPL (8% vs. 2%) (Table S1). Moreover, we found no statistically significant differences in overall survival (OS) between the different driver mutations, either in the whole cohort or in patients with or without skewing. However, as shown in Figure 4, the frequency of additional mutations determined by NGS was higher in patients with myelomonocytic skewing (11/15, 73%) than in patients without skewing (6/26, 23%; p = 0.002). In particular, genes of the splicing and/or epigenetic machinery were more frequently mutated in patients with than in patients without myelomonocytic skewing.
Survival of MF Patients with and without Myelomonocytic Skewing
As shown in Figure 5, the presence of myelomonocytic skewing was associated with a significantly shorter survival. The median survival of patients with a CFU-GM/BFU-E ratio ≥1 was 46 months vs. 138 months in patients with a ratio <1 (p < 0.001). Table 2 shows the prognostic power of myelomonocytic skewing and of the established prognostic factors included in the DIPSS score. In a multivariate Cox regression analysis of overall survival, myelomonocytic skewing remained an independent prognostic factor (Table S2). Furthermore, an AMC ≥10⁹/L was a significant predictor of unfavorable outcome (Figure S1a), whereas the proportion of CD14-positive cells (>5%) in bone marrow (BM) had no prognostic impact (Figure S1b).
Table 2. Prognostic power of individual factors (columns: Factor, Factor Present, Factor Absent, Hazard Ratio, p-Value).
Discussion
It is well established in the literature that the number of hematopoietic progenitor cells is increased in the peripheral blood of patients with myeloproliferative neoplasms (MPN), and levels of circulating hematopoietic progenitor cells are particularly high in MF. In 1973, Paul Chervenick described increased numbers of myeloid colony-forming cells in the peripheral blood of MF patients [14]. This finding was subsequently extended by a number of studies, including ours, which demonstrated elevated numbers of circulating erythroid, megakaryocytic and pluripotent progenitor cells in these patients [15][16][17][18]. The levels of circulating colony-forming cells were also significantly higher in MF than in normal controls in our study: the median CFU-GM levels were approximately eight times higher, and the median BFU-E levels approximately three times higher, among MF patients than in controls. It is important to note that a subgroup of patients had rather low BFU-E numbers but markedly increased CFU-GM levels, resulting in a CFU-GM/BFU-E ratio of one or more. Since the predominance of myelopoiesis over erythropoiesis may indicate basic changes in the biology of this hematologic disorder, this observation prompted us to ask whether patients with myelomonocytic skewing over the erythroid lineage were phenotypically, genotypically and prognostically different from patients without skewing.
The phenomenon of skewed myelopoiesis over erythropoiesis has been described in malignant and nonmalignant conditions in mice and humans. Knockdown of TET2 in cord blood CD34+ cells skews progenitor differentiation toward the granulomonocytic lineage at the expense of the lymphoid and erythroid lineages [2]. Deletion of Tet2 in mice leads to an increased hematopoietic repopulating capacity with altered differentiation, skewing towards the monocytic/granulocytic lineages [1]. Other epigenetic regulators such as ASXL1 have also been demonstrated to affect skewing of hematopoiesis: Asxl1(−/−) mice had a reduced hematopoietic stem cell (HSC) pool, and Asxl1(−/−) HSCs exhibited decreased hematopoietic repopulating capacity, with skewed cell differentiation favoring the granulocytic lineage [3]. Furthermore, the splicing factors SRSF2 and U2AF1 seem to impact skewing: mutations in both SRSF2 and U2AF1 cause abnormal differentiation by skewing granulo-monocytic differentiation towards monocytes [4]. On the other hand, nonmalignant conditions may also contribute to in vitro myelomonocytic skewing. It is well established that deregulated NF-κB activation contributes to the pathogenic processes of various inflammatory diseases, and there is evidence from preclinical models that deregulation of NF-κB signaling promotes skewing of myelopoiesis over erythropoiesis. In mice, loss of IKKβ skews differentiation towards myeloid over erythroid commitment and increases myeloid progenitor self-renewal and functional long-term hematopoietic stem cells [6]. Moreover, Foxp3 deficiency in mice leads to exacerbated NF-κB activity and subsequent cytokine-mediated hyperproliferation of myeloid precursors [5].
The fact that the myeloid and erythroid progenitor cell compartment can be directly compared using in vitro culture of PBMNC makes this method particularly attractive to investigate this phenomenon. In our hands, we rarely find myelomonocytic skewing in PB from healthy individuals, but more frequently in patients with myeloid disorders. Particularly in myelodysplastic syndromes and chronic myelomonocytic leukemia, this phenomenon can be found in a high percentage of patients [19,20].
Both entities are disorders in which molecular aberrations of the epigenetic machinery, including ASXL1 and TET2, seem to play a major pathophysiological role [21,22]. Interestingly, functional knockdown of TET2 in CD34+/CD38− cells caused a granulomonocytic expansion in vitro that was not observed in CD34+/CD38+ cells, suggesting that early dominance of the TET2-mutated clone in the immature CD34+/CD38− compartment may participate in the granulomonocytic skewing that defines CMML [23]. Aging is characterized by clonal expansion of myeloid-biased hematopoietic stem cells, and recurrent somatic TET2 mutations have been detected in normal elderly individuals with clonal hematopoiesis [24]. Additional mutations in MF, including ASXL1, SRSF2, EZH2, IDH1/2 and U2AF1, have been shown to shorten the survival of patients with MF [11]. All of these mutations were found in our MF patients with in vitro myelomonocytic skewing but were rare in patients without skewing; however, we are aware that NGS data were available only in a limited number of patients and that these data have to be confirmed in a larger cohort of MF patients. On the other hand, chronic inflammation seems to be an important trigger and driver of clonal evolution in MF [25]. One of the phenotypic differences between MF patients with and without skewing was the higher proportion of intermediate-2 and high-risk patients according to the DIPSS score in the group with a CFU-GM/BFU-E ratio ≥1 [13]. It is intriguing that the features of age, leukocytosis, anemia and constitutional symptoms included in the DIPSS may be considered either hematological consequences and/or promoting factors of myelomonocytic skewing over erythropoiesis.
Comparing the hematologic phenotype of our patients, we found that patients with myelomonocytic skewing had higher WBC and blast cell counts but lower hemoglobin levels and platelet counts. The absolute monocyte count, which may indicate myelomonocytic skewing in the blood, was not statistically different between the groups, but the proportion of patients with an AMC ≥10⁹/L was twice as high in patients with skewing as in patients without (23% vs. 11%). The absolute monocyte count in MF patients has been shown to predict outcome [26,27]; therefore, we also analyzed the prognostic role of the monocytic compartment in our patients. We could confirm that an AMC ≥10⁹/L in PB was associated with inferior survival; however, the percentage of CD14-positive cells in BM did not predict an unfavorable outcome. This finding suggests that, similar to findings in CMML, aberrations of monocytopoiesis may be more easily detected in PB than in BM.
The clinical relevance of our findings is clearly supported by our observation that myelomonocytic skewing was associated with an inferior outcome. Moreover, the prognostic impact of myelomonocytic skewing was independent of other established prognostic factors, since the effect retained significance in a multivariate Cox regression analysis. Considering the fact that additional mutations were significantly more frequent in MF patients with myelomonocytic skewing in this study, and the fact that additional mutations have been demonstrated to have an adverse impact on prognosis by others, one may speculate that myelomonocytic skewing detected by in vitro cultures may at a functional level reflect aberrations of hematopoiesis at the molecular level. Whatever the exact basis for myelomonocytic skewing may be, in vitro cultures may help to more comprehensively study hematopoiesis in patients with complex disturbances of blood formation.
Patients
This study is based on an Austrian clinicopathological registry of patients diagnosed with MPN according to the 2008 WHO diagnostic criteria [28] between 1997 and 2020, created by clinicians and hematopathologists at the Departments of Hematology and Clinical Pathology of the Medical University of Vienna, Austria. The main eligibility criteria for entry into this study were (i) the availability of representative, treatment-naïve BM biopsies (hematoxylin-eosin or Giemsa staining and silver impregnation after Gomori) confirming the diagnosis of myelofibrosis, and (ii) the determination of circulating hematopoietic progenitor cells, which has been an integral part of the diagnostic work-up at our department in patients with suspected myeloid malignancies for many years and was in most cases performed at the time of diagnosis [9]. All differential counts used for this study were manual differential counts. In total, 81 patients were included; in 6/81 patients, secondary MF was diagnosed. Diagnosis of secondary post-PV/ET myelofibrosis required the demonstration of ≥2+ marrow fibrosis and/or clinical and morphological features according to the International Working Group for Myeloproliferative Neoplasms Research and Treatment (IWG-MRT) criteria, including worsening of anemia, an increase in splenomegaly (either newly palpable splenomegaly or an increase of more than 5 cm from baseline), or overt leukoerythroblastosis or anisopoikilocytosis with tear-drop erythrocytes consistent with advanced PMF/myelofibrosis with myeloid metaplasia [29]. Leukoerythroblastosis was defined as the presence of immature cells of the myeloid series and nucleated red cells in the circulating blood.
Molecular Analysis
Mutation analysis for MPN driver mutations included allele-specific polymerase chain reaction techniques to screen for Janus kinase 2 (JAK2), calreticulin exon 9 (CALR) and myeloproliferative leukemia virus oncogene (MPL) mutations [31]. Comprehensive mutational profiles in MF patients were determined by targeted re-sequencing as previously described [32]. DNA isolated from granulocytes or peripheral blood mononuclear cells was processed using the TruSight Myeloid Panel kit (Illumina, San Diego, CA, USA) to generate indexed amplicon-based libraries. Equimolar amounts of libraries were pooled into multiplexes, which were then sequenced 150 bp paired-end on an Illumina HiSeq3000 instrument. Read alignment and variant calling were performed using the BaseSpace software (Illumina, San Diego, CA, USA). Variants called in transcribed regions or at splice sites were selected and further filtered for common variation. Other filters were adjusted for TruSight targeted sequencing and included insufficient sequencing read depth (<200) and low variant allele frequency (VAF < 0.05).
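As a minimal illustration of the two post-processing thresholds named above (read depth ≥ 200 and VAF ≥ 0.05), the filtering step could be sketched as follows; note that the dict schema and the field names "depth" and "vaf" are assumptions for illustration, not the actual BaseSpace output format:

```python
# Hypothetical variant filter mirroring the thresholds described in the text:
# discard calls with sequencing read depth < 200 or variant allele frequency
# (VAF) < 0.05. The field names "depth" and "vaf" are assumed for illustration.
MIN_DEPTH = 200
MIN_VAF = 0.05

def filter_variants(variants):
    """Keep only variant calls passing both the depth and the VAF threshold."""
    return [v for v in variants
            if v["depth"] >= MIN_DEPTH and v["vaf"] >= MIN_VAF]
```

In a real pipeline these thresholds would be applied alongside the region and common-variation filters mentioned above.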
Statistical Analysis
The log-rank test was used to determine if individual parameters were associated with overall survival (OS). OS was defined as the time from sampling to death (uncensored) or last follow up (censored). A multivariate Cox regression analysis of overall survival was used to describe the relation between the event incidence, as expressed by the hazard function and a set of covariates. Dichotomous variables were compared between different groups with the use of the chi-square test.
The Mann-Whitney U test was used to compare unmatched groups when continuous variables were not normally distributed. Results were considered significant at p < 0.05. Statistical analyses were performed with SPSS version 19.0.0 (SPSS Inc., Chicago, IL, USA); the reported p values were two-sided.
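As an illustration of the group comparison described above, the two-group log-rank statistic can be computed from first principles. The paper used SPSS; this standalone sketch is only meant to show the calculation, with event indicators 1 = death (uncensored) and 0 = censored:

```python
import math

def logrank_test(times1, events1, times2, events2):
    """Two-group log-rank test on right-censored survival data.

    times*: follow-up times; events*: 1 = death (event), 0 = censored.
    Returns (chi-square statistic, two-sided p value) with 1 degree of freedom.
    """
    # Distinct times at which at least one event occurred in either group.
    event_times = sorted({t for t, e in zip(times1, events1) if e} |
                         {t for t, e in zip(times2, events2) if e})
    obs_minus_exp, variance = 0.0, 0.0
    for t in event_times:
        n1 = sum(1 for s in times1 if s >= t)   # at risk in group 1
        n2 = sum(1 for s in times2 if s >= t)   # at risk in group 2
        d1 = sum(1 for s, e in zip(times1, events1) if s == t and e)
        d2 = sum(1 for s, e in zip(times2, events2) if s == t and e)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue  # hypergeometric variance is undefined for a single subject
        obs_minus_exp += d1 - d * n1 / n        # observed minus expected events
        variance += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / variance
    # For 1 degree of freedom the chi-square survival function reduces to erfc.
    return stat, math.erfc(math.sqrt(stat / 2))
```

For identical survival experience in both groups the statistic is 0 (p = 1), while clearly separated event times yield a small p value. Production analyses (e.g., SPSS, R's survival package, Python's lifelines) additionally handle ties and stratification more robustly.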
Conclusions
In conclusion, we report for the first time a study which investigates the phenomenon of myelomonocytic skewing as determined by semisolid in vitro cultures in patients with MF. We show that the presence or absence of myelomonocytic skewing can discriminate these patients regarding clinical, phenotypic and molecular characteristics. Moreover, the clinical relevance of our findings is further supported by the different outcome of both groups, stratified by whether or not patients showed myelomonocytic skewing. More generally, we think that myelomonocytic skewing as determined by semisolid in vitro cultures may still be an important functional method to complement molecular analyses and to comprehensively study disturbed hematopoiesis in various conditions.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6694/12/8/2291/s1, Table S1: Circulating CFU-GM and BFU-E numbers, CFU-GM/BFU-E ratio and driver mutation status in patients with myelofibrosis, Table S2: Multivariate Cox regression analysis of overall survival, Figure S1: Overall survival of myelofibrosis patients stratified by the presence or absence of absolute monocytosis ≥ 10⁹/L (a) and the proportion of CD14-positive bone marrow cells > 5% (b).
Author Contributions: B.G. performed the administration of data; E.J. performed colony assays; E.B., E.F., F.S., D.A., R.J., R.K. performed molecular analyses and R.J. in addition, assisted in writing the manuscript; I.S.-K. confirmed the diagnoses and A.-I.S. in addition performed pathologic analyses; H.G. provided patient cohort, patient samples and clinical information and assisted in writing the manuscript; K.G. directed the research, collected, analyzed and interpreted the data and wrote the manuscript. All authors had the opportunity to review the manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Hormonal Regulation of Mammalian Adult Neurogenesis: A Multifaceted Mechanism
Adult neurogenesis—resulting in adult-generated functioning, integrated neurons—is still one of the most captivating research areas of neuroplasticity. The addition of new neurons in adulthood follows a seemingly consistent multi-step process. These neurogenic stages include proliferation, differentiation, migration, maturation/survival, and integration of new neurons into the existing neuronal network. Most studies assessing the impact of exogenous (e.g., restraint stress) or endogenous (e.g., neurotrophins) factors on adult neurogenesis have focused on proliferation, survival, and neuronal differentiation. This review will discuss the multifaceted impact of hormones on these various stages of adult neurogenesis. Specifically, we will review the evidence for hormonal facilitation (via gonadal hormones), inhibition (via glucocorticoids), and neuroprotection (via recruitment of other neurochemicals such as neurotrophins and neuromodulators) on newly adult-generated neurons in the mammalian brain.
Introduction
Adult neurogenesis, resulting in adult-generated functioning neurons, is still one of the most captivating research areas of neuroplasticity. While the first accounts were met with decades of skepticism, methodological advances-including the introduction of the synthetic thymidine analog 5-bromo-2′-deoxyuridine (BrdU) and the use of cell-type-specific markers-helped to establish neurogenesis in adult rodents [1,2]. Adult-generated neurons have also been found in numerous other species, including marmosets [3,4], macaques [5], opossums (Monodelphis domestica) [6], and even humans [7,8]. Ultimately, adult neurogenesis was accepted as a real phenomenon and has since been observed in almost all mammals examined so far [9,10], but see [11].
Numerous exogenous (e.g., voluntary exercise and exposure to environmental enrichment) and endogenous (e.g., hormones and neurotrophins) factors influence these different neurogenic stages [13,[45][46][47]. Specifically, a factor influencing cell proliferation either up-or down-regulates the birth of new cells, while a factor influencing survival promotes or prevents differentiation, maturation, and/or integration. Interestingly, the individual neurogenic stages might be influenced independently of one another. As such, cell proliferation can be up-regulated without influencing the other stages or cell survival might be down-regulated without altering other stages. Therefore, it is essential to investigate the impact of exogenous and endogenous factors on each of the neurogenic stages. While numerous reviews have discussed the impact of hormones such as gonadal steroids [48][49][50][51][52][53][54] and glucocorticoids [49,[55][56][57] on adult neurogenesis, the following review will highlight the multifaceted impact of hormones on the various stages of mammalian adult neurogenesis. To our knowledge, this approach has rarely been used. Specifically, we will discuss hormonal facilitation (via gonadal steroids), inhibition (via glucocorticoids), and neuroprotection (via the recruitment of brain-derived neurotrophic factor (BDNF) and the neuromodulators serotonin (5-HT) and oxytocin (OT)) of mammalian adult neurogenesis. The impact specifically on the various neurogenic stages will be reviewed in the DG and the SVZ/MOB system, but nontraditional neurogenic brain regions will also be discussed.
Hormonal Facilitation of Adult Neurogenesis
Various factors have been shown to facilitate adult neurogenesis by altering the different neurogenic stages (including cell proliferation, cell survival, and neuronal differentiation) independently of one another [13,45,46]. For example, voluntary exercise increased DG cell proliferation and survival, whereas exposure to an enriched environment only increased DG cell survival [58]. Therefore, it is essential to investigate the impact of any neurogenic factor on each stage of neurogenesis separately. Most studies on gonadal steroid regulation of adult neurogenesis have focused on androgens (e.g., testosterone and dihydrotestosterone (DHT)) and estrogens (e.g., estradiol, estrone, and estriol), thus we will discuss the evidence of these hormones acting as neurogenic factors in the adult mammalian brain-highlighting the specific stages of adult neurogenesis that can be impacted.
Androgens
Androgens are hormones that influence male reproductive activity; play a role in social behavior, cognition, and mood; and are potent regulators of neural plasticity [59][60][61][62][63]. Here, we will discuss the evidence that neurogenic stages might be affected by the natural fluctuations of androgens and by manipulations of the androgen system (namely castration, TX, the bilateral removal of testes) and androgen replacement.
Natural Fluctuations of Androgens
Mammals commonly display seasonal reproduction, which is associated with variations in blood androgen levels [64][65][66][67]. The seasonally reproductive meadow vole (Microtus pennsylvanicus) displays a photoperiod-dependent reproductive status-with exposure to a long photoperiod resulting in larger testicular weight and higher blood androgen levels [68,69]. Consequently, male meadow voles that display high androgen levels during the breeding season (long photoperiod) and low androgen levels during the non-breeding season (short photoperiod) have been used as a model to study the impact of seasonally fluctuating androgen on adult neurogenesis. One such study found that cell proliferation in the hilus, but not GCL, was higher in wild-living meadow voles captured during the breeding compared to the non-breeding season [70]. No other neurogenic stages were investigated. A subsequent study [41] aimed to address factors (such as age, previous experience, and capture-induced stress response) that may function as potent regulators of adult neurogenesis [71][72][73][74] and cannot easily be controlled in a wild sample [70]. To assess hippocampal cell proliferation and survival, laboratory-reared meadow voles were acclimated to a long or short photoperiod to simulate the breeding or non-breeding season, respectively [41]. Higher DG cell survival, but not proliferation, was found in reproductively-active versus -inactive males. As laboratory-reared voles likely lack the same complex demands as wild-living voles, another study used endogenous adult neurogenesis markers-eliminating the need for captivity-in wild-living meadow voles to assess DG neurogenesis [69]. In this study, reproductively-active males displayed less cell proliferation and neuronal differentiation in the GCL and SGZ than reproductively-inactive voles. 
It is important to mention that this study was solely correlational (with many uncontrolled variables such as age or the heightened glucocorticoid levels during the breeding season). Therefore, future studies should verify the effects of androgens on all neurogenic stages experimentally. Collectively, the findings from these studies suggest that prolonged exposure to high circulating androgen levels during the breeding season inhibits cell proliferation but enhances cell survival in meadow voles (see Table 1). Blood androgen levels also change due to sexual experience-increasing before, during, and following sexual activity, which, in turn, can impact adult neurogenesis [92][93][94][95]. Indeed, one acute mating encounter increased DG, but not SVZ, cell proliferation in male Sprague-Dawley rats and cell survival in the accessory olfactory bulb (AOB), but not MOB, of male Wistar rats without altering neuronal differentiation [75][76][77][78]. Interestingly, one acute mating encounter in male C57BL mice did not alter cell survival in the AOB or MOB, but it increased neuronal differentiation in the glomerular cell layer of the MOB [79]-suggesting a potential species difference. Alternatively, methodological differences might explain the contrasting findings (sexually experienced rats [77] versus sexually naïve mice [79]).
Chronic mating exposure increased DG cell proliferation and survival in male Sprague-Dawley rats [75,76] as well as DG cell survival in male CD1 mice [80] without altering neuronal differentiation. The majority of these DG-generated cells displayed a neuronal phenotype [75,76,80]. It is noteworthy that this upregulation of adult neurogenesis occurred even though the initial mating-induced testosterone peak had returned to baseline [75]-supporting a previously observed dissociation between sexual behavior and circulating testosterone levels [96]. In contrast, chronic mating exposure did not alter adult neurogenesis in the mating circuitry of male Syrian hamsters (Mesocricetus auratus) [15]. It is not clear at this time whether these contradictory findings between the DG and the mating circuitry reflect differences in methodology (daily versus weekly mating exposure), species (rat and mouse versus hamster), or brain region (DG versus mating circuitry).
Taken together, the above-mentioned studies suggest that fluctuating androgen levels impact adult neurogenesis in a neurogenic stage-specific manner. Higher androgen levels facilitate hippocampal adult neurogenesis, particularly cell survival, but not proliferation or neuronal differentiation. The impact of androgens on adult neurogenesis is also brain region-specific, as androgens did not alter adult neurogenesis in the SVZ or in the mating circuitry.
Castration (TX)
TX reduces circulating androgen levels profoundly, and TX is accompanied by the loss of mating behavior [97,98]. Only one study to our knowledge has examined the impact of short-term castration on DG adult neurogenesis [81]-showing that GCL and hilar cell proliferation was not altered following TX in Sprague-Dawley rats.
In contrast, long-term TX reduced cell proliferation in the GCL and SGZ, but not hilus, in male Sprague-Dawley rats [82]. Similarly to the DG, cell proliferation in the mating circuit of male Syrian hamsters was reduced following long-term TX [15]. Interestingly, long-term TX did not alter DG cell proliferation in male BALB/c and C57BL/6J mice [84,85]-suggesting a possible species difference. In addition, long-term TX also reduced cell survival in the GCL and SGZ in male Sprague-Dawley rats [81][82][83]. However, long-term TX did not alter cell survival in the hilus of male Sprague-Dawley rats or the mating circuit of male Syrian hamsters [15,81]-suggesting that the effect might be brain region-specific. Furthermore, neuronal differentiation seems to display a species-specific regulation. Namely, long-term TX did not alter neuronal differentiation in the hippocampus of male Sprague-Dawley rats or BALB/c mice but decreased neuronal differentiation in male C57BL/6J mice [82][83][84][85]. Together, these data suggest that long-, but not short-term, castration negatively impacts adult neurogenesis, and this effect appears to be brain region-, species-, and neurogenic stage-specific (see Table 1).
TX and Replacement with Androgen
Following TX, androgen replacement commonly occurs via two types of androgens (testosterone and testosterone propionate) or via the testosterone metabolites 5α-dihydrotestosterone (DHT) and estradiol [99]. Short-term androgen replacement increased cell proliferation in the cortical and medial AMY of male meadow voles without altering the number of adult-generated cells in the central AMY, DG, or HYP, or neuronal differentiation in the AMY [86]. Interestingly, DG cell survival depended on the time point of estradiol benzoate replacement (see Table 1) [87].
Long-term androgen replacement increased survival in the GCL, but not hilus, in male Sprague-Dawley rats without altering GCL neuronal differentiation [81,90]-suggesting a brain region-specific effect. The effect of testosterone replacement also appears to be dose-dependent. While testosterone doses of 0.5 and 1 mg (resulting in hyperphysiological levels) increased GCL survival, a low (0.25 mg, resulting in a level similar to gonad-intact males) or high (100 mg/pellet) dose failed to alter GCL proliferation and survival [81,88]. Additional support for the dose-dependent effect comes from in-vitro studies [100,101]-showing enhanced neurite outgrowth with lower testosterone concentrations and apoptosis with higher concentrations. The length of hormonal replacement also seems to matter, as only 30-day, but not 15-or 21-day, treatment increased DG cell survival in male Sprague-Dawley rats [83,89]. TX slowly leads to the complete elimination of sexual behavior and testosterone replacement only leads to the full recovery of mating after 8 weeks [97,102]-providing support for the long-term impact of hormonal replacement.
Long-term estradiol treatment did not promote DG neurogenesis in male castrated rats [81,90]. Long-term DHT increased GCL, but not hilus, cell survival in male Sprague-Dawley rats without altering neuronal differentiation [81]. Interestingly, this increase was not observed in middle-aged (11-12 month-old) Sprague-Dawley rats [91]. Finally, DHT treatment in rats pre-treated with an androgen receptor antagonist failed to show the DHT-induced hippocampal adult neurogenesis [103]. Therefore, the negative impact of TX on adult hippocampal neurogenesis can be reversed by long-term androgen replacement via activation of androgen receptors (see Table 1).
Estrogens
Estrogens are hormones that influence motivated behaviors and various cognitive functions [60,104,105]. They are also potent regulators of neural plasticity, play a role in neuronal excitability, and are involved in synaptogenesis via dendritic spine synapse formation [104,106,107]. Here, we will discuss the evidence that neurogenic stages might be affected by the natural fluctuations of estrogens and by manipulations of the estrogen system (namely ovariectomy, OVX, the bilateral removal of ovaries) and estrogen replacement.
Natural Cyclic Fluctuations of Estrogen
Estrogen levels fluctuate significantly across the female estrous cycle. During diestrus, the 17β-estradiol level increases gradually, rises to its maximum level in proestrus, and subsequently decreases and reaches its lowest level near the end of estrus [108]. Using the female Sprague-Dawley rat, a spontaneous ovulator that displays a continuous cycling of reproductive hormones, researchers found that rats injected with BrdU during proestrus (highest estrogen levels) displayed higher DG, but not SVZ, cell proliferation than females injected during other phases of the estrous cycle [109]. Interestingly, such alterations in DG cell proliferation across the estrous cycle were not observed in female C57BL/6 or BALB/c mice [85,110]-suggesting a possible species difference between mice and rats (see Table 2). Cyclic estrogen levels also affect DG cell survival (assessed 4, 7, 14, and 21 days following BrdU injection)-female Sprague-Dawley rats showed higher DG cell survival during proestrus than estrus [109,111]. This difference remained until 21 days, at which point the difference in DG cell survival across proestrus and estrus was no longer present. It was noted that the majority of adult-generated cells were neurons and neuronal differentiation was not altered [109,111].
Unlike female rats or mice, female meadow and prairie voles are induced ovulators, in which the exposure to a male or male pheromones elicits behavioral estrous [128][129][130]. It is of interest to note that meadow and prairie voles (Microtus ochrogaster) display remarkable differences in social behaviors and life strategy. Meadow voles are promiscuous [131], whereas prairie voles are socially monogamous and form lasting pair-bonds [132]. In the wild, female voles have low blood estrogen levels during the non-breeding season, but once primed, their blood estrogen levels remain elevated throughout the breeding season [133]. Researchers captured wild-living female meadow voles across breeding seasons and contrary to findings in mice and rats found that reproductively-active females displayed lower GCL and hilus cell proliferation than reproductively-inactive females [70]. A possible explanation for the discrepancy in findings between mice/rats and meadow voles might be the differences in the ovulation onset-spontaneous versus induced ovulation. Alternatively, captive housing of wild-living meadow voles might have introduced confounding variables. Using endogenous markers to eliminate the need for captive housing (a potential confound), researchers observed that reproductively-active females showed lower cell proliferation and neuronal differentiation in the GCL and SGZ than reproductively-inactive females [69]. Other confounding variables might include age, previous experience, and pregnancy status-as all wild-captured female meadow voles during the breeding season were pregnant. Not surprisingly, pregnancy (which is characterized by dramatic fluctuations in steroid hormones [134]) and age (which is associated with changes in circulating 17β-estradiol levels [135]) have previously been identified as potent modulators of adult neurogenesis [71][72][73]136,137].
To address these confounding variables, researchers used laboratory-reared meadow voles and exposed them to a male (to induce behavioral estrous) or female conspecific (control) [112]. Male-exposed females were considered reproductively-active, conversely female-exposed females were considered reproductively-inactive. Reproductively-active females displayed lower GCL cell proliferation and survival than reproductively-inactive females. When the rates of adult-generated cells were compared between the proliferation and survival time points, the data indicate that high estrogen levels might have enhanced cell survival.
Using female prairie voles, researchers found that primed (via short-term male pheromone exposure) females displayed an increase in cell proliferation in the SVZ and along the rms [113]. Interestingly, in another study short-term male exposure did not alter cell proliferation in the SVZ, AMY, caudate putamen, cingulate cortex, DG, or HYP [17]. These contradictory findings between the two studies [17,113] might be due to methodological differences including the type of exposure and the type of control group used. Specifically, one study [113] used a fine wire mesh that allowed the animals to see, smell, and have limited physical contact with each other while preventing mating. The animals in the other study [17] were housed in the same cage allowing unrestricted social interaction including mating behavior. Long-term male exposure (allowing unrestricted social interaction) increased cell survival in the AMY and HYP but not caudate putamen, cingulate cortex, DG, or MOB of female prairie voles [17]. In the DG and SVZ/MOB, the majority of adult-generated cells expressed a neuronal phenotype. There were no group differences in neuronal differentiation.
Taken together, these studies suggest that fluctuating estrogen levels impact adult neurogenesis in a species-specific manner (see Table 2). In some species (e.g., rat and prairie vole) high estrogen levels are associated with a facilitation of adult neurogenesis. In other species (e.g., meadow vole and mouse) high estrogen levels are linked to a reduction or no alteration of cell proliferation and survival.
Ovariectomy (OVX)
OVX reduces circulating estrogen levels [107] as well as the number of estrogen receptors (ER; beta, but not alpha) [138]. Short-term OVX caused a drastic reduction in DG cell proliferation in female Sprague-Dawley and Wistar rats [109,114], whereas long-term OVX did not alter DG cell proliferation in female Sprague-Dawley, Wistar, and Long-Evans rats [111,114,115]. Long-term OVX did not alter DG cell proliferation or neuronal differentiation in C57BL/6 mice [110]. Interestingly, in a different mouse strain, BALB/c, long-term OVX reduced cell proliferation and neuronal differentiation [85].
To summarize, short-term depletion of estrogen negatively impacts DG adult neurogenesis, while long-term depletion might have a species-specific impact (see Table 2). This time-dependent manner of OVX on adult neurogenesis mirrors the results of OVX on hippocampal dendritic spine density [107]. Specifically, spine density decreases gradually for the 6 days following OVX. No further decrease is observed up to 40 days following OVX.
OVX and Replacement with Estrogen
Following OVX, estrogen replacement might occur via three main forms of estrogen, namely estrone (E1), estradiol (E2, which includes the optical isomers 17β-estradiol and 17α-estradiol), and estriol (E3). The vast majority of studies examining neuroplasticity have used estradiol and its analog estradiol benzoate, as it is the most prevalent and potent form of estrogen [108,125].
Short-term estrogen replacement increased DG cell proliferation in female Sprague-Dawley rats and meadow voles-thereby reversing the OVX-induced reduction in cell proliferation [109,111,[116][117][118][119][120][121]. Similarly, short-term estrogen replacement increased cell proliferation in the prairie vole SVZ, but not the rms or MOB [113]-suggesting a potential brain region-specific effect. Interestingly, estradiol replacement in female C57BL6/J mice led to a decrease in SVZ cell proliferation [122]-suggesting a potential species-specific effect. Alternatively, this difference might be due to species differences in baseline hormonal levels. Voles, which are induced ovulators, exhibit consistently high levels of circulating estrogen during the breeding season (20-30 days), while rats and mice exhibit high estrogen fluctuations across a 4-day estrous cycle [49]. Therefore, estrogen treatment does not cause highly unnatural hormonal levels in voles.
The effects of estrogen replacement also seem dose-specific as a 10 µg dose (that results in circulating estrogen levels in the proestrus range [139,140]) increased whereas other doses (such as 0.3, 1, or 50 µg) did not alter DG cell proliferation and a high dose (100 µg/100 g body weight) reduced cell proliferation in the AOB [111,120,123]. It is noteworthy that dose and type of estrogen are related factors-as one dose might be effective for one type of estrogen while ineffective for another type of estrogen [120]. Underlying pharmacokinetic and pharmacological differences between the different types of estrogen might be causing these differences [108]. For example, the administration of estrogen esters yields different peak estradiol levels-higher levels following estradiol valerate and benzoate treatment than estradiol cypionate treatment [141]. In addition, the impact of estrogen replacement also appears time-specific. On one hand, the estrogen-induced reversal of OVX-induced reduction in cell proliferation is transient-short-term (30 min or 4 h) estrogen replacement increases whereas long-term (48 h) estrogen replacement decreases hippocampal cell proliferation [116,117]. On the other hand, the latency of estrogen replacement following OVX seems to matter. Following a brief latency (1 week), estrogen replacement increased, whereas a long latency (28 or more days) did not alter hippocampal cell proliferation [111,124].
Long-term estrogen treatment did not alter hippocampal cell proliferation in female and male rats [90,111,125,126]-regardless of estrogen-type, dose, or sex. Interestingly, long-term estrogen replacement altered hippocampal cell survival in a sex-specific manner. Namely, hippocampal cell survival was not altered in males but reduced in females [90,126]. Notably, the impact on cell survival might be estrogen type-specific. Long-term treatment with estradiol benzoate and estrone reduced hippocampal cell survival, whereas 17β-estradiol treatment increased hippocampal cell survival in female rats [90,125,126]. It is worth mentioning that another study (using 17β-estradiol) found an increase in cell survival in the VMH, arcuate nucleus of the hypothalamus, and the dorsal medial hypothalamus of mice following chronic estrogen treatment [127]. At the moment, it is not known whether this finding indicates a brain region-, species-, or estrogen type-specific impact.
To summarize, estrogen impacts cell proliferation in a dose-, estrogen-type-, time-, and brain region-specific manner (see Table 2). There is also a species-specific effect-for example, estrogen replacement increased AMY cell proliferation in the promiscuous meadow vole but did not alter AMY cell proliferation in the pair-bonding prairie vole [124]. However, it should be noted that the various methodological approaches across studies make it difficult to derive patterns or conclusions with certainty. It cannot be ruled out that methodological differences (e.g., type of estrogen, subjects' ages, and estrogen dosages) might also have influenced the alterations in adult neurogenesis.
Hormonal Inhibition of Adult Neurogenesis
Exposure to stressors, such as predation, is a ubiquitous part of the animal kingdom and commonly triggers a stress response. In turn, the stress response causes the activation of the hypothalamic-pituitary-adrenal (HPA) axis, leading to increased glucocorticoid release [89]. These steroid hormones (corticosterone in rodents and cortisol in humans) seem to play an important role in neuroplasticity-especially in limbic brain regions. In the DG, high glucocorticoid levels suppress long-term potentiation [142], cause dendritic atrophy [143][144][145][146], and can result in neuronal loss [142,144,145,147]. High glucocorticoid levels alter spine density in the AMY and cause cell loss in the prefrontal cortex [148][149][150][151][152]. Furthermore, steroid hormones also influence adult mammalian neurogenesis [57,153]. Glucocorticoid administration reduces DG cell proliferation and survival in male and female rats [154][155][156][157][158] as well as DG cell proliferation in male mice [159]. Similarly, the administration of a glucocorticoid receptor agonist reduces DG cell proliferation [160]. On the contrary, adrenalectomy, which results in the removal of circulating glucocorticoids, leads to an increase in DG adult neurogenesis [161][162][163] and eliminates the stress-induced suppression of DG cell proliferation [164]. Other means of HPA axis inhibition reverse the stress-induced suppression of DG adult neurogenesis in male mice and rats [165][166][167]. The following section will discuss the impact of stressors that are associated with glucocorticoid release on cell proliferation, cell survival, and neuronal differentiation.
Hormonal Inhibition of Cell Proliferation
Exposure to various acute laboratory-specific as well as ethologically-relevant stressors reduced DG cell proliferation in numerous species, without altering SVZ cell proliferation [4,164,[168][169][170][171][172][173][174][175][176][177]-suggesting a potential brain region-specific regulation (see Table 3 for detail). Such stress-induced inhibition of DG cell proliferation might be time-specific. Restraint stress reduced DG cell proliferation at 6 h, but not at time points immediately, 2 h, or 3 h after conclusion of the exposure [168,178,186]. Interestingly, the timeline for inescapable shock differed substantially-with a reduction of DG cell proliferation 7 days, but not 1 h, 1 day, or 2 days after conclusion of the exposure [173,197]. Furthermore, the stress-induced inhibition appears transient-as cell proliferation returns to baseline levels (1 day following restraint and 14 days following inescapable shock) [168,173]. Evidence also supports the notion that the stressor-induced impact on DG cell proliferation might also be regulated in a sex-specific (with males potentially showing higher sensitivity in rats [171,176] but see [181]), species-specific [180], and age-dependent manner [181] (see Table 3 for more detail).
It is noteworthy that the impact of stress might be related to the intensity and/or length of the stressor. Even though the exposure to a 20-min stressor (e.g., foot-shock, predator odor, social defeat) results in a robust increase in corticosterone levels, this length of stressor does not alter DG cell proliferation in male Wistar or Sprague-Dawley rats [198][199][200]. Similarly, one acute 40-min social defeat exposure (comprising 5 min of instigation, 5 min of defeat, and 30 min of threat) as well as three 40-min social defeat exposures do not alter DG cell proliferation in male CFW mice [183]. Using a short acute stressor (5 min of forced swimming) revealed differential impacts on DG cell proliferation dependent on the type of coping style ('reactive' versus 'proactive'). Specifically, male wild house mice with a long attack latency (reactive coping style) showed a reduction in DG cell proliferation in comparison to male wild mice with a short attack latency (proactive coping style) [179]. Indeed, previous research has shown that predictability and controllability can lessen the negative consequences of stress on the brain [201][202][203] and might protect against stress-induced inhibition of adult neurogenesis [171,174].
Subchronic laboratory-specific as well as ethologically-relevant stressors resulted in a reduction of DG cell proliferation in various species [182][183][184][185] (see Table 3 for details). This stress-induced reduction in DG cell proliferation is long-lasting-as 21 days following social defeat DG cell proliferation was still reduced [185]. Interestingly, subchronic psychosocial stress did not alter AMY cell proliferation in mice-possibly suggesting a brain region-specific difference [184]. Alternatively, methodological differences might explain the contradictory findings, as the length of direct exposure to the dominant animal might create a more or less intense social defeat encounter.
Chronic stress exposure (regardless of type of stressor or length) leads consistently to a reduction in DG cell proliferation in various species [165,170,178,[186][187][188][189][190]. Interestingly, 21-day exposure to daily chronic mild stress did not alter DG cell proliferation in male Sprague-Dawley rats [82,89]. It is possible to speculate that such contradictory results can be explained by methodological differences-namely rats were exposed to behavioral tests prior to cell proliferation assessment. In addition, DG cell proliferation was lowered in response to varying lengths and a variety of ethologically-relevant stressors in several mammalian species [191][192][193][194][195][196].
The impact of chronic stress might be time-dependent. Specifically, 21 days of daily foot-shock experience reduced cell proliferation 2 h, but not 24 h, after the last foot-shock [198]. Furthermore, the impact of chronic stressors might also be region-specific [82,165,195]. Interestingly, social defeat stress and subsequent isolation housing in long-tailed hamsters reduced cell proliferation in the AMY and VMH, without altering DG cell proliferation [204]. At this point, it is not clear whether the lack of an effect in the hippocampus reflects a species or a methodological difference.
Hormonal Inhibition of Cell Survival
Although it has not been studied in detail yet, there is evidence that acute stress exposure negatively impacts cell survival (see Table 4). To our knowledge, only one study to date assessed the impact of an acute laboratory-specific stressor on cell survival and observed a reduction in DG cell survival [174]. Similarly, studies that investigated the impact of ethologically-relevant stressors observed the reduction in DG cell survival [164,200]. It is noteworthy that stress exposure did not impact immediate survival (2-day old adult-generated cells), but had negative impacts on both short-term (7-day old adult-generated cells) and long-term (28-day old adult-generated cells) survival in male Sprague-Dawley rats [200]. The stressor intensity might play a role in the longevity of the stressor-induced reduction of DG cell survival, as predator odor exposure reduced short-term (7-day), but not long-term (21-day) survival in male Sprague-Dawley rats [164].
[Table 4 rows flattened in extraction: brain region-specific effects of chronic mild stress and restraint in CD-1 mouse, prairie vole (PV), and Sprague-Dawley rat — ↓ AMY, DG, VMH; 0 CA 1, CA 3, hilus, MPOA [80,82,87,192,195,205].] Abbreviations used: AMY, amygdala; CA 1, CA region 1 in the hippocampus; CA 3, CA region 3 in the hippocampus; DG, dentate gyrus; MPOA, medial preoptic area; VMH, ventromedial hypothalamus; ↑: increase; ↓: decrease; 0: no change; ↔: mixed findings; -: no data.
Subchronic and chronic stress exposure leads consistently to a reduction in DG cell survival in various species [80,82,158,165,178,191,192,195,205]-regardless of laboratory-specific or ethologically-relevant stressor (see Table 4). The majority of adult-generated DG cells display a neuronal phenotype [80,83,165,178,192,206]-suggesting that exposure to a chronic stressor reduces adult neurogenesis. Research has shown that the stress-induced reduction of cell survival might be brain region-specific [80,82,195].
It has also been shown that the type of stressor might impact cell survival in a sex-specific manner. Chronic social isolation resulted in lower levels of DG cell survival in intact female prairie voles (42 days of social isolation [195]) but did not alter DG cell survival in male Sprague-Dawley rats (12 or 34 days of social isolation [83,206]). Indeed, sex differences in the influence of stress on neural plasticity have previously been noted. For example, in response to chronic restraint stress males display atrophy of apical CA3 dendrites, whereas females display atrophy in basal CA3 dendrites [207]. Stress exposure also alters HPA axis functioning in a sex-specific manner [187,208]. However, at the moment it cannot be ruled out that species differences (prairie vole versus rat) or length of social isolation underlie these differences.
Interestingly, if BrdU is used to label cells prior to a subchronic (e.g., social defeat) or chronic stressor (e.g., chronic daily foot-shock exposure, chronic twice daily unpredictable stress, or daily chronic restraint stress), the rate of survival of adult-generated DG cells is not altered in male Wistar rats [170,178,185,198].
Hormonal Impact on Neuronal Differentiation
Stress exposure has resulted in mixed findings for its impact on neuronal differentiation. Exposure to acute stressors (such as 20-min psychosocial stress or 30 trials of uncontrollable foot-shock) as well as exposure to various chronic stressors (including 21 days of chronic restraint stress; 21 days of chronic unpredictable stress; 10 or 32 days of social isolation; and 10, 18, or 35 days of daily chronic social defeat) did not alter neuronal differentiation in male C57BL/6 mice, Sprague-Dawley rats, or Wistar rats [83,170,172,178,191,192,200,205,206]. Interestingly, other studies have found that stressors decrease neuronal differentiation. Namely, exposure to an acute stressor (30-min foot shock and 30-min restraint)-a procedure that causes long-lasting and robust increase in serum corticosterone-decreased the percentage of adult-generated DG neurons (assessed by BrdU/Dcx double-labeling) in male Balb/C mice [168,169]. Furthermore, the exposure to 80 sets of tail shock reduced DG adult neurogenesis (assessed by BrdU/NeuN double-labeling) in male Sprague-Dawley rats [174]. Similarly, exposure to a chronic stressor (21 days of daily restraint stress or daily foot-shock exposure) reduced DG neuronal differentiation (assessed by BrdU/Dcx or BrdU/NeuN double-labeling) in male CD1 mice and male Wistar rats [80,198]. Furthermore, 42 days of social isolation reduced the rate of neuronal differentiation (assessed by BrdU/NeuN double-labeling) in the DG and AMY of female prairie voles [195]. At this time, it is not clear whether these contradictory findings might possibly suggest a sex difference, strain difference, difference in stressor, or differences in methodology (e.g., BrdU/NeuN double-label vs. Dcx-labeling).
In sum, the stress-induced release of glucocorticoids is one of the most profound environmental suppressors of adult neurogenesis. Indeed, laboratory-specific as well as ethologically-relevant stressors inhibit multiple neurogenic stages in various mammalian species (including mouse, rat, tree shrew, marmoset, and macaque). Furthermore, acute, subchronic, as well as chronic stress exposure results in a potent suppressive effect on adult neurogenesis-suggesting that stress duration may play a lesser role in affecting adult neurogenesis.
Hormonal Neuroprotective Effects
In addition to facilitating adult neurogenesis, hormones might also have a neuroprotective effect on adult-generated neurons. Here, we will discuss the hormonal neuroprotection by recruiting other neurochemicals as well as the evidence of hormones ameliorating the stress-induced reduction of adult neurogenesis.
Hormonal Neuroprotection via Recruitment of Other Neurochemicals
Adult neurogenesis-just one aspect of the highly complex process of neuroplasticity-is not solely regulated by hormones [47]. Indeed, gonadal hormones might have neuroprotective properties in part by interacting with neurotrophic factors as well as neuromodulators [209]. Here, we will discuss the interaction of estrogen and brain derived neurotrophic factor (BDNF)-a neurotrophin that regulates adult neurogenesis. Furthermore, we will review the interaction between estrogen and serotonin (5-HT) and oxytocin (OT), as 5-HT and OT have been shown to play a role in adult neurogenesis. While very little research has been conducted on the interaction between other gonadal hormones (such as testosterone) and BDNF, 5-HT, or OT, we have included these studies into our discussion.
BDNF interacts with estrogen [231][232][233]. Specifically, researchers found that estrogen receptors colocalize to neurons expressing BDNF and its receptor trkB in the basal forebrain [234]. Such colocalization was also observed in the cerebral cortex, HYP, and hippocampus [235]. Furthermore, researchers found an estrogen-sensitive response element on the BDNF gene [236]-which allows estrogen to have a direct genomic impact on BDNF expression. Consistent with this notion, it has been found that BDNF mRNA levels and BDNF immunoreactivity in the hippocampus vary throughout the estrous cycle in the rat [237][238][239]. While estrogen administration increases the expression of BDNF and its receptor in the cortex, MOB, and hippocampus [236,240,241], OVX leads to a noticeable reduction of BDNF mRNA levels in the hippocampus, AMY, cerebral cortex, and MOB [236,[240][241][242][243]. Interestingly, this OVX-induced reduction in BDNF mRNA levels can be reversed by estrogen replacement following OVX [236,237,[240][241][242][243][244]. Estrogen-treated animals also show more retrograde transport of BDNF in forebrain circuits-a mechanism by which BDNF exerts its neuroprotective role [245]. While the specific underlying mechanism is not fully understood, these results taken together convincingly suggest an interaction between estrogen and BDNF. It can further be speculated that this interaction might be involved in mediating adult neurogenesis.
Hormonal Neuroprotection via Recruitment of 5-HT
5-HT regulates diverse brain functions such as autonomic nervous system reactivity, sleep cycles, and appetite [246,247]. Furthermore, 5-HT has a role in regulating various emotional behaviors such as anxiety, aggression, and affiliative behaviors [248]. Serotonergic projections originating from the brain stem raphe nuclei innervate nearly every part of the forebrain including the HYP, AMY, prefrontal cortex, and hippocampus where the 5-HT effects are mediated via 15 different 5-HT receptors [249]. 5-HT seems to play a role in the regulation of adult neurogenesis [250][251][252]. Specifically, the depletion of serotonin (by ablating 5-HT neurons using a 5-HT neurotoxin) reduced DG cell proliferation in male and female Wistar rats, but not male Lister hooded rats [114,[253][254][255]. Lesion-induced reduction in cell proliferation in rats was reversed by using fetal 5-HT grafts [254]. Acute treatment with fluoxetine (an indirect 5-HT agonist that selectively inhibits 5-HT reuptake) did not alter DG cell proliferation in male Sprague-Dawley rats [256]. Interestingly, the direct manipulation of 5-HT receptor activity via receptor agonists or antagonists resulted in oppositional effects on mediating DG cell proliferation. The acute activation of the 5-HT 1A receptor increased DG and SVZ cell proliferation in male Wistar rats and female C57Bl/6 mice [257,258], whereas the acute blockade resulted in a reduction of DG cell proliferation in male Sprague-Dawley rats [259]. Interestingly, the acute blockade of 5-HT 2 receptors mirrored the effects of 5-HT 1A receptor activation [258]. While acutely activating the 5-HT 1B receptor did not alter DG cell proliferation in male Wistar rats [257], it reduced SVZ cell proliferation in female C57Bl/6 mice [258]-suggesting a potential brain region- or species-specific effect.
Further support for a species-specific effect of the regulation of 5-HT receptor activity comes from the finding that a 5-HT 2C agonist did not alter DG cell proliferation in male Wistar rats [257] but reduced DG cell proliferation in female C57Bl/6 mice [258]. To our knowledge, there is only one study to date that assessed the impact of acute 5-HT system manipulation on cell survival. The researchers noted that an acute treatment with a partial 5-HT 1A agonist increased the survival of adult-generated MOB and DG neurons in opossums [6]. Similarly, there is only one study we are aware of that investigated the impact on neuronal differentiation. The researchers found that the activation of 5-HT 1A R increased the number of Dcx-labeled cells in the hippocampus [258].
Subchronic (5 days) depletion of serotonin (by inhibiting 5-HT synthesis) reduced cell proliferation in the DG and SVZ of male Wistar rats [260]. Subchronic (7 days) activation of 5-HT 1A receptor had no effect on DG cell proliferation, whereas treatment of a 5-HT 1A receptor antagonist reduced the number of BrdU-labeled cells in the DG [258]. Furthermore, subchronic treatment with 5-HT 1A R agonist did not alter the number of Dcx-labeled cells in the DG of female C57Bl/6 mice [258].
Chronic treatment (28 days) with fluoxetine increased DG cell survival in adult (3 month old), but not aged (6 or 12 month old) male C57BL/6 mice [267]. The majority of these adult-generated cells expressed a neuronal phenotype-suggesting that chronic fluoxetine treatment increased DG adult neurogenesis. Similarly, chronic treatment (28 days) with fluoxetine increased the number of immature DG neurons in male adult, but not aged, Wistar rats [263,266]; and chronic treatment (14 days) with a partial 5-HT 1A agonist increased the number of immature neurons in the hippocampus of male Sprague-Dawley rats [270]. Further, a 14-day treatment with 5-HT 1A agonist delivered via an osmotic pump, but not via daily injections, increased hippocampal cell survival and showed a higher number of neuronal hippocampal cells than saline-treated animals in male Lister hooded rats [255,264].
Various studies suggest that ovarian steroids, such as estrogen, interact with the 5-HT system [104]. One example of such an interaction is the localization of estrogen receptors to 5-HT neurons in various species including guinea pigs, macaques, mice, and rats [271][272][273][274][275]. Furthermore, there is evidence that estrogen affects the function of the 5-HT system [276]. Acute estrogen treatment (32-h duration) increased levels of 5-HT 2A mRNA in the dorsal raphe of male castrated rats [277]. Chronic estrogen treatment also decreased the expression of 5-HT 2C receptor mRNA in the HYP of spayed female pigtail macaques (Macaca nemestrina) without altering the expression of 5-HT 1A or 5-HT 2A mRNA [278]. Interestingly, chronic estrogen treatment in female OVX rats reduced 5-HT 1A mRNA in the hippocampus [279]. The contradictory results of these two studies [278,279] might suggest a possible brain region-specific effect or a species difference. Another example of the interaction of estrogen and 5-HT involves the serotonin reuptake transporter (SERT), the main mechanism for terminating 5-HT neurotransmission. Estrogen treatment in the female rhesus monkey raphe nuclei reduced the expression of SERT mRNA [280]-suggesting that estrogen can alter 5-HT neurotransmission.
A potential underlying mechanism for the enhancing effect of 5HT on adult neurogenesis is the role of 5-HT in the regulation of BDNF mRNA. For example, using a selective 5-HT reuptake inhibitor causes an increase of BDNF mRNA in the hippocampus [281]. Furthermore, the stress-induced reduction of BDNF mRNA in the hippocampus was prevented by the pretreatment with a 5-HT antagonist [282].
While more research needs to directly assess the estrogen stimulation of hippocampal cell proliferation via 5-HT, researchers found that the administration of a precursor to 5-HT can reverse the reduction in hippocampal cell proliferation following OVX, whereas estrogen treatment was unable to reverse the OVX-induced reduction in cell proliferation in rats treated with a 5-HT antagonist [114].
Hormonal Neuroprotection via Recruitment of OT
OT is released during sexual activity and it plays an essential role in facilitating sexual and affiliative behavior, including the development of a pair bond [283][284][285]. Furthermore, OT contributes significantly to the initiation of maternal behavior, regulates the selective bond between mother and offspring, and might play an important role in paternal behavior [286][287][288]. OT is primarily produced in the HYP, which projects to the pituitary gland as well as to various regions within the brain [283]. Some evidence has accumulated that OT might be a factor influencing adult neurogenesis.
Indeed, an acute peripheral or central (into the hippocampus) OT administration upregulated DG cell proliferation in male rats without altering cell proliferation in the SVZ [289]. This effect was dose-dependent as 1 mg/kg dose led to an increase in cell proliferation, whereas 10 mg/kg caused no change. Subchronic OT treatment increased DG adult neurogenesis [289]. An acute OT administration did not alter the survival of adult-generated DG cells as assessed 1 or 3 weeks following OT administration [289].
Subchronic peripheral OT administration increased the survival of adult-generated DG cells in male rats [289]. The majority of these cells expressed a neuronal phenotype. Subchronic peripheral OT administration did not alter neuronal differentiation in the DG of male rats [289].
Various studies suggest that ovarian steroids, such as estrogen, interact with the OT system. Specifically, researchers found that estrogen receptors (specifically ER beta) colocalize to OT-producing neurons in the HYP [290,291]. Natural fluctuations of the gonadal hormones influence the OT system. Specifically, prior to parturition, which leads to an increase in estrogen [134], OTR expression is increased [292]. Additionally, estrogen and testosterone treatment increase OTR binding and OT mRNA levels in the brain, whereas these levels are reduced following castration [293][294][295]. Future studies should investigate the underlying mechanism for this OT mediation of adult neurogenesis by investigating whether OT receptors are expressed by proliferating precursor cells as well as the mechanism by which estrogen acts.
Hormonal Amelioration of Stressor-Induced Reduction of Adult Neurogenesis
Exposure to stressors and the associated upregulation of glucocorticoids have been shown to downregulate neurogenesis. There is evidence to suggest that gonadal hormones might mediate the impact of the stress response on adult neurogenesis.
Stress exposure or corticosterone administration inhibit adult neurogenesis and alter the functioning of the BDNF, 5-HT, and OT systems [296][297][298]-chemicals involved in the neuroprotection of adult-generated neurons. Interestingly, gonadal steroids can confer resiliency to the stress-induced reduction of adult neurogenesis. For example, TX and chronic mild stress in male rats reduced DG cell proliferation and survival more than chronic mild stress alone [82]. Similarly, TX and isolation stress in male rats resulted in a lower survival rate of adult-generated DG neurons than isolation alone [83]. It should be noted that the majority of these adult-generated cells expressed a neuronal phenotype. Furthermore, environmental manipulation of gonadal steroids (i.e., mating exposure, which, in turn, alters the testosterone system) can buffer against the negative impact of stress in male rats [80]. Specifically, males exposed to both daily restraint stress and mating had more BrdU-labeled DG cells (indicating a higher level of cell survival) than rats that were only restrained. The majority of these cells expressed a neuronal phenotype, but the number of BrdU/NeuN double-labeled cells did not differ across groups-suggesting that mating activity buffers against stress-induced reduction of hippocampal adult neurogenesis.
Pharmacological manipulations of gonadal steroids can also buffer against the negative impact of stressors on adult neurogenesis. Estrogen treatment prevented the chronic stress-induced dendritic retraction in the hippocampus of female OVX Sprague-Dawley rats [299]. Estrogen treatment also attenuated hippocampal neuronal loss in chronically stressed female OVX rats [300]. Testosterone treatment in male Sprague-Dawley rats prevented the reduction in hippocampal cell proliferation following social defeat stress [185].
Conclusions
The evidence we reviewed here strongly indicates that hormones have a multifaceted impact in regulating adult neurogenesis (Figure 2). We highlight that gonadal hormones seem to facilitate while glucocorticoids seem to inhibit adult neurogenesis. Furthermore, gonadal steroids have been shown to have a neuroprotective effect on adult-generated cells by interacting with BDNF, 5-HT, and OT. These findings are not surprising as the hippocampus and other neurogenic regions (such as the AMY and MOB) are enriched with receptors for gonadal hormones, prolactin, glucocorticoids, BDNF, 5-HT, and OT. However, the exact mechanisms-whether acting on astrocytes or directly on progenitor cells-for hormones to impact adult neurogenesis in such a diverse pattern still remain to be elucidated. Future studies should systematically investigate the functional implications of this multifaceted regulation of hormones on motivated behaviors. Such investigations might further elucidate the observed differences across species, brain regions, and age of subjects.
Figure 2. Hormones have a multifaceted impact on adult neurogenesis. This model diagram illustrates some of the effects of various hormones on hippocampal adult neurogenesis, including the facilitation by gonadal hormones, inhibition by glucocorticoids, and protection via the recruitment of other neurochemicals. BDNF, brain-derived neurotrophic factor; diff, difference; long, long-term; OVX, ovariectomy; OT, oxytocin; prolif, proliferation; 17β, 17β estradiol; short, short-term; treat, treatment; TX, castration; ↑: increase; ↓: decrease; ↔: mixed findings; 0: no change.
Funding: This work was partially funded by grants from the National Institutes of Health [NIMH R01-108527, NIMH R01-109450, and NIMH R21-111998] to Z.W.
Acknowledgments:
We thank the anonymous reviewers for their constructive and helpful suggestions.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
AMY amygdala AOB accessory olfactory bulb
Modelling of Hypertension Risk Factors Using Penalized Spline to Prevent Hypertension in Indonesia
Hypertension is an increase in blood pressure that damages target organs, causing conditions such as stroke, coronary heart disease, and ventricular hypertrophy. Hypertension occurs if systolic blood pressure reaches 140 mmHg or more and diastolic pressure reaches 90 mmHg or more. According to WHO, of 50% of known hypertensive patients, only 25% received treatment, and only 12.5% were treated well. Nationally, 25.8% of Indonesia's population suffers from hypertension. In this study, we modeled the risk of hypertension considering age, heart rate, family history of hypertension, stress level, and body mass index as factors that influence the risk of hypertension. The cross-sectional survey was conducted in August 2018 at the Surabaya Hajj Hospital. Previous research used logit and gompit logistic regression methods, but the results obtained were not maximal. Therefore, in this study the researchers propose modeling hypertension risk factors with a nonparametric approach using a penalized spline estimator. The classification accuracy obtained with the nonparametric approach is 96%. Based on this result, we conclude that the nonparametric approach gives a better outcome and can be used to model the risk of hypertension.
Introduction
Hypertension, or high blood pressure, is a condition in which a person's blood pressure is elevated over a long period, which increases mortality [1]. Hypertension is an increase in blood pressure that damages target organs, causing conditions such as stroke, coronary heart disease, and ventricular hypertrophy [2]. The criteria for hypertension used in the determination of cases are a measured systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg [3]. According to WHO, of 50% of known hypertensive patients, only 25% received treatment, and only 12.5% were treated well. Based on the Household Health Survey (SKRT) in 2004, the prevalence of hypertension in Java was 41.9%, with a range of 36.6-47.7% across provinces. Urban prevalence was 39.9% and rural prevalence 44.1% [4].
Hypertension is not a disease with a single cause but is influenced by many factors, namely obesity, unhealthy eating patterns, lack of physical activity, psychological stress, alcohol drinking habits, coffee consumption patterns, and smoking habits [5]. The risk of hypertension is elevated in people aged above 18 years, in those with a family history of hypertension, and in people who smoke [6]. A previous study using the Prevalence Odds Ratio (POR) method stated that someone with obesity is 3.8 times more likely to suffer from hypertension, while another risk factor was associated with a 6.2-fold increase in hypertension risk [7].
Previous research on hypertension has been carried out by several researchers. [8] used gompit and logit regression and obtained classification accuracies of 81.5% and 85.2%, respectively. The data used in this study are categorized into two categories: success and failure. One suitable method to model the probability of someone suffering from hypertension or not is nonparametric regression with a penalized spline estimator.
Splines are pieces of polynomials over different segments, joined together at knots [9]. It is this segmented nature that gives splines more flexibility than ordinary polynomials to adapt effectively to the local characteristics of the function or data [10]. The penalized spline multipredictor model estimator uses an additive model in which the response variable (y) is modeled as the sum of functions of the predictor variables (x). Spline estimators have been studied by [11][12][13]. With this study, it is expected that the risk of hypertension in Indonesia can be reduced.
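As a sketch (our notation; the paper's own formulas were garbled in extraction), the additive logistic model with penalized-spline components can be written as:

```latex
% Additive penalized-spline logistic model (illustrative notation)
\operatorname{logit} P(Y_i = 1)
  = \beta_0 + \sum_{j=1}^{4} f_j(x_{ji}),
\qquad
f_j(x) = \sum_{d=1}^{p} \beta_{jd}\, x^{d}
       + \sum_{k=1}^{K_j} b_{jk}\, (x - \kappa_{jk})_{+}^{p},
```

where $(u)_+ = \max(u,0)$, $\kappa_{jk}$ are the knots, and the knot coefficients $b_{jk}$ are shrunk by a roughness penalty $\lambda_j \sum_k b_{jk}^2$, with each smoothing parameter $\lambda_j$ chosen by minimizing GCV.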
Research Methods
The data used in this study are primary data obtained from questionnaires and interviews at the Cardiac Polyclinic of Surabaya Haji Hospital, conducted from August to September 2018 with 54 respondents. The variables used for this study consist of a response variable and predictor variables.
The response variable (Y) is the incidence of hypertension, while the predictor variables used are age (X1), body mass index (X2), heart rate (X3), and stress (X4). The research steps are the steps that must be taken to solve the existing problems. The steps for identification using a nonparametric logistic regression approach with OSS-R are as follows: 1) estimation of the smoothing function for each predictor, with the following steps: a. determine the order of the polynomial, the number of knots, and the smoothing parameter (λ) based on the minimum GCV value; b. define the design matrix by entering the optimal polynomial order and knot points; c. define the coefficient estimates by entering the optimal smoothing parameter value; d. calculate the fitted values via the penalized least-squares solution (X'X + λD)^(-1)X'y; 2) iteration of the local scoring and backfitting algorithms to obtain an additive model based on the penalized spline estimator, with the following steps: a. define the response variable and predictor variables; b. determine the initial values to be used in iteration 0 (h = 0); c. perform local scoring for h = 0, 1, 2, … with the following steps: i. determine the partial residuals; ii. determine the smoothing function for iteration h + 1; iii. determine the RSS value and check convergence by comparing successive iterations against the chi-square value with one degree of freedom
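Step 1 above (selecting the smoothing parameter by minimum GCV, then solving the penalized least-squares problem) could be sketched as follows. This is a minimal illustration with a truncated power basis and toy data; the function names and data are our assumptions, not the paper's OSS-R code:

```python
import numpy as np

def pspline_design(x, knots, degree=2):
    """Truncated power basis: 1, x, ..., x^degree, then (x - knot)_+^degree."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.where(x > k, (x - k)**degree, 0.0) for k in knots]
    return np.column_stack(cols)

def penalty(n_poly, n_knots):
    """Ridge-type penalty D: only the knot coefficients are penalized."""
    return np.diag([0.0] * n_poly + [1.0] * n_knots)

def fit_pspline(x, y, knots, lam, degree=2):
    """Penalized least squares: beta = (X'X + lam*D)^(-1) X'y."""
    X = pspline_design(x, knots, degree)
    D = penalty(degree + 1, len(knots))
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X, beta

def gcv_score(x, y, knots, lam, degree=2):
    """GCV(lam) = (RSS/n) / (1 - tr(H)/n)^2, with H the smoother matrix."""
    X, beta = fit_pspline(x, y, knots, lam, degree)
    D = penalty(degree + 1, len(knots))
    H = X @ np.linalg.solve(X.T @ X + lam * D, X.T)
    n = len(y)
    rss = float(np.sum((y - X @ beta)**2))
    return (rss / n) / (1.0 - np.trace(H) / n)**2

# Toy data: choose the smoothing parameter on a grid by minimum GCV.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
knots = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
grid = [10.0**p for p in range(-6, 3)]
lam_opt = min(grid, key=lambda l: gcv_score(x, y, knots, l))
```

In a full implementation this λ search would be repeated for each of the four predictors, and the fitted smooths combined through the local scoring/backfitting loop of step 2.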
Result and Analysis
The first step in estimating the nonparametric regression model based on the penalized spline estimator is to determine the order, the number of knot points, the knot points themselves, and the optimal smoothing parameters for each predictor based on the minimum GCV criterion. We obtain the results for each predictor as given in Table 1. Each observation under the penalized spline estimator has its own model; here an explanation is given for the 34th observation only, as the process for the other observations is the same. The estimated additive nonparametric logistic regression model based on the penalized spline estimator for the 34th observation is as follows: a. The first predictor variable at the 34th observation is equal to 25. This value meets the criterion X1 < 50, so the corresponding basis function is used for the first component. In complete terms, the penalized spline estimator for the 34th observation combines the estimated component functions of all four predictors. Then, we classify the categories Y = 0 for hypertension and Y = 1 for normal. This is done by determining the threshold value used as a comparison, or cut-off, in the identification of hypertension, as given in Table 2. The threshold used as the reference for assigning category 0 or category 1 is determined by taking the highest classification accuracy and, among ties, the highest threshold value with that accuracy. If the fitted value ŷ is greater than the threshold value, the observation is classified as normal, and vice versa. The classification results based on the fitted value for the 34th observation are given in Table 3. Previous research explained that the models are significant, but provided no test or information showing that the models are stable or consistent. Based on the APPER values, the classification accuracies of the logit and gompit models, 85.2% and 81.5%, are smaller than that of the penalized spline model.
Conclusion
Modeling hypertension risk factors using nonparametric logistic regression based on the penalized spline estimator is better than parametric logistic regression using the gompit and logit link functions, as it increases the classification accuracy of hypertension risk factors up to 96%. It is expected that the public will realize the importance of a healthy lifestyle in order to maintain health and control blood pressure, so as to avoid hypertension, as an effort to reduce the incidence of hypertension in Indonesia.
Table 1
Optimum Lambda Value for each Predictor Variable

After obtaining the optimal initial value for each predictor, the next step is to iterate using the local scoring algorithm, with estimated parameter values as follows. The second predictor variable at the 34th observation equals 19.22768787; this value falls within the criterion x2 < 22.22222, so the corresponding segment of the spline function is used for f2(x2,34). The fourth predictor variable at the 34th observation equals 16; this value falls within the criterion x4 ≥ 9.5, so the corresponding segment of the spline function is used for f4(x4,34).
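The local scoring iteration mentioned above can be illustrated with a minimal sketch: an IRLS loop around a weighted penalized least-squares fit in a truncated-power spline basis. The basis, the single smoothing parameter `lam`, and all function names are simplifying assumptions of this sketch; the paper fits one smoother per predictor with GCV-selected knots and smoothing parameters.

```python
import numpy as np

def pspline_basis(x, knots, degree=1):
    """Truncated-power basis: [1, x, ..., x^p, (x - k)_+^p for each knot k]."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0)**degree for k in knots]
    return np.column_stack(cols)

def local_scoring_logistic(x, y, knots, lam=1.0, degree=1, n_iter=25):
    """Local-scoring (IRLS) fit of a penalized-spline logistic model.

    Each iteration forms the adjusted response z = eta + (y - p)/(p(1-p))
    with weights w = p(1-p), then solves a weighted ridge problem that
    penalizes only the truncated-power (knot) coefficients.
    """
    B = pspline_basis(np.asarray(x, float), knots, degree)
    pen = np.zeros(B.shape[1])
    pen[degree + 1:] = lam                      # penalize knot coefficients only
    P = np.diag(pen)
    beta = np.zeros(B.shape[1])
    for _ in range(n_iter):
        eta = np.clip(B @ beta, -30.0, 30.0)    # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        w = np.clip(p * (1.0 - p), 1e-6, None)  # IRLS weights
        z = eta + (y - p) / w                   # adjusted dependent variable
        BW = B * w[:, None]
        beta = np.linalg.solve(B.T @ BW + P, B.T @ (w * z))
    return beta, B
```

The fitted probabilities are then recovered as the inverse logit of `B @ beta`, which is also where the cut-off classification of the previous step is applied.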
Table 2.
Threshold Values and Accuracy Classification
Table 3
Classification Accuracy

Based on the calculation, a classification accuracy of 96.6% is obtained, so the estimated nonparametric regression model based on the penalized spline estimator is valid for predicting the incidence of hypertension. As a last step, Press's Q test is carried out to determine the stability of the classification accuracy, i.e. the extent to which the groups can be separated using the existing variables, by comparing the Press's Q value with the chi-square table value at 1 degree of freedom. We validate the classification accuracy of the nonparametric regression model with the Press's Q value as follows:
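Press's Q is the simple statistic Q = (N − nK)² / (N(K − 1)), with N the sample size, n the number of correctly classified observations and K the number of groups, compared against the chi-square critical value at 1 degree of freedom (3.841 at the 5% level). A sketch, with illustrative numbers since the paper's sample size is not reproduced here:

```python
def press_q(n_total, n_correct, n_groups=2):
    """Press's Q = (N - n*K)^2 / (N*(K-1)); values above the chi-square
    critical value at 1 degree of freedom (3.841 at the 5% level)
    indicate a stable classification."""
    return (n_total - n_correct * n_groups) ** 2 / (n_total * (n_groups - 1))

# Illustrative numbers only: N = 88 observations, 85 classified correctly.
q = press_q(88, 85)            # = (88 - 170)^2 / 88
stable = q > 3.841             # stable at the 5% level
```

With any reasonably large N, an accuracy near 96.6% puts Q far above 3.841, so the stability conclusion is insensitive to the exact sample size.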
Two-loop Yang-Mills diagrams from superstring amplitudes
Starting from the superstring amplitude describing interactions among D-branes with a constant world-volume field strength, we present a detailed analysis of how the open string degeneration limits reproduce the corresponding field theory Feynman diagrams. A key ingredient in the string construction is represented by the twisted (Prym) super differentials, as their periods encode the information about the background field. We provide an efficient method to calculate perturbatively the determinant of the twisted period matrix in terms of sets of super-moduli appropriate to the degeneration limits. Using this result we show that there is a precise one-to-one correspondence between the degeneration of different factors in the superstring amplitudes and one-particle irreducible Feynman diagrams capturing the gauge theory effective action at the two-loop level.
Introduction
The study of scattering amplitudes has played a central rôle in the development of string theory since its very beginning. In the seventies and the eighties it was instrumental in showing that superstring theories provide perturbative gravitational models that, at loop level, are free of ultraviolet divergences and anomalies. The analysis of string amplitudes was also crucial in the discovery of D-branes and in the development of the web of dualities among different superstring theories. It is, then, not surprising that this field continues to be under intense study. Recently, there has been renewed interest in several aspects of string perturbation theory in the RNS formalism, with particular focus on contributions beyond one loop: for example, higher-loop diagrams with Ramond external states were discussed in Refs. [1,2]; further, Refs. [3,4,5] focused on the off-shell extension of amplitudes, studying various situations where this is necessary; finally, Refs. [6,7] derived an explicit result for the D 6 R 4 term in the type-IIB effective action, checking the predictions following from S-duality and supersymmetry. Another interesting approach to the pointlike limit of closed string amplitudes as a 'tropical' limit was discussed in [8]. For recent reviews on multiloop string amplitudes, with a more complete list of references, we refer the reader to [9,10].
Two themes in particular have been at the center of much important progress in our understanding of string interactions: the study of the mathematical properties of the world-sheet formulation of string amplitudes, and their relation to the effective actions describing the light degrees of freedom present in the theory. In this paper, we touch on both these aspects by studying in detail the open string degeneration limits of two-loop amplitudes described by a world-sheet with three borders and no handles. In particular, we expand upon the results of [11]: starting from the Neveu-Schwarz (NS) sector of the open superstring partition function in the background of a constant magnetic field strength, we derive the Euler-Heisenberg effective action for a gauge theory coupled to scalar fields in the 'Coulomb phase'. The idea of using string theory to investigate effective actions in constant electromagnetic fields has a long history, and was studied at one loop in [12,13,14], with some results for the bosonic theory at two loops given in [15]. In our analysis we find exact agreement between calculations in field theory and string theory, in the infinite-tension limit, for the two-loop correction to the effective action. Furthermore, we find that the correspondence holds not just for the whole amplitude, but we can precisely identify the string origin of all individual one-particle irreducible (1PI) Feynman diagrams contributing to the effective action. In order to do so, on the string theory side we need to use appropriate world-sheet super-moduli, respecting the symmetry of the Feynman graphs, while on the field theory side we need to use a version of the non-linear gauge condition introduced by Gervais and Neveu in [16], modified by dimensional reduction to involve the scalars also, and given here in Eq. (5.10).
On the formal side, it is advantageous to use the formalism of super Riemann surfaces [17,18,19,20,21,9], in which the complex structure is generalized to a superconformal structure, with local super-conformal coordinates (z|θ). We follow this approach by constructing the two-loop amplitude in the Schottky parametrization, since there is a close relationship between Schottky super-moduli, in particular the 'multipliers', and the sewing parameters of plumbing fixtures. This in turn relates the bosonic world-sheet moduli to the Schwinger parameters associated to the propagators in Feynman graphs, which provides the ideal framework for studying the connection between string integrands and field theory Feynman diagrams. In the bosonic case, it is possible to describe genus h Riemann surfaces as quotients of the Riemann sphere (with a discrete set of points removed) by a discrete (Schottky) group, freely generated by h Möbius transformations. Heuristically, quotienting the Riemann sphere by a Möbius transformation has the effect of cutting out a pair of circles and gluing them to each other along their boundaries. Schottky groups arose naturally in the early treatment of multi-loop string amplitudes [22,23,24,25,26,27] and remained useful [28,18,29,30,31,32] even after alternative methods of analysis were found. In the supersymmetric case, higher genus super Riemann surfaces are similarly generated by quotienting the super manifold CP^{1|1} (with a discrete set of points removed) by a discrete group, generated by h 'super-projective' OSp(1|2) transformations.
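The multiplier of a Schottky generator has a simple bosonic analogue that is easy to compute: for a loxodromic Möbius map represented by a 2×2 matrix of determinant one, the multiplier is the square of the smaller eigenvalue, and the fixed points solve a quadratic. The following sketch (the matrices and fixed-point choices are illustrative, and the supersymmetric OSp(1|2) structure of the text is ignored) shows how the multiplier of a composed element such as S1 S2 can be obtained numerically:

```python
import numpy as np

def multiplier(M):
    """Multiplier k (|k| < 1) of a loxodromic Moebius map from its 2x2 matrix."""
    M = np.asarray(M, dtype=complex)
    M = M / np.sqrt(np.linalg.det(M))          # normalize to det(M) = 1
    lam = np.linalg.eigvals(M)
    small = lam[np.argmin(np.abs(lam))]
    return small**2                             # k = (smaller eigenvalue)^2

def fixed_points(M):
    """Fixed points of z -> (a z + b)/(c z + d); assumes c != 0."""
    (a, b), (c, d) = np.asarray(M, dtype=complex)
    disc = np.sqrt((a - d)**2 + 4 * b * c)
    return (a - d + disc) / (2 * c), (a - d - disc) / (2 * c)

# Generator S1: z -> k1 z (fixed points 0 and infinity), with k1 = 0.01
k1 = 0.01
S1 = np.diag([np.sqrt(k1), 1 / np.sqrt(k1)])
# Generator S2: a scaling with k2 = 0.04, conjugated to have fixed points 1 and 2
k2 = 0.04
T = np.array([[2.0, 1.0], [1.0, 1.0]])          # Moebius map sending 0 -> 1, inf -> 2
S2 = T @ np.diag([np.sqrt(k2), 1 / np.sqrt(k2)]) @ np.linalg.inv(T)
```

Here `multiplier(S1 @ S2)` returns the multiplier of the composed element, which is much smaller than either k1 or k2; this hierarchy is what underlies the multiplier expansions used in the degeneration limits discussed later in the text.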
As is well known, the presence of a constant background field strength in the spacetime description of the amplitudes translates on the world-sheet side into the presence of non-trivial monodromies along either the a or the b cycles of the Riemann surface. It is thus not surprising that the amplitudes we are interested in involve super 1|1-forms (sections of the Berezinian bundle) with twisted periodicities, also known as Prym differentials. The bosonic counterparts of these objects were discussed, in the Schottky parametrization, in [33], and their periods along the untwisted cycles appear in any string amplitude where the fields have non-trivial monodromies [34,35,15,36]. We extend these past results in two directions: first we generalise the twisted period matrix to the supersymmetric case; then we must calculate the supersymmetric version of the twisted determinant to sufficiently high order in the complete degeneration limit, so as to obtain the gauge theory Feynman graphs with multiple gluon propagators. In order to do this, we introduce an alternative formulation of the twisted super-determinant in terms of an integral along a Pochhammer contour, and we show that this simplifies drastically its perturbative evaluation in the Schottky parametrization.
The main result of this paper is to show how the two-loop 1PI Feynman diagrams listed in Fig. 1 arise from the degeneration limits of the superstring result. The graphical notation for the field propagators is explained in detail in Appendix C; here we note in particular that we are using two different types of edges to denote gluons, depending on whether they are polarized parallel or perpendicular to the plane of the background field. We note also that some of the graphs (those in Figs. 1i-1l) include vertices with an odd number of scalars: these vertices arise because of the non-vanishing scalar vacuum expectation values (to which these graphs are proportional); these diagrams appear automatically in the string calculation, and they appear on the field-theory side as a result of having imposed the gauge condition of Gervais and Neveu [16] before dimensional reduction. Our investigation is thus also a contribution to a long-standing program aimed to use string theory to gain insights into field-theory amplitudes, which was started in Ref. [37] in the language of dual models, and generalized to the superstring framework in [38]. The practical usefulness of string theory as an organizing principle for tree-level gauge-theory amplitudes was first noticed and applied in [39,40]. At genus one, several results are available in the literature: they include the derivation of the leading contribution to the Callan-Symanzik β-function of pure Yang-Mills theory in [41], as well as a general analysis of one-loop scattering amplitudes in [42,43,44,45,46]. This was later used to calculate the one-loop five-gluon amplitude in QCD for the first time in [47]. String theory also inspired many developments in the world-line approach to perturbative quantum field theory (QFT), starting with the work of Strassler in [48], with subsequent progress in [49,50], summarized in [51], and more recently, for example, in [52,53]. 
Bosonic strings were also used to compute Yang-Mills renormalization constants at one loop in [54], and one-loop off-shell gluon Green's functions in [55]. At the two-loop level much less is known: explicit QFT amplitudes with only scalar fields were obtained from bosonic strings in [56] and [57,58]. Two-loop amplitudes with gluons, however, have proved difficult to study with this technology [59,60,61]. Our analysis here marks significant progress in this direction, showing that the prescriptions discussed in [11] are indeed sufficient to derive from string theory all the bosonic two-loop 1PI gauge-theory diagrams listed in Fig. 1. The structure of the paper is as follows. In Section 2 we describe the D-brane setup in which our calculations are carried out. In Section 3 we recall the integration measure for the NS sector of open superstrings in the super Schottky parametrization and explain how to modify it in order to accommodate our background. In Section 4 we expand the measure in powers of the Schottky multipliers, and then we identify the appropriate parametrizations to describe the two degenerations of the Riemann surface which are relevant for our purposes: the symmetric degeneration, leading to the diagrams with the topology of Figs. 1a-1l, and the incomplete degeneration, leading to diagrams with only two field-theory propagators and a four-point vertex, depicted in Figs. 1m-1r. An analysis of the various factors contributing to the string amplitude, arising from different world-sheet conformal field theories, then enables us to identify unambiguously each diagram in the field-theory limit. In Section 5 we obtain and discuss the Lagrangian for the world-volume QFT in the appropriate non-linear gauge, and we use it to compute example Feynman diagrams. Finally, in Section 6.1 we compare our string-theory and QFT calculations, and in Section 6.2 we discuss the differences between the present calculation and the analogous calculation using the bosonic string.
In Appendix A we discuss super-projective transformations and the super Schottky group, in Appendix B we give the calculation of the twisted (Prym) super period matrix, and in Appendix C we list the values of all of the Feynman graphs in Fig. 1 with our choice of background fields.
The string theory setup
We consider a stack of N parallel d-dimensional D-branes embedded in a D-dimensional Minkowski space-time, where, as usual, D = 10 for type II theories and D = 26 for bosonic string theory. When d < D − 2, and provided the string coupling g_s is small, so that g_s N ≪ 1, this configuration can be described in terms of open strings moving in flat space and being supported by the D-branes. We will work generically in the 'Coulomb phase', where the D-branes are spatially separated from each other in the directions perpendicular to their world-volumes. Furthermore, on each of the D-branes we switch on a uniform U(1) background field in the {x^1, x^2} plane, with field strength F^{(A)}_{12} = B_A, where B_A is a constant 'magnetic' field on the A-th brane (thus A = 1, . . . , N). The positions of the D-branes in the transverse directions will be labelled by Y^A_I, with I = d, d + 1, . . . , D − 1. Such a D-brane configuration is depicted from various viewpoints in Figs. 2 and 3. A string stretched between branes A and B will have squared length d²_AB = Σ_I (Y^A_I − Y^B_I)², and will receive a classical contribution m_AB to its mass from the elastic potential energy associated with the stretching of the string, m_AB = T d_AB = d_AB/(2πα′), where T is the string tension and α′ the related Regge slope. These strings will also be charged under the magnetic fields B_A and B_B, with the sign of the charge depending on their orientation. Open strings that start and end on the same D-brane are uncharged, and their mass is independent of Y^A_I. For generic values of Y^A_I, this configuration breaks the symmetry of the world-volume theory from U(N) to U(1)^N.
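The classical mass of a stretched string can be sketched numerically, assuming the standard open-string relation m = T·d with tension T = 1/(2πα′); the function name and the sample inputs are illustrative:

```python
import math

def stretched_string_mass(dY, alpha_prime):
    """Classical mass of an open string stretched between two separated D-branes.

    Assumes the standard relation m = T * d with tension T = 1/(2*pi*alpha');
    dY lists the transverse separations Y_A^I - Y_B^I.
    """
    d = math.sqrt(sum(y * y for y in dY))
    return d / (2.0 * math.pi * alpha_prime)
```

Note that for fixed separations the mass diverges as α′ → 0, which is why the field-theory limit is taken with the separations scaled so that the masses m_AB stay finite.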
The theory describing open strings supported by this D-brane configuration is free [12,13]. The constant background magnetic fields on the D-brane world-volumes manifest themselves in the world-sheet picture by altering the boundary conditions of string coordinates in the magnetized plane. On the double cover of the surface, this gives twisted boundary conditions, or, in other words, non-trivial monodromies, to the zero modes in the two magnetized space directions. To describe this setup, we will use the conventions of Section 3 of Ref. [11], which we summarize below.
To begin with, let us briefly consider the spectrum of low-lying string excitations. In the bosonic case, the world-sheet theory, in a covariant approach, comprises D embedding coordinates X^µ and the ghost system (b, c); the holomorphic components of these fields admit the standard mode expansions. In the presence of constant abelian background fields, the theory remains free, but string coordinates in directions parallel to the magnetized plane acquire twisted boundary conditions and must be treated separately. Considering strings ending on branes A and B, it is convenient to introduce the combinations Z^±_AB = (X^1_AB ± i X^2_AB)/√2, which diagonalize the boundary conditions and yield the corresponding mode expansions. After canonical quantization, the modes introduced above satisfy standard commutation relations, except in the magnetized directions, where the commutators are modified by the background field. As usual in covariant quantization, not all states in the Fock space obtained by acting with the creation modes on the SL(2, R)-invariant vacuum |0⟩ are physical: we need to select only the states belonging to the cohomology of the world-sheet BRST charge. In the bosonic theory, the lowest-lying physical state is a tachyon |k⟩ ≡ c_1 |k, 0⟩, with mass-shell condition k² = −m² = 1/α′. The next mass level comprises (D + 2) massless states, which will be the focus of our analysis in the field theory limit: one finds two unphysical states, two null states, and (D − 2) physical polarization states appropriate for massless gauge bosons. A crucial ingredient of our analysis is the mapping between these string states and the space-time states in the limiting quantum field theory: as noticed for instance in Chapter 4 of [62], the action of the world-sheet BRST charge (2.8) on the (D + 2) massless states mirrors the linearized action of the space-time BRST charge for the U(N) gauge symmetry: in particular, the states created by world-sheet ghost oscillators, c_{−1}|k⟩ and b_{−1}|k⟩, behave as the space-time ghosts C and C̄.
Acting with the α^M_{−1} oscillators, on the other hand, generates d states along the D-brane, and n_s = D − d states associated to the n_s directions transverse to the D-brane, representing respectively the d polarisations of the gauge vectors (including two unphysical ones) and n_s adjoint scalars. To be precise, the action of the world-sheet BRST charge Q^W_B mirrors that of the linearised space-time BRST transformation δ_B, where a is an adjoint index, Q^a_µ and Q^a_I stand for a gluon mode and a scalar, depending on whether X^M is parallel or perpendicular to the D-brane, and k_M = {k_µ, 0}.
This simple relation between world-sheet and space-time states is preserved in perturbation theory, when the string coupling is switched on and non-linear terms in the BRST operators must be taken into account. This is expected, since, in a perturbative analysis, fields propagating between interaction vertices are free. In practice, we will test this statement by calculating a string diagram with the world-sheet topology of a degenerating double-annulus, and identifying the contributions coming from the various massless states listed above, as they propagate through the diagram. We will show that each contribution matches the gauge theory result, where the corresponding space-time fields propagate in the matching edge of the relevant Feynman diagram, provided that the gauge used in field theory is the nonlinear Gervais-Neveu gauge, introduced in [16]. In this way, we can identify individual Feynman diagrams in the target field theory directly at the level of the string amplitude, picking a specific boundary of the string moduli space, and identifying the string states as they propagate along the degenerating surface.
A similar analysis holds also in the superstring case. In the RNS formalism one needs to introduce the extra world-sheet fields ψ^µ, β and γ, which are the partners under world-sheet supersymmetry of the ∂X^µ, b and c fields mentioned above. The monodromies for these new fields will be the same as those of their partners, except for a possible extra sign, which is allowed for fields of half-integer weight, and distinguishes the Ramond from the Neveu-Schwarz sectors. In this paper we will focus on the Neveu-Schwarz contributions: the analysis of the states at the first mass level, above the tachyonic ground state |k⟩, parallels that of the bosonic case. The only difference is that the relevant modes are ψ_{−1/2}, β_{−1/2} and γ_{−1/2}: in the superstring partition function, the low energy limit will be performed by focusing on the contributions of states with half-integer weight.
The superstring partition function for the NS sector
From the world-sheet point of view, the interaction among D-branes is described by the string vacuum amplitude (the partition function) with boundaries, as depicted in Fig. 2.
The case of two magnetized D-branes, corresponding to a one-loop amplitude, has been well studied [12,13,63,14]. Here we will focus on planar world-sheets, and most of what we will say in this section applies to surfaces with (h + 1) borders, corresponding to h-loop open superstring diagrams, but restricted to the NS sector, where the super-Schottky formalism described in Ref. [30] can be used. In particular, as discussed in Section 2, we consider parallel magnetized D-branes that can be separated in the directions transverse to their world-volumes. As a consequence, and as depicted in Fig. 4 for a (two-loop) surface with three boundaries, the partition function depends on two sets of variables: the relative distances among D-branes, and the magnetic field gradients between pairs of D-branes.
To be precise, let us label the (h + 1) world-sheet borders with i = 0, 1, . . . , h. Then we can label the D-brane to which the i-th border is attached with the integer A_i, with A_i ∈ {1, . . . , N}, and A_i ≤ A_{i+1}. To get the full amplitude, we will have to sum over the A_i's. Having fixed A_0, . . . , A_h, we can take the A_0-th brane as a reference and define the relative twists ε_i and displacements d^i_I with respect to it, as in Eq. (3.1). The variables ε_i thus form an h-dimensional vector, which we will denote by ε; similarly, the variables d^i_I, which have dimension of length, form n_s h-dimensional vectors, which we will label d_I. The classical mass of the string stretching between the A_0-th brane and the A_i-th brane is then given by Eq. (3.2). Notice finally that, for h = 2, as depicted in Fig. 4, we make a slight variation in this notation by flipping the sign of the second component of the two-dimensional vectors ε and d_I, which will be useful to take full advantage of the extra symmetry at two loops. The string partition function in this setup can be written as follows. For our purposes, it is useful to keep separate the contributions of the different conformal field theory sectors, which leads to the expression in Eq. (3.3), with a field-dependent normalization factor, to be discussed in Section 4.4. There we denote the contributions of the world-sheet ghost systems b, c and β, γ by F_gh, that of the string fields X^I, ψ^I perpendicular to the D-branes by F^(d)_scal, while the contribution of the fields along the D-branes has been separated into sectors parallel (F^(ε)) and perpendicular (F_⊥) to the magnetized directions. Finally, µ denotes collectively the supermoduli: here we use the super-Schottky formalism, reviewed in Appendix A, where the supermoduli are the sewing parameters e^{iπς_i} k_i^{1/2} (with ς_i ∈ {0, 1}) and the fixed points (u_i|θ_i), (v_i|φ_i) of h super-projective transformations, i = 1, . . . , h.
Note that we explicitly associate with each Schottky multiplier k i the phase ς i associated with the NS spin structure around the b i homology cycle. In this parametrization the measure dµ h reads [30] where we denote superconformal coordinates in boldface, and the notation v i −u i indicates the supersymmetric difference The square parenthesis in Eq. (3.4) takes into account the super-projective invariance of the integrand, which allows us to fix three bosonic and two fermionic variables. Θ v 1 u 1 v 2 is the fermionic super-projective invariant which can be constructed with three fixed points, defined in Refs. [64,20], and given explicitly in Eq. (A.12). If we specialize Eq. (3.4) to h = 2 we find Let us now examine in turn the various factors in the integrand of Eq. (3.3). The ghost contribution is independent of both the magnetic fields and the D-brane separations, so we can use the result of Ref. [30], which reads 1 + e iπς 1 k (3.7) In Eq. (3.7), the notation α means that the product is over all primary classes of the super Schottky group: a primary class is an equivalence class of primitive super Schottky group elements, i.e. those elements which cannot be written as powers of another element; two primitive elements are in the same primary class if one is related to the other by a cyclic permutation of its factors, or by inversion. The vector N α has h integer-valued components, and is defined as follows: the i-th entry counts how many times the generator S i enters in the element of the super Schottky group T α : more precisely, we define N i α = 0 for T α = 1 and N i α = N i β ± 1 for T α = S ±1 i T β . Finally, also ς is a vector with h components, with the i-th component denoting the spin structure along the b i cycle, as noted above.
In fact, we need to be more precise about the notation in Eq. (3.7), because the half-integer powers of k_α could indicate either of the two branches of the function. The notation is to be understood in the following way: when the spin structure is ς = 0, we define the eigenvalue of the Schottky group element T_α with the smallest absolute value to be −k_α^{1/2}, see Eq. (A.16). In particular, we take k_i^{1/2} to be positive for i = 1, . . . , h. This corresponds to the fact that spinors are anti-periodic around a homology cycle with zero spin structure (see, for example, Ref. [65]). Furthermore, we expect the partition function to be symmetric under the exchange of the homology cycles b_1, b_2 and b_1^{−1} · b_2 (depicted in Fig. 5), and one can verify that k^{1/2}(S_1^{−1} S_2) is always positive whenever k_1^{1/2} and k_2^{1/2} have the same sign. Our convention puts all three multipliers on the same footing. Note that k_α^{1/2} is not in general positive when T_α is not a generator: for example, the eigenvalues of T_α = S_1 S_2 are positive when the spin structure is zero, so that k^{1/2}(S_1 S_2), as computed in Eq. (A.25), is negative.
The scalar contribution to Eq. (3.3) depends on the separation between the D-branes in the transverse directions, as shown in Fig. 3. We can write F^(d)_scal as a product over the super Schottky group, capturing the non-zero-mode contribution, times a new factor Y(µ, d), as in Eq. (3.8). The explicit form of Y can be found by repeating the calculation performed in Ref. [66] for the bosonic theory, and replacing the period matrix τ with the super-period matrix discussed in Appendix A.2.2. It is instructive, and useful for our later implementation, to consider explicitly the h = 2 case. Let the i = 0, 1, 2 borders of the world-sheet be on the D-branes labelled by A, B and C, respectively. As mentioned above, it is useful in this special h = 2 case to define the i = 2 component d^2_I with the opposite sign with respect to Eq. (3.1), so we have d^1_I = Y^A_I − Y^B_I and d^2_I = Y^C_I − Y^A_I. By so doing, we can then define an additional (redundant) quantity, describing the displacement between the D-branes attached to the i = 1 and i = 2 borders, as d^3_I = Y^B_I − Y^C_I. Now the three distances d^i_I for i = 1, 2, 3 are on an equal footing, reflecting the symmetry of the world-sheet topology, and we have d^1_I + d^2_I + d^3_I = 0 (see Fig. 4). One may easily verify that the product over the n_s transverse directions in Eq. (3.9) evaluates to a function of the squared masses m²_i, defined as in Eq. (3.2). Finally, let us turn to the contribution of the world-sheet fields X^µ, ψ^µ along the world-volume directions of the D-branes. In the absence of magnetic fields, the result can be found in Ref. [30].

Figure 4: The double annulus world-sheet, with three boundaries labeled with i = 0, 1, 2 attached to three D-branes, with Chan-Paton factors A, B, C. The relative positions and background field strengths of branes B and C with respect to brane A determine the masses and the twisted boundary conditions, as described in the text.
In the presence of constant background gauge fields, F_gl gets modified, since string coordinates along the D-branes are sensitive to such backgrounds. The relevant modification to the bosonic theory was derived in Ref. [15]. Using the techniques described in Refs. [33,15,67], it is possible to generalize this construction to the Neveu-Schwarz spin structure of the RNS superstring [11]. The result is that switching on the background fields amounts to multiplying F^(0)_gl by a factor R(µ, ε), where, assuming the background fields to be non-zero only in one plane,

R(µ, ε) = e^{−iπ ε·τ_ε·ε} det(Im τ) / det(Im τ_ε) .

The matrix τ_ε is the supersymmetric analogue of the twisted (or Prym) period matrix, the bosonic version of which was computed with the sewing method in [33,35]. Its calculation is outlined in Appendix B.2. Inspecting Eq. (3.13), we see that F^(ε)_gl can be factorized as the product of a term F^(ε), capturing the contribution along the magnetized plane, times an ε-independent term F_⊥ arising from the unmagnetized directions. In the field theory limit, F^(ε) will generate the contributions of gluons polarized in the plane of the background field, while F_⊥ will give rise to gluons polarized in the transverse directions. Explicitly, one finds an infinite product of the form (3.14)

∏ [1 + e^{iπ(2ε·τ_ε + ς)·N_α} k_α^{n−1/2}] [1 + e^{−iπ(2ε·τ_ε + ς)·N_α} k_α^{n−1/2}] / ([1 − e^{2πi ε·τ_ε·N_α} k_α^n] [1 − e^{−2πi ε·τ_ε·N_α} k_α^n]) .
Focusing now on the h = 2 case, we can use super-projective invariance to fix three bosonic and two fermionic moduli. A convenient gauge choice in the super Schottky formalism is to specify the positions of the fixed points, given in terms of homogeneous coordinates on CP^{1|1}, as in Eq. (3.16), with 0 < u < 1. Implementing this projective gauge fixing in Eq. (3.3), we can finally express the h = 2 partition function as in Eq. (3.19), in terms of the bosonic super-projective invariant built out of four points, (z_1, z_2, z_3, z_4); see Eq. (A.13).
4 Taking the field theory limit
Expanding in powers of the multipliers
We are interested in computing the α′ → 0 limit of the integrand of the superstring amplitude. In this limit, we expect massive string states to decouple, so that one is left with the massless spectrum. Possible contributions from the tachyon ground state cancel after the GSO projection in the superstring case, or should be discarded by hand in the bosonic case. It is in principle non-trivial to take this limit before integration over (super) moduli, since this requires constructing a map between the dimensionless moduli of the (super) Riemann surface and the dimensionful quantities that arise in the computation of field theory Feynman diagrams. This task is considerably simplified in the Schottky parametrization, where, as discussed for example in Ref. [11], the contributions of individual string states can be identified by performing a Laurent expansion of the integrand of the string partition function in powers of the multipliers. One finds a correspondence between the order of the expansion and the mass level of the string, and furthermore, within each mass level, one can track individual states by tracing the origin of each term to a specific factor in the string integrand. The main difference between the bosonic string and the RNS superstring is that for the latter, which we discuss here, the expansion is in powers of k_i^{1/2} rather than k_i, as is already apparent from our discussion in Section 3. More precisely, since the measure of integration contains a factor k_i^{−3/2}, a term proportional to k_i^{(n−3)/2} corresponds to a contribution from a state belonging to the n-th mass level circulating in the i-th string loop (where n = 0 corresponds to the tachyonic ground state). Therefore, all terms with n > 1 acquire a positive mass squared, m² = (n − 1)/(2α′), and decouple in the limit α′ → 0. We conclude that it is necessary to expand the various factors in the integrand of Eq. (3.18) only up to terms of order k_i^{1/2}, in order to get the complete massless field theory amplitude.
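The bookkeeping between powers of the multipliers and mass levels stated above (a term k^((n−3)/2) comes from level n, with m² = (n − 1)/(2α′)) can be encoded in a trivial helper; the function names are illustrative:

```python
from fractions import Fraction

def mass_level(power_of_k):
    """Mass level n of the state contributing a term k^((n-3)/2)
    to the NS-sector expansion (n = 0 is the tachyonic ground state)."""
    return 2 * Fraction(power_of_k) + 3

def mass_squared(n, alpha_prime):
    """Mass squared (n - 1)/(2*alpha') of the level-n state."""
    return (n - 1) / (2 * alpha_prime)
```

So `mass_level(Fraction(-3, 2))` gives 0 (the tachyon, removed by the GSO projection), `mass_level(-1)` gives 1 (the massless level kept in the field-theory limit), and any term with a non-negative half-integer power of k corresponds to n > 1 and decouples as α′ → 0.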
This task is made possible by the fact that the multipliers of only finitely many super-Schottky group elements contribute at order k 1/2 1 k 1/2 2 . The reason is that the leadingorder behaviour of the multiplier k α = k(T α ) is related in a simple way to the index N i α introduced in Section 3: one may verify that unless of course the left-most factor of T α is S ∓1 i . Thus, for every super Schottky group element T α not in the primary class of an element in the set {S 1 , S 2 , S 1 S 2 , S −1 1 S 2 }, the multiplier k Next, we compute F ⊥ , defined in Eq. (3.14). Using the expressions for the multipliers k 1/2 (S −1 1 S 2 ) and k 1/2 (S 1 S 2 ), given in Eq. (A.25), we find The expansion of the determinant of the super period matrix is given in Eq. (A. 33), and, substituted here, leads to the factor det (Im τ ) Notice that logarithmic dependence on (super) moduli must be retained exactly: indeed, as shown in Ref. [11] and discussed here in Section 4.3, it will turn into polynomial dependence on Schwinger parameters in the field theory limit. The expansion of the factor F ( ) , also given in Eq. (3.14), is more intricate, as well as more interesting, because of the dependence on the external fields. Writing where R is the background-field dependent factor of the infinite product appearing in Eq. (3.15), we see that we can separately expand the three factors. The determinant of the twisted super period matrix is by far the most intricate contribution. It is discussed in Appendix B, and a complete expression with the exact dependence on the fields, through , is very lengthy. We will see in Section 4.3, however, that in the field theory limit we must expand in powers of the components of as well: at that stage, we will be able to write a completely explicit expression also for det (Im τ ). The exponential factor in the numerator of Eq. (4.5) can be computed using the expression for τ in Eq. 
(A.32). Finally, the remaining factor in $F^{(\epsilon)}$ is given in terms of factors defined for this purpose. The last required ingredient is the scalar contribution $F_{\rm scal}$, whose mass-dependent piece is accompanied by a remaining, mass-independent factor.

Figure 5: Two types of homology cycles on the double annulus.
This completes the list of the factors in Eq. (3.18). It is now straightforward to combine them, and expand the resulting polynomial in $k_i^{1/2}$ to the relevant order. Before proceeding, however, we must consider more carefully our choice of variables in view of the field theory limit.
A parametrization for the symmetric degeneration
In order to go beyond the specification of the mass states circulating in the string loops, and identify the contribution of individual Feynman diagrams in the field theory limit, we must refine our parametrization of (super) moduli space. Let us now, in particular, concentrate on Feynman diagrams with the symmetric topology depicted in the first two lines of Fig. 1. While individual Feynman diagrams will not be symmetric under the exchange of any two lines when the propagating states are different, we expect that, when summing over all states at a given mass level, the result should be fully symmetric, since there are no features distinguishing the three propagators at the level of the world-sheet geometry. This symmetry requirement will guide our choice of parametrization for the region of moduli space close to this degeneration, along the lines already discussed in Ref. [11].
It is clear that the parametrization in terms of the bosonic moduli $k_1^{1/2}$, $k_2^{1/2}$ and $u \equiv 1 - y + \theta\phi$ will not be sufficiently symmetric, since the first two chosen moduli are multipliers of super-Schottky group generators, while the third one is a cross-ratio of the fixed points. To present the integration measure in a sufficiently symmetric way, we must parametrize it to be symmetric under permutations of the super-Schottky group elements $S_1$, $S_2$ and $S_1^{-1} S_2$. The reason for this is that the homology cycles $b_1$, $b_2$ and $(b_1^{-1} \cdot b_2)$ lift to these super-Schottky group elements on the covering surface $\mathbb{CP}^{1|1} - \Lambda$, and any two of $b_1$, $b_2$ and $(b_1^{-1} \cdot b_2)$ (along with the appropriate choice of $a$-cycles) constitute a good canonical homology basis (see Fig. 5a). Our choice of $S_1$ and $S_2$ as the generators is arbitrary, so, in order to preserve modular invariance, the measure must be parametrized to display the symmetry under permutations of $S_1$, $S_2$ and $S_1^{-1} S_2$. To reinforce this point, note that any other homology cycle built out of $b$-cycles will intersect itself, as is the case for example for the $(b_1 \cdot b_2)$ cycle, depicted in Fig. 5b.
A natural way to symmetrize the measure is to use the multiplier of $S_1^{-1} S_2$ as the third bosonic modulus, instead of $u$. Defining $-e^{i\pi\varsigma_3} k_3^{1/2}$ to be the eigenvalue of $S_1^{-1} S_2$ with the smallest absolute value, so that $k_3$ is the multiplier of that super-Schottky group element, one can compute $k_3^{1/2}$ using Eq. (A.24); the result relates it implicitly to $y$. In these definitions, $\varsigma_3$ is the spin structure around the $b_3 \equiv b_1^{-1} \cdot b_2$ homology cycle, and therefore it is given simply by $\varsigma_3 = \varsigma_1 + \varsigma_2 \pmod 2$; $k_3^{1/2}$ is then positive, just as $k_1^{1/2}$ and $k_2^{1/2}$ are. As discussed in Ref. [11], the field theory limit becomes particularly transparent if one factors the three multipliers $k_i$ in order to assign a parameter to each section of the Riemann surface that will degenerate into an individual field theory propagator. This is done by writing each multiplier as a product of two of the variables $p_i$, where $\sqrt{p_i}$ is defined to be positive. In analogy to the discussion of Ref. [11], the logarithm of each $p_i$ will be interpreted, in the field theory limit, as (minus) the Schwinger proper time, in string units, associated to a propagator.
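As an illustration, the multiplier factorization can be checked against the determinant formula quoted later in Eq. (4.23). The sketch below assumes the assignment $k_1 = p_1 p_3$, $k_2 = p_2 p_3$, $k_3 = p_1 p_2$ (a hypothetical labeling modeled on the bosonic analysis of Ref. [11]) and a leading-order period matrix with $2\pi\,\mathrm{Im}\,\tau_{ii} = -\log k_i$ and off-diagonal entries $-\log p_3$; neither expression is displayed in the text above, so both should be read as assumptions.

```python
# Hedged sketch: check that the (assumed) factorization k1 = p1*p3, k2 = p2*p3,
# k3 = p1*p2, with an (assumed) leading-order period matrix
# 2*pi*Im(tau) = [[-log k1, -log p3], [-log p3, -log k2]],
# reproduces det(Im tau) = (log p1 log p2 + log p2 log p3 + log p3 log p1)/(4 pi^2),
# the expression quoted in Eq. (4.23).
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3', positive=True)
L1, L2, L3 = sp.log(p1), sp.log(p2), sp.log(p3)

k1, k2, k3 = p1*p3, p2*p3, p1*p2      # hypothetical assignment of multipliers

tau_im = sp.Matrix([[-sp.log(k1), -L3],
                    [-L3, -sp.log(k2)]]) / (2*sp.pi)

det = sp.expand(sp.expand_log(tau_im.det()))
target = (L1*L2 + L2*L3 + L3*L1) / (4*sp.pi**2)
assert sp.expand(det - target) == 0

# the set of multipliers is invariant under cyclic relabeling p1 -> p2 -> p3 -> p1,
# consistent with the required permutation symmetry
perm = {p1: p2, p2: p3, p3: p1}
assert {k.subs(perm, simultaneous=True) for k in (k1, k2, k3)} == {k1, k2, k3}
```

The check only tests the internal consistency of the assumed assignment with Eq. (4.23); the exact labeling of the $p_i$ is a convention.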
For bosonic strings, the discussion leading to Eq. (4.12) was sufficient to construct a symmetric measure of integration, prepared for the symmetric degeneration in the field theory limit. In the present case, instead, one must also worry about fermionic moduli: our current choice of $\theta$ and $\phi$ as moduli will not yield a symmetric measure, since they are super-projective invariants built out of the fixed points of $S_1$ and $S_2$ only. In order to find the proper Grassmann variables of integration, we take advantage of the fact that we are allowed to rescale $\theta$ and $\phi$ with arbitrary functions of the moduli, since such a rescaling automatically cancels with the Berezinian of the corresponding change of integration variables. Such a rescaling of course leaves the integral invariant, but it can be used to move contributions between the various factors of the integrand, in such a way that individual factors respect the overall exchange symmetry of the diagram, as we wish to do here. In order to find a pair of odd moduli invariant under permutations of $S_1$, $S_2$, $S_1^{-1} S_2$, we proceed as follows. We define rescaled odd variables $\hat\theta_{ij}$ and $\hat\phi_{ij}$, for $(ij) = (12), (23), (31)$, in terms of rescaling factors $c_{ij}$. For the factors $c_{ij}$ we make a definite choice, with $c_{23}$ and $c_{31}$ obtained by permuting the indices $(123)$, and where $u_3$ and $v_3$ are the fixed points of the transformation $S_1^{-1} S_2$. In Eq. (4.14), we have introduced the symbols $q_i$, $i = 1, 2, 3$. (Note that the spin structures of $q_1$ and $q_2$ are swapped compared with what one might expect. This, however, is reasonable, because the $q_i$ defined in this way factorize the NS sewing parameters $e^{i\pi\varsigma_i} k_i^{1/2}$.) With this choice for $c_{ij}$, one can check that
$$e^{i\pi\varsigma_3}\, d\hat\theta_{12}\, d\hat\phi_{12} \,=\, e^{i\pi\varsigma_1}\, d\hat\theta_{23}\, d\hat\phi_{23} \,=\, e^{i\pi\varsigma_2}\, d\hat\theta_{31}\, d\hat\phi_{31} \,, \qquad (4.16)$$
so that the Grassmann measure of integration has the required symmetry. It is not difficult to rewrite the various objects computed in Section 4.1 in terms of the new variables, and expand the results to the required order in $p_i$.
In order to do so, we use the expansions of the fixed points and of the multipliers in the new variables. With these results, it is straightforward to verify the symmetry of the full string integrand.
In particular, we find that the product of the two-loop measure of integration times the ghost factor takes a manifestly symmetric form, where the contribution of the spin structure to $d\mu_2$ in Eq. (3.6) has been absorbed in $d\hat\theta_{12}\, d\hat\phi_{12}$. Similarly, the contribution of the orbital modes defined in Eq. (4.7) becomes symmetric as well. Here, and in the rest of this section, we understand 'cyclic permutations' to mean cyclic permutations of the indices $(1, 2, 3)$; the indices of $\hat\theta_{12}\hat\phi_{12}$, on the other hand, are not permuted.
In order to reconstruct the full contribution of fields in the directions parallel to the magnetized plane, we still need the other factors appearing in Eq. (4.6). The exponential factor takes a simple form in the new variables. The last factor in Eq. (4.6) is the twisted determinant $\det(\mathrm{Im}\,\tau_\epsilon)$, whose calculation is described in Appendix B. The result for generic values of $u$ is a lengthy combination of hypergeometric functions, which however simplifies drastically in the limit we are considering here, where $u$, proportional to $p_3$, is small. In this limit, Eq. (B.40) simplifies accordingly. Next, we need the contribution of the untwisted gluon sector, given in Eq. (4.3) and rewritten in the current parametrization, where the determinant of the period matrix, given by Eq. (A.33), becomes
$$\det(\mathrm{Im}\,\tau) \,=\, \frac{1}{4\pi^2} \Big( \log p_1 \log p_2 + \log p_2 \log p_3 + \log p_3 \log p_1 \Big) \,. \qquad (4.23)$$
Finally, we need the ingredients for the scalar sector, given above in Eq. (4.9) and Eq. (4.10): the mass contribution and the mass-independent factor take correspondingly symmetric forms. This completes the list of ingredients needed for the analysis of the symmetric degeneration of the surface. We now turn to the calculation of the $\alpha' \to 0$ limit.
Mapping moduli to Schwinger parameters
The last, crucial step needed to take the field theory limit is the mapping between the dimensionless moduli and the dimensionful quantities that enter field theory Feynman diagrams. This $\alpha'$-dependent change of variables sets the space-time scale of the scattering process and selects those terms in the string integrand that are not suppressed by powers of the string tension. The basic ideas underlying the choice of field theory variables have been known for a long time (see for example Ref. [68]), and were recently refined for the case of multi-loop gluon amplitudes in Ref. [11]. The change from bosonic strings to superstrings does not significantly affect those arguments: in the present case we will see that integration over odd moduli will simply provide a more refined tool to project out unwanted contributions, once the Berezin integration is properly handled. Following Ref. [11], we introduce dimensionful field-theoretic quantities, identifying Schwinger parameters and background field strengths. These definitions make it immediately obvious that terms of the form $p_i^{c_i}$ must be treated exactly, as we have done. On the other hand, terms proportional to high powers of the twists $\epsilon_i$ are suppressed by powers of $\alpha'$ in the field theory limit, which is the source of further simplifications in our final expressions.
For completeness, we give here the results for the various factors in Eq. (3.3) as Taylor expansions in powers of $q_i$ (that is, in half-integer powers of $p_i$), but with the field- and mass-dependent coefficients of the leading terms worked out exactly. Beginning with the contribution of gluon modes perpendicular to the magnetic fields, $F_\perp(\mu)$, we find an expression governed by the polynomial $\Delta_0$, which we recognize as the first Symanzik polynomial [69] of graphs with the topology of those in the first two lines of Fig. 1, expressed in terms of standard Schwinger parameters. The result for the contribution of gluon modes parallel to the magnetic field is more interesting, as one begins to recognize detailed structures that are known to arise in the corresponding field theory. Multiplying Eq. (4.19) by Eq. (4.20), and dividing by Eq. (4.21), one finds an analogous expression governed by a field-dependent polynomial $\Delta_B$. Using the fact that $B_1 + B_2 + B_3 = 0$, one can verify that $\Delta_B$ can be understood as the charged generalization of the first Symanzik polynomial for this graph topology, and indeed for vanishing fields $\Delta_B$ tends to $\Delta_0$. It is then easy to see that $F^{(\epsilon)}(\mu)$, in the same limit, reproduces $F_\perp(\mu)$ with the replacement $d - 2 \to 2$, as expected.
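The identification of $\Delta_0$ as the first Symanzik polynomial can be verified combinatorially: for the "sunset" vacuum graph (two vertices joined by three edges, the topology of the first two lines of Fig. 1), summing over spanning trees the product of the Schwinger parameters of the edges not in the tree gives $t_1 t_2 + t_2 t_3 + t_3 t_1$. The following sketch is illustrative only; the graph encoding and helper names are not taken from the text.

```python
# Hedged sketch: first Symanzik polynomial of the two-loop "sunset" vacuum
# graph (two vertices, three edges), computed as a sum over spanning trees of
# the product of Schwinger parameters of the edges NOT in the tree.
from itertools import combinations
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3', positive=True)
vertices = [0, 1]
edges = {0: (0, 1), 1: (0, 1), 2: (0, 1)}   # three propagators between 2 vertices
weight = {0: t1, 1: t2, 2: t3}

def is_spanning_tree(chosen):
    # union-find check: chosen edges connect all vertices without cycles
    parent = {v: v for v in vertices}
    def root(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in chosen:
        a, b = edges[e]
        ra, rb = root(a), root(b)
        if ra == rb:
            return False                     # chosen edges contain a cycle
        parent[ra] = rb
    return len({root(v) for v in vertices}) == 1

# a spanning tree has (number of vertices - 1) edges
U = sum(sp.prod(weight[e] for e in set(edges) - set(tree))
        for tree in combinations(edges, len(vertices) - 1)
        if is_spanning_tree(tree))

assert sp.expand(U - (t1*t2 + t2*t3 + t3*t1)) == 0
```

The same spanning-tree rule applies to any graph topology, which is why the charged polynomial $\Delta_B$ reducing to this $\Delta_0$ at vanishing fields is a meaningful consistency check.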
The contribution from the D-brane world-volume scalars can be obtained combining Eq. (4.24) and Eq. (4.25). One finds an expression in which one recognizes the exponential dependence on particle masses, each multiplied by the Schwinger parameter of the corresponding propagator, which is characteristic of massive field-theory Feynman diagrams. Note that the masses $m_i^2$ appearing in Eq. (4.31) arise via symmetry breaking from the distance between D-branes, and therefore represent classical shifts of the string spectrum: below, for brevity, we will often call 'massless' all string states that would be massless in the absence of symmetry breaking.
The final factor, including the integration measure and the contribution from the ghosts, can be read off Eq. (4.18), and can be organized as in Eq. (4.32). The complete integrand of Eq. (3.3) is the product of $F_\perp$ from Eq. (4.27), $F^{(\epsilon)}$ from Eq. (4.29), $F_{\rm scal}$ from Eq. (4.31) and $d\mu_2\, F_{\rm gh}$ from Eq. (4.32). In the proximity of the symmetric degeneration, it can be organized in a power series in terms of the variables $q_i$, as we have done for individual factors. It is now straightforward to extract the contribution of massless states, which is contained in the appropriate term of this expansion. For bosonic strings, one had to discard the contribution of the tachyonic ground state by hand: in this case, one can simply implement the GSO projection and observe the expected decoupling of the tachyon. We now turn to the analysis of this point.
The symmetric degeneration after GSO projection
Starting with the expression in Eq. (4.33), we can now describe more precisely the connection between the powers of the multipliers and the mass eigenstates circulating in the loops. For the symmetric degeneration, we now see that the power of $p_i$ corresponds to the mass level of the state propagating in the $i$-th edge of the diagram. Indeed, one observes that the level-$n$ term of the measure is
$$dp_i\; p_i^{\,(n-3)/2} \,=\, -\,\frac{dt_i}{\alpha'}\; e^{-\frac{n-1}{2\alpha'}\, t_i}\,,$$
and one recognizes that $dt_i\, e^{-\frac{n-1}{2\alpha'} t_i}$ is a factor one would expect to see in a Schwinger-parameter propagator for a field with squared mass $m^2 = \frac{n-1}{2\alpha'}$. In particular, if $n = 0$, then the state propagating in the $i$-th edge will be a tachyon, and will have to be removed by the GSO projection.
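This mass-level bookkeeping can be checked symbolically. The sketch below assumes only $p = e^{-t/\alpha'}$ (equivalent to $dp/p = -dt/\alpha'$, which appears later in this section) and the measure factor $dp\,p^{-3/2}$; all variable names are illustrative.

```python
# Hedged sketch: with p = exp(-t/alpha'), the level-n term of the measure,
# dp * p**((n-3)/2)  (i.e. dp/p**(3/2) times q**n with q = p**(1/2)),
# becomes -(dt/alpha') * exp(-(n-1) t / (2 alpha')): the Schwinger-parameter
# weight of a propagator with squared mass (n-1)/(2 alpha').
import sympy as sp

t, a, n = sp.symbols('t alpha n', positive=True)   # a stands for alpha'
p = sp.exp(-t/a)
dp_dt = sp.diff(p, t)                              # dp = dp_dt * dt

level_n = dp_dt * p**((n - 3)/2)                   # coefficient of dt
target = -(1/a) * sp.exp(-(n - 1)*t/(2*a))
assert sp.simplify(level_n - target) == 0

m2 = (n - 1)/(2*a)                                 # squared mass at level n
assert m2.subs(n, 0) == -1/(2*a)                   # n = 0: tachyonic
assert m2.subs(n, 1) == 0                          # n = 1: massless
```

The two special cases at the end make explicit why the $n = 0$ terms must be removed by the GSO projection while the $n = 1$ terms carry the massless field theory states.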
A cursory look at $F^{(\epsilon)}$ in Eq. (4.29), $F_\perp$ in Eq. (4.27), $F_{\rm scal}$ in Eq. (4.31) and $d\mu_2\, F_{\rm gh}$ in Eq. (4.18), would suggest that tachyons can propagate simultaneously in any number of edges of the diagram: indeed, we can find terms proportional to $\prod_i dp_i / q_i^3$ times $1$, $q_1$, $q_1 q_2$, $q_1 q_2 q_3$, $\ldots$, which correspond respectively to three, two, one or no edges with propagating tachyons. A closer inspection shows, however, that the nilpotent object $\hat\theta_{12}\hat\phi_{12}$ multiplies only terms with an odd number of factors of $q_i$, a property which is preserved when we multiply terms together. Since the Berezin integral over $d\hat\theta_{12}\, d\hat\phi_{12}$ picks out the coefficient of $\hat\theta_{12}\hat\phi_{12}$, it follows that, after carrying out the Berezin integration, each term must contain an odd number of factors of $q_i$.
As a consequence, after Berezin integration and truncation of the integrand to $\mathcal{O}(p_i^0)$, Eq. (4.33) will be written as a sum of four terms, proportional to $\prod_{i=1}^3 dp_i / p_i^{3/2}$, multiplied by the factors listed in Eq. (4.35). The first three terms in Eq. (4.35) carry the contributions of tachyons propagating in loops: since we wish to excise tachyons from the spectrum, we need to implement a GSO projection in such a way that these three terms vanish. This is achieved by simply averaging the amplitude over the four spin structures $(\varsigma_1, \varsigma_2) \in \{(0, 0), (1, 0), (0, 1), (1, 1)\}$; one clearly sees that the first three terms in Eq. (4.35) vanish, while the fourth term is independent of $\varsigma$ and thus unaffected. Therefore the GSO-projected amplitude is free of tachyons while the massless sector is intact, as desired.
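The mechanism can be illustrated in two lines: any term weighted by a phase $e^{i\pi\varsigma_1}$, $e^{i\pi\varsigma_2}$ or $e^{i\pi(\varsigma_1+\varsigma_2)}$ averages to zero over the four spin structures, while a phase-free term survives. That these are precisely the phases carried by the three tachyonic terms is an assumption here, suggested by the $q_i$ factorizing the NS sewing parameters $e^{i\pi\varsigma_i} k_i^{1/2}$.

```python
# Hedged sketch: averaging over the four NS spin structures kills any term
# weighted by exp(i*pi*s1), exp(i*pi*s2) or exp(i*pi*(s1+s2)) -- assumed here
# to be the phases carried by the three tachyonic terms of Eq. (4.35) -- while
# leaving a spin-structure-independent term untouched.
import cmath

structures = [(0, 0), (1, 0), (0, 1), (1, 1)]

def gso_average(weight):
    return sum(weight(s1, s2) for s1, s2 in structures) / 4

for phase in (lambda s1, s2: cmath.exp(1j*cmath.pi*s1),
              lambda s1, s2: cmath.exp(1j*cmath.pi*s2),
              lambda s1, s2: cmath.exp(1j*cmath.pi*(s1 + s2))):
    assert abs(gso_average(phase)) < 1e-12     # tachyonic terms average to zero

assert gso_average(lambda s1, s2: 1) == 1      # phase-free (massless) term survives
```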
We are now in a position to take the field theory limit for the symmetric degeneration. The only missing ingredient is the normalization factor $\mathcal{N}_h^{(\epsilon)}$ introduced in Eq. (3.3). It is expressed in terms of $C_h$, the normalization factor for an $h$-loop string amplitude in terms of the $d$-dimensional Yang-Mills coupling $g_d$, calculated in Appendix A of Ref. [68]; the explicit form for $h = 2$ is given in Eq. (4.37). The denominator in Eq. (4.36) arises from the Born-Infeld contribution to the normalization of the boundary state (see for example Ref. [70]). It does not contribute to the field theory limit, since $\cos(\pi\epsilon_1)\cos(\pi\epsilon_2) = 1 + \mathcal{O}(\alpha'^2)$, after expressing the twists $\epsilon_i$ in terms of the background field strengths via Eq. (4.26).
Applying the GSO projection to Eq. (4.33), and using $dp_i/p_i = -dt_i/\alpha'$, we finally find that the QFT limit of the partition function can be represented succinctly; the limit appearing on the second line of the resulting expression is finite after Berezin integration. In order to see that, and in order to give our results as explicitly as possible, we introduce a compact notation for the coefficients in the expansions of the functions $f$ (for simplicity we omit their arguments). With these definitions, performing the Berezin integration is a simple matter of combinatorics. We can read off the various terms in the integrand by picking the coefficients of the appropriate factors of $q_i$ from $F^{(\epsilon)}$ in Eq. (4.29), $F_\perp$ in Eq. (4.27), $F_{\rm scal}$ in Eq. (4.31) and $d\mu_2\, F_{\rm gh}$ in Eq. (4.18), and then selecting the coefficient of $\hat\theta_{12}\hat\phi_{12}$, divided by $\alpha'$; the other terms in the integrand can be obtained from the above by cyclic symmetry. We conclude by noting that Eq. (4.41) does not give the complete two-loop contribution to the vacuum amplitude with this topology, since the string theory calculation distinguishes the three D-branes where the world-sheet boundaries are attached. This is reflected in the integration region over the Schwinger parameters $t_i$, already discussed in Ref. [15]: they are not integrated directly in the interval $0 < t_i < \infty$, as would be the case in field theory, but they are ordered, as $0 < t_3 < t_2 < t_1 < \infty$. In order to recover the full amplitude, with the correct color factors and integration region, one must sum over all possible attachments of the string world-sheet to the D-branes, effectively summing over the different values of the background fields $B_i$ and masses $m_i^2$. In the absence of external fields, this sum amounts just to the introduction of a symmetry and color factor; for non-vanishing $B_i$, it reconstructs the correct symmetry properties of the amplitude under permutations.
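The combinatorics of the Berezin integration can be mimicked with a minimal model of the even subalgebra generated by the nilpotent product $c \equiv \hat\theta_{12}\hat\phi_{12}$ (so $c^2 = 0$): every factor is a pair $(f^0, f^{\theta\phi})$ standing for $f^0 + c\,f^{\theta\phi}$. The sign convention $\int d\theta\, d\phi\; \theta\phi = -1$ used below is the one stated later for the integrals $I_2$ and $I_3$; the class and variable names are illustrative.

```python
# Hedged sketch: model functions f = f0 + c*f1 of the nilpotent combination
# c = theta*phi (c**2 = 0) as pairs (f0, f1).  Products obey
# (f0,f1)*(g0,g1) = (f0*g0, f0*g1 + f1*g0), and the Berezin integral returns
# -f1, with the convention  int dtheta dphi theta*phi = -1.
import sympy as sp

class Even:
    """Element f0 + theta*phi*f1 of the even part of the Grassmann algebra."""
    def __init__(self, f0, f1=0):
        self.f0, self.f1 = sp.sympify(f0), sp.sympify(f1)
    def __mul__(self, other):
        return Even(self.f0*other.f0, self.f0*other.f1 + self.f1*other.f0)
    def berezin(self):
        return -self.f1

a0, a1, b0, b1, c0, c1 = sp.symbols('a0 a1 b0 b1 c0 c1')
f, g, h = Even(a0, a1), Even(b0, b1), Even(c0, c1)

# in a product of three factors, exactly one factor contributes its theta*phi
# part -- the combinatorics behind the sum over coefficients in the text
result = (f*g*h).berezin()
assert sp.expand(result + a1*b0*c0 + a0*b1*c0 + a0*b0*c1) == 0

# nilpotency: the theta*phi parts of two factors cannot both survive
prod = Even(0, a1)*Even(0, b1)
assert prod.f0 == 0 and prod.f1 == 0
```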
The incomplete degeneration
In the last three sections, 4.2, 4.3 and 4.4, we have given the tools to compute the field theory limit of the partition function in the vicinity of the symmetric degeneration, see Fig. 6a: our final result is summarized in Eq. (4.41). The field theory two-loop effective action, however, includes also the Feynman diagrams with a quartic vertex depicted in the last two lines of Fig. 1.
The main feature of vacuum graphs with a four-point vertex, which drives the corresponding choice of parametrization for the neighborhood of moduli space depicted in Fig. 6b, is the fact that such graphs have only two propagators, each one encompassing a complete loop, and furthermore they are symmetric under the exchange of the two loops. It is natural therefore to associate to each propagator a Schwinger parameter linked to the Schottky multiplier of the corresponding string loop. The fact that there are no further Schwinger parameters implies also that the third bosonic modulus must be integrated over its domain, except for a small region around each boundary. We therefore call the configuration depicted in Fig. 6b the incomplete degeneration. To compute the field theory limit for the incomplete degeneration, we must retrace our steps back to Section 4.1, where the various factors in the partition function were expressed in terms of $k_i$ and $u$ (or $y$). We then relate each multiplier $k_i$ to the corresponding Schwinger parameter $t_i$, and replace the twists $\epsilon_i$ according to Eq. (4.26).

Figure 6: The symmetric (Fig. 6a), incomplete (Fig. 6b), and separating (Fig. 6c) degenerations of the two-loop vacuum amplitude.
As may be expected from the simplicity of the target graph, the string partition function simplifies drastically when the $\alpha' \to 0$ limit is taken in this way. One finds for example that the determinant of the (twisted) period matrix reduces to a simple product, while the hyperbolic functions appearing in the field theory limit arise in a direct way from simple combinations of exponentials. The resulting expressions are very simple because, with no Schwinger parameter associated to $u$, factors of the form $u^{\pm \epsilon_i}$ do not contribute to the field theory limit. This is what makes it possible to perform the integration over the third bosonic modulus, and over the two fermionic moduli: indeed, in the parametrization considered here, the entire partition function can be written explicitly, in the $\alpha' \to 0$ limit, in terms of just three simple integrals over the non-degenerating coordinates of super-moduli space, as given, after GSO projection, in Eq. (4.46). As the notation suggests, Eq. (4.46) in principle contains contributions from both the incomplete and the separating degenerations, and we now turn to the problem of disentangling them. We also see that in order to complete the calculation one just needs to determine three numerical constants, given by the integrals over the non-degenerating super-moduli defined in Eq. (4.47). To determine the domain of integration $\mathcal{M}_{1|2}$ in Eq. (4.47), and to identify the different degeneration limits, note that the separating, symmetric and incomplete degenerations all come from the region of super-moduli space in which the two Schottky multipliers $k_1$ and $k_2$ are small. In this limit, we can think of super-moduli space as a $1|2$-dimensional space parametrized by $(u \,|\, \theta, \phi)$. The separating degeneration corresponds to the limit $y \to 0$, while the symmetric degeneration corresponds to the limit $u \to 0$, and the incomplete degeneration comes from the region of super-moduli space interpolating between the two limits. As pointed out in Refs.
[71, 21], however, this simple characterization must be made more precise, in particular with regard to the choice of parameters near the two degenerations.
First of all, let us briefly consider the first term in braces on the right-hand side of Eq. (4.46). This dominates in the limit $y \to 0$, and we expect it to represent the contributions of the one-particle-reducible (1PR) Feynman diagrams, which we neglect. We can now concentrate on the evaluation of the integrals relevant for our purposes, which are $I_2$ and $I_3$ in Eq. (4.47). They can be calculated using Stokes' theorem for a super-manifold with a boundary (see Section 3.4 of [71]), since the integrands are easily expressed as total derivatives, as in Eq. (4.48). These expressions mean that the corresponding integrals are localized on the boundary of $\mathcal{M}_{1|2}$, which consists of two loci associated with the two distinct ways in which the double-annulus world-sheet can completely degenerate: the symmetric degeneration of Fig. 6a, and the separating degeneration of Fig. 6c. To use Stokes' theorem, it is important to characterize precisely the $0|2$-dimensional boundary of the super-moduli-space region over which we are integrating. More precisely, we need to find bosonic functions of the world-sheet moduli, $\xi_i(u, \theta, \phi)$, defined near the boundaries of $\mathcal{M}_{1|2}$, such that the vanishing of $\xi_i$ defines a compactification divisor $D_i$. Such functions are called canonical parameters in Section 6.3 of [21]. It is important to note that, for singular integrands such as those of $I_1$ and $I_3$, it is not sufficient to define the canonical parameter $\xi$ up to an overall factor, which may include nilpotent terms. For example, if we attempt to rescale $\xi' = (1 + \theta\phi)\,\xi$, then $\log \xi' = \log \xi + \theta\phi$, so that the Berezin integral $\int d\theta\, d\phi\, \log \xi'$ does not coincide with $\int d\theta\, d\phi\, \log \xi$.
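This sensitivity to nilpotent rescalings can be made concrete in a pair representation $(f^0, f^{\theta\phi})$ for $f^0 + \theta\phi\,f^{\theta\phi}$, using the sign convention $\int d\theta\,d\phi\;\theta\phi = -1$ adopted below; all function names are illustrative.

```python
# Hedged sketch: since c = theta*phi is nilpotent, log(f0 + c*f1) expands
# exactly as log(f0) + c*f1/f0.  Hence log((1 + c)*xi) = log(xi) + c, and the
# Berezin integrals of log(xi) and log((1+c)*xi) differ -- the point made in
# the text about canonical parameters.  Convention: int dtheta dphi c = -1.
import sympy as sp

xi = sp.symbols('xi', positive=True)

def log_pair(f0, f1):
    # exact logarithm of f0 + c*f1 in the pair representation
    return (sp.log(f0), f1/f0)

def berezin(pair):
    return -pair[1]

log_xi = log_pair(xi, 0)            # log(xi)
log_rescaled = log_pair(xi, xi)     # log((1 + c)*xi) = log(xi + c*xi)

assert log_rescaled == (sp.log(xi), 1)
assert berezin(log_xi) == 0
assert berezin(log_rescaled) == -1  # the two Berezin integrals disagree
```

This is exactly why the canonical parameter must be fixed including its nilpotent part, and not merely up to an overall factor.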
In the small-$u$ region, the proper choice of the canonical parameter $\xi_{\rm sym}$ is dictated by our parametrization of the symmetric degeneration: we must take $\xi_{\rm sym} = p_3$, as defined in Eq. (4.12), in order to properly glue together the two regions. Although $p_3$ and the cross-ratio $u$ vanish at the same point, they are related by a non-trivial rescaling at leading order in the multipliers, which affects the Berezin integral of Eq. (4.48), as discussed above. Note that, not having introduced a parametrization for the separating degeneration, we would not have a similar guideline in the small-$y$ region. Furthermore, the fact that the corresponding field theory diagram needs to be regulated in order to make sense of the vanishing momentum flowing in the intermediate propagator introduces an ambiguity also in the field theory result. With this choice of parametrization, we can now use Stokes' theorem to determine the values of $I_2$ and $I_3$. Taking $\xi_{\rm sep} = y$ as a canonical parameter for the separating degeneration, we find the value of $I_2$, where we used $\int d\theta\, d\phi\; \theta\phi = -1$; similarly, one finds $I_3 = 0$. Inserting these results into Eq. (4.46), discarding the separating degeneration, and introducing the overall normalization given in Eq. (4.37), we obtain our final expression, Eq. (4.52), for the contribution of diagrams with a four-point vertex to the field-theory effective action. In order to identify the contributions of individual Feynman diagrams to Eq. (4.52), we can retrace the steps of the calculation and assign each term in our result to the appropriate world-sheet conformal field theory, as we did for the symmetric degeneration in Eq. (4.41). We find that we can rewrite Eq. (4.52) in a factorized form, where the superscripts denote the powers of $k_i^{1/2}$ ($i = 1, 2$) from which the coefficients were extracted, and we have omitted the arguments of the functions $f$ for simplicity. A few remarks are in order.
First of all, we note that $f^{11}_{\rm gh}$ vanishes; this corresponds to the fact that, in the infinite product in $F_{\rm gh}(k_i, \eta)$ in Eq. (3.11), $n$ ranges from 2 to $\infty$, not from 1 to $\infty$ as in the case of $F_{\rm gl}$ and $F_{\rm scal}$. As a consequence, there is no term proportional to $k^{1/2}(S_1 S_2)$ in the partition function for the ghost systems. We will see that this corresponds to the fact that there is no quartic ghost vertex in the associated Yang-Mills theory. Next, we note that all terms associated with the four-point vertex diagram are not factorizable into the product of two contributions, proportional to $k_1^{1/2}$ and $k_2^{1/2}$ respectively. If, on the other hand, we had traced the origin of the terms associated with the separating degeneration, and proportional to the integral $I_1$, we would have found that the factor multiplying $1/y$ in Eq. (4.46) does factorize in this way. This means that no contributions arise from the Schottky group elements $S_1 S_2$ and $S_1^{-1} S_2$, which would imply a genuine correlation between the two loops. Rather, as expected, these terms are simply the product of factors arising from individual disconnected loops. Finally, we note that the result $I_3 = 0$ is crucial in order to recover the correct field theory limit: indeed, as will be verified in the next section and shown in Appendix C, no field theory diagram yields hyperbolic functions with the parameter dependence displayed on the last line of Eq. (4.46). We see once again that the field theory limit, once the contributions of individual diagrams have been identified, provides non-trivial checks of the procedures used to perform the integration over super-moduli.
Yang-Mills theory in the Background Field Gervais-Neveu gauge
In order to make a precise comparison between string theory and field theory at the level of individual Feynman diagrams, as was done in a simple case in Ref. [11], we need a precise characterization of the field-theory Lagrangian we are working with, including gauge fixing and ghost contributions. In principle, this presents no difficulties, since our target is a $U(N)$ Yang-Mills theory, albeit with a rather special gauge choice. There are however a number of subtleties, ranging from the special features of the background field framework, to issues of dimensional reduction, and to the need to break the gauge symmetry spontaneously in order to work with well-defined Feynman diagrams in the infrared limit, which altogether lead to a somewhat complicated and unconventional field theory setup. We will therefore devote this section to a detailed discussion of the field theory Lagrangian which arises from the field theory limit of our chosen string configuration. The first layer of complexity is due to the fact that the string theory setup naturally corresponds to a field theory configuration with a non-trivial background field. In general, such a background field breaks the gauge symmetry: in our case, since we are working with mutually commuting gauge fields with constant field strengths, and we have a string configuration with separated D-brane sets, one will generically break the $U(N)$ gauge symmetry down to $U(1)^N$. We will have to adjust our notation to take this into account. Notice also that our background fields break Lorentz invariance as well, since only certain polarizations are non-vanishing. As a consequence, the polarizations of the quantum field will also be distinguished as parallel or perpendicular to the given background.
Furthermore, it is interesting to work in a generic space-time dimension $d$, and we will find it useful to work with massive scalar fields giving infrared-finite Feynman diagrams. We will therefore work with a $d$-dimensional gauge theory obtained by dimensional reduction from the dimension $D > d$ appropriate to the string configuration. This yields $n_s = D - d$ adjoint scalar fields minimally coupled to the $d$-dimensional gauge theory, and we will choose our background fields such that these fields acquire a non-vanishing expectation value, giving mass to some of the gauge fields.
Finally, as suggested originally in Ref. [45], and recently confirmed by the analysis of Ref. [11], covariantly quantized string theory picks a very special gauge in the field theory limit: a background field version of the non-linear gauge first introduced by Gervais and Neveu in Ref. [16]. This gauge has certain simplifying features: for example, at tree level and at one loop it gives simplified color-ordered Feynman rules, which considerably reduce the combinatoric complexity of gauge-theory amplitudes [45]. Only at the two-loop level, however, does the full complexity of the non-linear gauge fixing become apparent. One effect, for example, is that the diagonal $U(1)$ 'photon', which ordinarily is manifestly decoupled and never appears in 'gluon' diagrams, in this case has non-trivial, gauge-parameter dependent couplings to $SU(N)$ states, and the decoupling only happens when all relevant Feynman diagrams are summed.
In what follows, we adopt the following notations: we use calligraphic letters for matrix-valued $u(N)$ gauge fields, and ordinary capital letters for their component fields; we use $M, N, \ldots = 1, \ldots, D$ for Lorentz indices in $D$-dimensional Minkowski space before dimensional reduction, and $\mu, \nu, \ldots = 1, \ldots, d$ for Lorentz indices in the $d$-dimensional reduced space-time; finally, $I, J, \ldots = 1, \ldots, n_s$ indices enumerate adjoint scalars, and $A, B, \ldots = 1, \ldots, N$ indices enumerate the components of $u(N)$ vectors and matrices. In this language, $\mathcal{A}_M$ will denote the $D$-dimensional classical background field, $\mathcal{Q}_M$ the corresponding quantum field, while $\mathcal{C}$ and $\bar{\mathcal{C}}$ are ghost and anti-ghost fields.
We will now proceed to write out the quantum lagrangian (including gauge-fixing and ghost terms) in terms of matrix-valued fields. We will then comment on the form taken by various terms in component notation, which is more directly related to the vertices appearing in diagrammatic calculations.
The $u(N)$ Lagrangian
We begin by constructing the $D$-dimensional Yang-Mills Lagrangian, which, in the presence of a background gauge field, depends on the combination $\mathcal{A}_M + \mathcal{Q}_M$. The field-strength tensor $\mathcal{F}_{MN}$ can be expressed in terms of the covariant derivative of the quantum field with respect to the background field, where $\mathcal{F}_{MN}(\mathcal{A})$ is the field-strength tensor for the background field only, while $D^{\mathcal{A}+\mathcal{Q}}_M$ is the covariant derivative with respect to the complete gauge field. The classical Lagrangian for the quantum gauge field $\mathcal{Q}$ can then be written in terms of these objects, where Tr denotes the trace over the $u(N)$ Lie algebra, and we have removed terms independent of $\mathcal{Q}$, as well as terms linear in $\mathcal{Q}$, because they are not relevant for our effective action calculation. In anticipation of the string theory results, we now wish to fix the gauge using a background field version of the non-linear gauge condition introduced by Gervais and Neveu in Ref. [16], setting the gauge-fixing function $\mathcal{G}$ as in Eq. (5.3), where $\gamma$ is a gauge parameter. The gauge-fixing Lagrangian $\mathcal{L}_{\rm gf}$ is then given by Eq. (5.4). Notice that the overall covariant gauge-fixing parameter which would appear in front of Eq. (5.4) has been set equal to one. Note also that this gauge fixing modifies not only the gluon propagator, as expected, but also the three- and four-gluon vertices. In particular, the symmetric nature of the quadratic term in the gauge-fixing function $\mathcal{G}$ will generate Feynman rules involving the symmetric $u(N)$ tensors $d^{abc}$, which in turn will induce spurious couplings between gluons and $u(1)$ photons. Finally, we need the Lagrangian for the Faddeev-Popov ghost and anti-ghost fields, $\mathcal{C}$ and $\bar{\mathcal{C}}$. It is defined as usual in terms of the gauge transformation of the gauge-fixing function, using $\mathcal{C}$ as the parameter of the gauge transformation; the result is the ghost Lagrangian of Eq. (5.6). This completes the construction of the pure Yang-Mills Lagrangian in $D$ dimensions; next, we want to dimensionally reduce it to $d$ dimensions.
The reduction splits the $D$-dimensional gauge fields (both classical and quantum) into a $d$-dimensional field and $n_s \equiv D - d$ adjoint scalars, with $\mu = 0, \ldots, d-1$ and $I = 1, \ldots, n_s$, and we have assumed that the classical background scalars take on constant values $M_I$, which we will use to spontaneously break the gauge symmetry and give masses to selected components of the gauge field. Similarly, since we are neglecting the dependence of the fields on the reduced coordinates, the covariant derivative splits into a $d$-dimensional covariant derivative and a pure commutator with the background scalar fields. Indeed, the $D$-dimensional d'Alembertian differs from the $d$-dimensional one by a mass term for any field $X$. Notice that in this section we work with the metric $\eta = {\rm diag}(+, -, \ldots, -)$. However, when summing over reduced dimensions, our summation convention does not include the negative signature of the metric, and must be understood simply as a summation over flavor indices $I$. With these conventions, the gauge condition in Eq. (5.3) takes a correspondingly reduced form. When these further changes are implemented in the Lagrangian, a number of non-trivial interaction vertices are generated. It is then useful to organize the dimensionally reduced Lagrangian as a sum of terms with different operator content, as in Eq. (5.11). As is typical in cases of broken symmetry, the Lagrangian in Eq. (5.11) displays a variety of interactions, and is considerably more intricate than the combination of Eqs. (5.2), (5.4) and (5.6). In order to compute Feynman diagrams, and to compare with the string theory calculation, it is useful to write down an expression for the Lagrangian in terms of component fields as well. In order to do so, we now assume that the matrices $\mathcal{A}_\mu$ and $M_I$ are all mutually commuting: we can then pick a basis of $u(N)$ in which they are diagonal.
In this basis, we write Similarly, we write the quantum matrix fields as all satisfying X AB = (X BA ) * , since u(N ) matrices are Hermitian; the factors of 1/ √ 2 ensure that the matrix element fields are canonically normalized. Notice that, thanks to the diagonal form of the classical field A µ , the covariant derivative D µ does not mix matrix entries. Indeed, defining one can write where indices on the right-hand side are not summed. In particular, the covariant derivative of diagonal entries reduces to the ordinary derivative. Motivated by this, we can define a covariant derivative D µ acting directly on matrix entries, as opposed to u(N ) elements. Suppressing the A, B indices on the derivative symbol, we write Note that D µ is a derivation, obeying the Leibniz rule where again on the right-hand side the indices A and B are fixed and not summed. As an example, the term quadratic in Φ in Eq. (5.11) can be written in component notation as which is the correctly normalized quadratic part of the Lagrangian for N massless real scalar fields φ AA , and 1 2 N (N − 1) complex scalars φ AB , A < B, with mass |m AB |. To give a second example, the gauge-fixing condition in component notation reads where C is summed over but there is no summation over A or B. Note that after dimensional reduction and spontaneous symmetry breaking the gauge fixing has become more unconventional from the d-dimensional point of view, involving scalar fields as well as gauge fields, and mass parameters. We conclude this section by giving the explicit expression for the background field that we will be working with. We choose it so that, for each A, the abelian field strength corresponds to a constant u(1) magnetic field in the {x 1 , x 2 } plane. A possible choice, already employed in Ref.
[11], is where we defined the antisymmetric tensor We now turn to the evaluation of selected two-loop vacuum diagrams, contributing to the effective action, which we can then compare with the corresponding expressions derived from string theory. As a preliminary, we collect useful expressions for the relevant coordinate-space propagators in the presence of the background field.
Propagators in a constant background field
The quantum field theory objects that we wish to compute, in order to compare with string theory results, are two-loop vacuum diagrams contributing to the effective action, computed with our chosen background field, Eq. (5.22). At two loops, these diagrams can be evaluated in a straightforward manner in coordinate space, directly from the path integral definition of the generating functional, where J µ , J I , η and η are matrix sources for the fields in the complete Lagrangian L. The only non-trivial step is the computation of the quantum field propagators in the presence of the background field: diagrams are then simply constructed by differentiating the free generating functional with respect to the external sources. For a background field of the form of Eq. (5.22), the solution is well-known for the scalar propagator (see, for example, Ref. [15]): we briefly describe it here, and discuss the generalization to vector fields. For scalar fields, the propagator in the presence of the background in Eq. (5.22) can be expressed in terms of a heat kernel as In Eq. (5.26), we have introduced the tensor where the projectors η µν and η µν ⊥ identify components parallel and perpendicular to the background field, and are given by The propagator G AB (x, y) in Eq. (5.25) satisfies where we have indicated explicitly the variable on which the derivatives act. In fact, covariant derivatives act on a propagator with color indices (AB) as For real scalar fields, or vanishing backgrounds, one recovers the well-known expression for the scalar propagator as a Schwinger parameter integral, Ghost fields are scalars, and they share the same propagator. For gluons, on the other hand, the background field strength F µρ enters the kinetic term, given in the first line of Eq. (5.11). The propagator must then satisfy To diagonalize this equation, one can introduce the projection operators It is then easy to show that the function satisfies Eq.
(5.32), provided the functions G AB ± (x, y) satisfy Eq. (5.36), which simply gives a scalar propagator with a mass shifted by the appropriate background field. It is therefore easy to write the solution for the complete gluon propagator explicitly as in Eq. (5.37). Note that this can also be written in the more compact and elegant form where S αβ 1 are the Lorentz generators in the spin-one representation appropriate for gauge bosons, The propagator in Eq. (5.38) naturally generalizes to other representations of the Lorentz group, simply by changing the form of the generators. The spin one-half case, where S αβ 1/2 = i[γ α , γ β ]/4, will be useful for example when studying the gluino contribution to the effective action in the supersymmetric case.
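The projector construction can be illustrated numerically in the simplest setting (a sketch under our own conventions, not code from the paper): take F = B ε in the {x 1 , x 2 } plane, with ε the 2×2 antisymmetric symbol. The operators (1 ± iF/B)/2 are then complementary projectors, and any function of F, such as the spin-one exponential factor, is diagonalized by them:

```python
import numpy as np

# Illustrative values for the background strength, coupling and Schwinger time
B, g, t = 0.7, 1.3, 0.4
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # 2x2 antisymmetric symbol
F = B * eps
I2 = np.eye(2)

# Complementary projectors built from F (eps has eigenvalues -i, +i)
Pp = (I2 + 1j * F / B) / 2
Pm = (I2 - 1j * F / B) / 2
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, 0) and np.allclose(Pp + Pm, I2)

# exp(-2igFt) decomposes onto the two projected subspaces:
# since eps^2 = -1, exp(c*eps) = cos(c) I + sin(c) eps for any c
c = -2j * g * B * t
expF = np.cos(c) * I2 + np.sin(c) * eps
decomp = np.exp(-2 * g * B * t) * Pp + np.exp(2 * g * B * t) * Pm
assert np.allclose(expF, decomp)
```

On each projected subspace F acts as a number (∓iB), so matrix functions of F reduce to scalar functions, which is what makes the projector form of the gluon propagator convenient.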
Selected two-loop vacuum diagrams
We will now illustrate the structure of the field theory calculation of the effective action by outlining the calculation of a selection of the relevant two-loop diagrams. A complete list of the results for all 1PI diagrams depicted in Fig. 1 is given in Appendix C.
We begin by considering the ghost-gluon diagram given by the sum of Eq. (C.10) and Eq. (C.11), which we denote by H b (B AB , m AB ). The relevant interaction vertex, involving ghost, antighost and gluon fields, arises from the sixth line in Eq. (5.11), and may be written explicitly in component language using Eq. (5.13). Upon integrating by parts, it can be rewritten as Sewing two copies of this vertex together to obtain the desired diagram, one first of all observes that terms linear in the gauge parameter γ, which involve double derivatives of scalar propagators, cancel out upon contracting color indices. Next, one notices that some of the color contractions lead to a non-planar configuration, which would correspond to an open string diagram with only one boundary. We are not interested in these contributions, since the corresponding diagram is built of propagators which are neutral with respect to the background field, and does not contribute to the effective action. Furthermore, we do not expect to obtain this diagram from our string configuration, since we start with a planar worldsheet. Discarding non-planar contributions, one finds that the remaining planar terms can be written as where Σ(gBt) is defined in Eq. (5.27). The integrand in Eq. (5.41) is then proportional to the product of three heat kernels, which we write as .
In Eq. (5.43) we have simplified the notation by using a single index i = 1, 2, 3 in place of the pairs of color indices (AB), (BC), (CA), respectively. Furthermore, we have defined Σ µν = 3 i=1 Σ µν (gB i t i )/t i , and we have taken advantage of the fact that the complex phases in each K i (x, y; t i ) cancel, since 3 i=1 B i = 0. At this point one sees that the integrand in Eq. (5.41) is translationally invariant, depending only on the combination z = x − y. We can then, for example, replace the integral over x with an integral over z, while the integral over y gives a factor of the volume of spacetime, which we will not write explicitly. Finally, one needs to evaluate the Gaussian integral Note that taking the inverse of Σ µν is trivial because it is a diagonal matrix, which can be written as in Eq. (5.45), where ∆ 0 and ∆ B were defined in Eq. (4.28) and Eq. (4.30) respectively, while η ⊥ µν and η µν are given in Eq. (5.28). One then finds Putting together all these ingredients, one may evaluate Eq. (5.41). Using the symmetry of the Schwinger parameter integrand under the exchange t 1 ↔ t 2 one can write which can be directly matched to the string theory result. We conclude this section by briefly describing the calculation of two further Feynman diagrams, which arise in our theory because of the pattern of symmetry breaking and dimensional reduction.
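Because Σ µν is diagonal, the Gaussian integral over z factorizes into one-dimensional integrals, and both the inverse and the determinant of Σ are immediate. A quick numerical sketch (illustrative eigenvalues standing in for the ∆ 0 and ∆ B entries, not the paper's actual values):

```python
import numpy as np

# Illustrative diagonal entries of Sigma
sigma = np.array([0.8, 1.7, 2.4, 0.5])

# One-dimensional Gaussians, int dx exp(-s x^2) = sqrt(pi/s),
# computed by brute-force quadrature on a fine uniform grid
x = np.linspace(-12.0, 12.0, 400_001)
dx = x[1] - x[0]
one_d = np.array([np.exp(-s * x**2).sum() * dx for s in sigma])
assert np.allclose(one_d, np.sqrt(np.pi / sigma), rtol=1e-6)

# The multi-dimensional integral is the product of the 1-d factors:
# int d^d z exp(-z^T Sigma z) = pi^(d/2) / sqrt(det Sigma)
full = one_d.prod()
assert np.isclose(full, np.pi**2 / np.sqrt(sigma.prod()), rtol=1e-5)
```

The factorization is exactly why the parallel/perpendicular decomposition of Σ makes the evaluation of Eq. (5.44) straightforward.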
Labeling this diagram as H d (B AB , m AB ), we find for it the coordinate space expression where we relabeled double indices as was done for Eq. (5.47). Finally, we briefly consider a diagram with a quartic vertex: the figure-of-eight scalar self-interaction shown in Eq. (C.23), which we label E i (B k , m k ). The relevant interaction term in the Lagrangian comes from L Φ 4 in Eq. (5.11) and can be written as which immediately gives Contracting flavor indices we get, as expected, the product of two one-loop integrals. The diagrams in Eqs. (C.1)-(C.22) can be calculated similarly. One easily sees that all these results, and those for the remaining diagrams, given in Appendix C, are directly comparable with the ones obtained from the field theory limit of the string effective action.
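The one-loop ingredients entering such products have a simple closed form that is easy to check numerically. As a hedged sketch (d = 1, Euclidean conventions, illustrative values; not the paper's own computation), the Schwinger-parameter representation of the massive scalar propagator at zero background reproduces the familiar exponential:

```python
import numpy as np

# Check in d = 1 (Euclidean):
#   G(z) = int_0^inf dt (4 pi t)^(-1/2) exp(-z^2/(4t) - m^2 t)
#        = exp(-m |z|) / (2 m)
m, z = 1.0, 2.0
t = np.geomspace(1e-6, 60.0, 200_001)
f = (4 * np.pi * t) ** -0.5 * np.exp(-z**2 / (4 * t) - m**2 * t)
G = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule

exact = np.exp(-m * abs(z)) / (2 * m)
assert abs(G - exact) < 1e-6
```

In higher d the same representation holds with (4πt)^(−d/2), and the figure-of-eight diagram is simply the square of the coincident-point integral, as noted above.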
6 Discussion of results
Comparison between QFT and string theory
We have now assembled all the results that we need to establish and verify a precise mapping between the degeneration limits of the string world-sheet and the 1PI Feynman diagram topologies in the field theory limit. Furthermore, as announced, we can trace the contributions of individual string states propagating in each degenerate surface, and these can be unambiguously mapped to space-time states propagating in the field theory diagrams. This diagram-by-diagram mapping allows us in particular to confirm that covariantly quantized superstring theory naturally selects a specific gauge in the field theory limit, with the gauge condition given here in Eq. (5.3). More precisely, our string theory results for the symmetric degeneration are given in Eq. (4.41). We finally note that we can also characterize the contributions to all 1PI Feynman diagrams according to the Schottky multiplier they originate from. As an example, consider the infinite product over the Schottky group which arises from the determinant of the non-zero modes of the Laplacian, appearing in Eq. (3.11). Tracing the gluon contributions to different Feynman diagrams back to that product, one may verify that all terms appearing in 1PI diagrams originate from at most one value of the index α in the product. More precisely, gluon contributions in Eq. (4.41) come from T α = {S 1 , S 2 , S −1 1 S 2 }, and if, say, a factor of k 1/2 1 = √ p 1 √ p 3 comes from the infinite product, then the necessary factor of √ p 2 must come from elsewhere in the amplitude. On the other hand, all terms in Eq. (4.53) come from T α = S 1 S 2 . This is not surprising from a world-sheet point of view: in fact, one may recall from Fig. 5 that S 1 S 2 corresponds to a homology cycle which passes around both handles with a self-intersection between them, and furthermore this is the only Schottky group element with this property which survives in the field theory limit.
We see that this homological property is directly related to the graphical structure of the resulting Feynman diagram.
Comparison with bosonic string theory
It is interesting to compare our results with the field theory limit of the bosonic string effective action, which was studied in Refs. [35,15,67]. This comparison was discussed also in [11], but we are now in a position to make a more detailed analysis. Bosonic strings are clearly a simpler framework, since the world-sheet is an ordinary two-dimensional manifold, and not a super-manifold: one can then use the techniques applying to ordinary Riemann surfaces, and specifically the (purely bosonic) Schottky parametrization discussed in detail for example in Ref. [11]. At two loops, one can use the SL(2, R) invariance of the amplitude to choose the fixed points of the two Schottky group generators as The two-loop partition function can then be written as where µ denotes the set of bosonic moduli, µ = {k 1 , k 2 , u}, and one may compare with the corresponding two-loop superstring expression, given in Eq. (3.18). Note that the integration variable u is equal to the gauge-fixed value of a projective-invariant cross ratio of the fixed points. The various factors in Eq. (6.2), already discussed in [15,11], are given by Here τ is the period matrix of the Riemann surface, whose expression in the Schottky parametrization can be found, for instance, in Eq. (A.14) of [31]. Similarly, τ is the twisted period matrix, the bosonic equivalent of τ , computed here in Appendix B. The most obvious difference between the measures in Eq. (3.18) and Eq. (6.2) is the occurrence of half-integer powers of the multipliers in the former. In the bosonic string, the mass level of states propagating in the i-th loop increases with the power of k i , whereas in the superstring it increases with the power of k 1/2 i .
Necessarily, the propagation of a massless state must correspond to terms of the form dk i /k i = d log k i in the integrand, so tachyons propagating in loops correspond to terms of the form dk i /k 2 i in the bosonic theory and dk i /k 3/2 i in the superstring, as seen explicitly in Eq. (6.2) and in Eq. (3.18), respectively. These tachyonic states must be removed by hand in the bosonic theory, whereas they are automatically eliminated from the spectrum of the superstring upon integrating over the odd moduli and carrying out the GSO projection.
The identification of the symmetric degeneration proceeds in the same way for the two theories: in particular, the symmetry of Fig. 6a leads to the choice of the parameters p i , defined by Eq. (4.12). The cross-ratio u can then be written as in Eq. (6.4), and the integration measure takes the symmetric form It is interesting to note that in the field theory limit a number of contributions arise in slightly different ways in the two approaches. As an example, let us consider the twisted determinant of the period matrix for the bosonic string. To lowest order in k i , it is given by a combination of hypergeometric functions with argument u, in a manner similar to what happens for its supersymmetric counterpart. In the neighborhood of the symmetric degeneration, the hypergeometric functions can be expanded in powers of p 3 , and the bosonic string determinant reduces to where ∆ B is defined in Eq. (4.30). We note that the term proportional to p 3 in Eq. (6.6) receives a contribution from the series expansion of the hypergeometric functions, and contributes to Feynman diagrams with a gluon polarized parallel to the magnetic field propagating in the leg parametrized by t 3 .
For the superstring, the situation changes: one needs to keep terms only up to order q i , which implies that all the hypergeometric functions appearing in the expression for the supersymmetric twisted determinant can be replaced by unity. Since the first-order term in the expansion of the hypergeometric functions is crucial in order to get the correct coefficient of p 3 in Eq. (6.6), and in turn to match the field theory diagrams, it is necessary that terms proportional to q 3 arise from the nilpotent contributions to det (Im τ ). This is indeed what happens: expanding the supersymmetric twisted determinant in powers of q i one finds To be precise, we note that terms proportional to p 3 and q 3 θ̃ 12 φ̃ 12 in det (Im τ ) and det (Im τ ), respectively, also receive contributions from sources other than the ones we have discussed, specifically from factors of the form u n i i /2 , with n i integers. It is easy to see, however, that these contribute in the same way in the two cases, since in the bosonic case we have u n i i /2 = p while in the superstring case we get As a consequence, and as required, when all of the other factors are inserted, the coefficient of p 1 p 2 p 3 in the bosonic string measure is the same as the coefficient of q 1 q 2 q 3 θ̃ 12 φ̃ 12 in the superstring measure, and the same field theory amplitude is obtained for the massless sectors of the bosonic and supersymmetric theories. The terms computed in section 4.5, which correspond to field theory diagrams with the topology of the diagrams in the bottom two rows of Fig. 1 as well as 1PR graphs, also appear in the bosonic theory. In fact, one gets once more an expression of the form of Eq. (4.46) in the field theory limit, but the integrals I 1 , I 2 and I 3 get replaced by the expressions in Eq. (6.10). When using the bosonic string, the integral Ĩ 3 has to be discarded by hand, either by arguing that it corresponds to tachyon propagation, or by explicitly matching to the field theory result.
In the case of the superstring, on the other hand, the correct result emerges automatically, provided a consistent integration procedure in super-moduli space is followed. The complete answer for the four-point vertex diagrams emerges in both cases from the terms proportional to I 2 = Ĩ 2 = 1. As in the superstring case, the contribution Ĩ 1 is related to the separating degeneration.
The group of transformations which preserves the skew-symmetric quadratic form is OSp(1|2), which can be realised by GL(2|1) matrices of the form where the five even and four odd variables are subject to the two odd and two even constraints, so that the group has dimension 3|2. We can define a map from homogeneous coordinates to superconformal coordinates by Then, any other map of the form f • S, with S an OSp(1|2) matrix, also defines superconformal coordinates. Recall [75] that two C 1|1 charts (z|θ) and (ẑ|θ) belong to the same superconformal class whenever D θẑ =θ D θθ , where D θ is the super derivative, which satisfies D 2 θ = ∂ z . In particular, we can cover CP 1|1 with two superconformal charts z 1 = f (z 1 , z 2 |θ) t and z 2 = (f • I) (z 1 , z 2 |θ) t , where I is the OSp(1|2) matrix In general, one can find an OSp(1|2) matrix taking two given points |u = (u 1 , u 2 |θ) t and |v = (v 1 , v 2 |φ) t to |0 ≡ (0, 1|0) t and |∞ ≡ (1, 0|0) t ∼ I|0 respectively; one such matrix is We can further stipulate that a point |w = (w 1 , w 2 |ω) t be mapped to a point equivalent to (1, 1|Θ uwv ) t , where now there is no freedom in choosing the fermionic coordinate, which is therefore a super-projective invariant built out of the triple {|u , |v , |w }. The image of |w under Γ uv is then A general dilatation of the superconformal coordinates corresponds to the OSp(1|2) matrix 12 which has |0 and |∞ as fixed points. Note that for |ε| < 1, |0 is an attractive fixed point and |∞ is a repulsive fixed point.
We may use such a dilatation to scale the bosonic coordinates of Γ uv |w as desired, obtaining for example which gives us an explicit expression for the odd super-projective invariant Θ z 1 z 2 z 3 , as where z i = (z i |ζ i ), as in Eq. (3.4) of [64] and Eq. (3.222) of [20]. As with projective transformations, super-projective transformations preserve cross-ratios of the form One must keep in mind, however, that the simple relations between the three possible cross-ratios that can be constructed with four points are modified by nilpotent terms. For example, one finds that which can be checked quickly by noting that the left-hand side is OSp(1|2)-invariant, so that one can fix 3|2 parameters, for example by choosing |z 1 = |0 , |z 2 = |∞ , and |z 4 = (1, 1|φ) t . With these ingredients, it is now easy to construct a super-projective transformation with chosen fixed points and multiplier: using Γ uv to map a pair of points |u and |v to |0 and |∞ respectively, one easily verifies that the transformation has |u as an attractive fixed point and |v as a repulsive fixed point. Here k, for which we take |k| < 1, is called the multiplier 13 of the super-projective transformation S.
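These statements have simple bosonic counterparts that can be checked explicitly (a numerical sketch with illustrative values, ignoring the odd coordinates; not code from the paper): Möbius maps preserve cross-ratios, and conjugating a dilatation by the map Γ uv sending u → 0, v → ∞ produces the transformation with fixed points u, v and multiplier k.

```python
import numpy as np

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

# (i) cross-ratios are invariant under any projective map
M = np.array([[2.0, 1.0], [1.0, 1.0]])
zs = [0.3, -1.2, 2.5, 4.0]
assert np.isclose(cross_ratio(*zs),
                  cross_ratio(*[mobius(M, z) for z in zs]))

# (ii) S = Gamma_uv^{-1} . diag(k, 1) . Gamma_uv has fixed points u, v and
# multiplier k: (S(z)-u)/(S(z)-v) = k (z-u)/(z-v)
u, v, k = -1.0, 2.0, 0.25
G = np.array([[1.0, -u], [1.0, -v]])          # z -> (z-u)/(z-v)
S = np.linalg.inv(G) @ np.diag([k, 1.0]) @ G
z = 5.0
assert np.isclose((mobius(S, z) - u) / (mobius(S, z) - v),
                  k * (z - u) / (z - v))

# for |k| < 1, iterating S converges to the attractive fixed point u
for _ in range(60):
    z = mobius(S, z)
assert abs(z - u) < 1e-12
```

The super-projective case works the same way, with the extra odd coordinate tracked through the OSp(1|2) matrices.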
The bracket notation has the benefit of allowing us to write S as
A.2 The super Schottky group
Taking the quotient of CP 1|1 by the action of S, defined in Eq. (A.16), is equivalent to the insertion of a pair of NS punctures at |u and |v , which are then sewn together with a sewing parameter related to k. To see this, recall that sewing of NS punctures at P 1 and P 2 is defined by taking two sets of superconformal coordinates, say (x|θ) and (y|ψ), which vanish respectively at the two points, (x|θ)(P 1 ) = (0|0) = (y|ψ)(P 2 ), and then imposing the conditions [9] xy = − ε 2 , yθ = εψ , xψ = − εθ , θψ = 0 . where P and I are defined in Eq. (A.10) and Eq. (A.7), respectively. Then, by acting on both sides with f, we get (x|θ) ∼ (−ε 2 /y | εψ/y), which can easily be found to satisfy Eq. (A.18). Let us take (z|ζ) to be a superconformal coordinate on CP 1|1 , with (z|ζ)(P 1 ) = f|u and (z|ζ)(P 2 ) = f|v . Recall that the super-projective transformation Γ uv defined in Eq. (A.8) simultaneously maps |u and |v to |0 and |∞ , respectively. Then if |x = Γ uv • f −1 • (z|ζ) and |y = I −1 • Γ uv • f −1 • (z|ζ), we have that (x|θ) = f|x and (y|ψ) = f|y are local superconformal coordinates which vanish at P 1 and P 2 respectively, since I −1 |∞ = |0 and f|0 = (0|0). As a consequence, we can perform a NS sewing by making the identification in Eq. (A.19) using these expressions for |x and |y , and we find that we need to impose an equivalence relation on (z|ζ): This is what we wanted to show, with S matching the definition in Eq. (A.16), as long as we identify ε = −e iπς k 1/2 , so the NS sewing parameter is directly related to the Schottky group multiplier. Topologically, this sewing has the same effect (at least on the reduced space CP 1 ) as cutting out discs around u and v and identifying their boundaries, so this quotient adds a handle to the surface, increasing the genus by one.
To build a genus-h SRS, we may repeat this sewing procedure h times, choosing h pairs of attractive and repulsive fixed points u i = (u i |θ i ), v i = (v i |φ i ), and h multipliers k i , for i = 1, . . . , h. The super-Schottky group S h is the group freely generated by We then subtract the limit set Λ (the set of accumulation points of the orbits of S h ) from CP 1|1 , and we quotient by the action of the super Schottky group; this leads to the definition Note that the fixed points must be sufficiently far from each other, and the multipliers sufficiently small, to allow for the existence of a fundamental domain with the topology of CP 1|1 with 2h discs cut out. The fixed points u i , v i and the multipliers k i are moduli for the surface, but for h ≥ 2 we can use the OSp(1|2) symmetry to fix 3|2 of these: in our conventions, we take |u 1 = |0 , |v 1 = |∞ , |v 2 = |1, 1|Θ u 1 v 2 v 1 , so the super-moduli space M h has complex dimension 3h − 3|2h − 2.
To build multi-loop open superstring world-sheets in a similar way, we should start with the super-disc D 1|1 which can be obtained by quotienting CP 1|1 by the involution (z|θ) → (z * |θ), so that RP 1|1 becomes the boundary of the disk. A super-projective map will be an automorphism of D 1|1 if it preserves RP 1|1 , so we should build the super Schottky group from super-projective transformations whose fixed points u i , v i are in R 1|1 and whose multipliers k i are real. If we quotient D 1|1 − Λ by h of these, then we will get a SRS with (h + 1) borders and no handles. We can find k 1/2 α by setting the spin structure around the b-cycles to zero, ς = 0, then using the cyclic property of the supertrace 14 . This leads to a quadratic equation whose two roots are each other's inverses. We then pick k 1/2 α to be the root with absolute value smaller than one, where y was defined in Eq. (3.19). Note that k 1/2 (S −1 1 S 2 ) can be obtained from k 1/2 (S 1 S 2 ) by swapping the attractive and repulsive fixed points of S 1 in the cross-ratio, as might be expected.
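The bosonic shadow of this computation (a sketch, not the paper's derivation) is the familiar statement that for a projective transformation S the multiplier solves k + 1/k + 2 = (tr S)²/det S, a quadratic whose two roots are reciprocal, with the multiplier taken as the root of modulus less than one:

```python
import numpy as np

S1 = np.array([[3.0, 1.0], [1.0, 1.0]])   # illustrative generators
S2 = np.array([[2.0, -1.0], [0.5, 1.0]])
S = S1 @ S2                               # composite element, e.g. T = S1 S2

tr2 = np.trace(S) ** 2 / np.linalg.det(S)
roots = np.roots([1.0, 2.0 - tr2, 1.0])   # k^2 + (2 - tr2) k + 1 = 0
assert np.isclose(roots[0] * roots[1], 1.0)   # roots are each other's inverses
k = roots[np.argmin(np.abs(roots))]           # pick the root with |k| < 1

# cross-check: k is the ratio of the two eigenvalues of S
lam = sorted(np.linalg.eigvals(S), key=abs)
assert np.isclose(k, lam[0] / lam[1])
```

In the super case the trace is replaced by the supertrace of the OSp(1|2) matrix, but the reciprocal-root structure and the choice |k| < 1 are the same.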
A.2.2 The super period matrix
The super abelian differentials are an h-dimensional space of holomorphic volume forms, i.e. sections of the Berezinian bundle, defined on a genus-h SRS. They are spanned by Ω i , i = 1, . . . , h, which can be normalized by their integrals around the a-cycles, according to while their integrals around the b-cycles define the super period matrix Here a i and b i are closed cycles on the SRS which are projected to the usual homology cycles on the reduced space. The Ω i 's can be expressed in terms of the super Schottky parametrization as in Eq. (21) of Ref. [30]. In our current notation where dz = (dz | dψ), the sum (i) α is over all elements of the super-Schottky group which do not have S ±1 i as their right-most factor, D ψ is the superconformal derivative D ψ = ∂ ψ + ψ∂ z , and finally Φ is the matrix The matrix Φ has the property that, if f |z = (z|ψ), then for any w|, and furthermore for |w = (w, 1|ω) t and |z = (z, 1|ψ) t the map (z|ψ) → w|z | w|Φ|z is superconformal. The super period matrix can be computed as
Figure 7: Recall that the double of a Riemann surface Σ is defined by taking two copies of Σ, replacing the charts on one copy with their complex conjugates, and identifying corresponding points on the boundaries of the two copies [24].
The sum is over all elements of the super Schottky group which do not have S ±1 j as their left-most element or S ±1 i as their right-most element. It is not difficult to compute the leading terms of the super period matrix in the small-k i expansion. For h = 2, using the fixed points in Eq. (3.16), we find This completes our review of the super Schottky parametrization. Our next task is to introduce twisted boundary conditions corresponding to external background gauge fields.
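The small-k i behaviour of the period matrix can be illustrated with its bosonic leading order (a numerical sketch with illustrative values, not the paper's formulas): to leading order in the multipliers, 2πi ω i (z) reduces to (1/(z − η i ) − 1/(z − ξ i )) dz, and integrating this from any point z to S i (z), a representative of the b i -cycle, gives exactly log k i , reproducing the leading diagonal behaviour τ ii ≈ log k i /(2πi).

```python
import numpy as np

u, v, k = -1.0, 2.0, 0.05                 # fixed points and multiplier
G = np.array([[1.0, -u], [1.0, -v]])      # z -> (z-u)/(z-v)
S = np.linalg.inv(G) @ np.diag([k, 1.0]) @ G
a, b, c, d = S.ravel()

z0 = 7.0
z1 = (a * z0 + b) / (c * z0 + d)          # S(z0): endpoint of a b-cycle path

# int_{z0}^{S(z0)} (1/(z-u) - 1/(z-v)) dz = log of a ratio of cross-ratios,
# which collapses to log k by the defining property of the multiplier
integral = np.log((z1 - u) / (z1 - v)) - np.log((z0 - u) / (z0 - v))
assert np.isclose(integral, np.log(k))
```

The super period matrix expansion quoted above follows the same pattern, with corrections in powers of k 1/2 i and nilpotent terms in the odd moduli.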
Appendix B
B.1 The twisted determinant on a Riemann surface
The worldsheet theory of strings becomes 'twisted' in a number of contexts: for example, on orbifolds [76], in electromagnetic fields [77,13,14], or when an open string is stretched between a pair of D-branes which have a relative velocity [78] or are at an angle [79] with respect to each other. If we appropriately pair up the string spacetime coordinate fields X µ as complex coordinates (e.g. in our case, by setting Z ± = (X 1 ± iX 2 )/ √ 2), then in these backgrounds the worldsheet fields ∂Z ± are described by non-integer mode expansions on the upper-half-plane, as in Eq. (2.5). This means that on the double of the worldsheet (the double of the upper-half-plane is the complex plane; see Fig. 7), ∂Z ± (z, z) is no longer a single-valued field but rather it has a monodromy, changing by a factor of e ±2πi ε as it is transported anti-clockwise around z = 0.
Computing multi-loop amplitudes in these backgrounds is complicated, because it is not easy to use the sewing procedure when the states propagating along the plumbing fixture belong to a twisted sector. We must use, instead, the approach of [67]. This takes advantage of the fact that, although the ∂Z ± fields have non-trivial monodromies along the a i -cycles of the double worldsheet, the monodromies along the b j -cycles are trivial. Therefore, the idea is to build the double worldsheet by sewing along the b i -cycles, and then to perform the modular transformation swapping the a j - and b i -cycles with each other, in order to obtain the partition function expressed in terms of the Schottky moduli which are the appropriate ones for the worldsheet degeneration we are interested in.
From a more physical point of view, we are using the fact that in a different region of moduli space, the string diagram can be described as a tree-level interaction between three closed strings being emitted or absorbed by the D-branes. In terms of the closed string moduli, the string partition function is given by [67] The overall factor is just the Born-Infeld Lagrangian for the background fields on the D-branes, divided by √ G because all of the background-field independent factors are included in the measure [dZ] cl h . The factor R h (q i , ε), which depends on both the worldsheet moduli and the background field strengths, has a simple form so long as it is expressed in terms of the closed string Schottky group moduli, in other words, in terms of the multipliers of a Schottky group whose 2h Schottky circles are homotopic to the b i -cycles of the worldsheet instead of the a j -cycles which we have been using. Let us denote the multipliers of the elements T α of this Schottky group as q α ; then we have where the notation α has the same meaning as for the super Schottky group case, defined after Eq. (3.7). The modular transformation that swaps the a i - and b j -cycles, necessary to switch between the open string and closed string channels, acts non-analytically on the Schottky group multipliers. 15 We need to rewrite Eq. (B.2) in terms of the open string moduli, so the following strategy is used: R h (q i , ε) is re-expressed in terms of functions which transform in simple ways under modular transformations, the modular transformations are then carried out, and finally the results are re-expressed in terms of the open string Schottky moduli, allowing us to investigate the field theory limit. This analysis was performed in [33,35] and the results are summarized in section 2 of [15]. Assuming without loss of generality that the h-th twist is nonzero, ε h ≠ 0, the result is that where R h (k α , ε · τ ) is the same as in Eq.
(B.2), but with the closed string channel multipliers q α replaced with the open string channel multipliers k α , and the twists ε replaced with ε · τ , (τ ij ) being the period matrix computed in the open string channel. τ is the twisted period matrix, defined by (Eq. (3.24) of [67]) The Prym differentials Ω i appearing in the integrand in the first line of Eq. (B.4) are (h − 1) 1-forms with trivial monodromies along the a i -cycles and twists along the b i -cycles, i.e. they obey Ω i (S j (z)) = S j Ω i (z) ; (B.6) and they are regular everywhere. 16 Assuming without loss of generality that ε h ≠ 0, they can be expressed as (Eq. (3.11) of [67]): In Eq. (B.7) the ζ i are a basis of h 1-forms which are holomorphic everywhere except at some arbitrary base point z 0 , and which can be computed in terms of the Schottky group as (Eq. (3.15) of [67]) where the first sum is over all Schottky group elements which do not have S ±1 i as their right-most factor and the second sum is over all Schottky group elements. η i and ξ i are the attractive and repulsive fixed points of the generator S i , respectively. Note that the dependence on z 0 cancels out when the ζ i are combined as in Eq. (B.7). Also in Eq. (B.8), The other object appearing in the integrand in the first line of Eq. (B.4), ∆ i (z), is the vector of Riemann constants or Riemann class; it can be expressed in the Schottky parametrization as (Eq. (A.21) of [31]), where the second sum is over the full Schottky group and ω i are the abelian differentials (Eq. (A.10) of [31]). It is easy to check that the integrand of the first line of Eq. (B.4) has twists along the a i -cycles and trivial monodromies along the b i -cycles (therefore the integrand does not depend on the starting point w). For simplicity, from now on we focus only on the case h = 2, which yields det(Im τ ) = 1 2πi w Ω ε·τ (z) e 2πi ε · ∆(z) , (B.12) where Ω ≡ Ω 1 is the sole component of the Prym form. Instead of explicitly evaluating the integral over z in Eq.
(B.12), it is possible to find an alternative expression for det(Im τ ) in the following way. First of all, we recall the object D(ε) ij defined in Eq. (3.14) of [67]. For each i, j = 1, . . . , (h − 1), D ij (ε) is a spacetime rotation matrix; the i, j indices refer to worldsheet homology cycles. In the complex spacetime coordinates in which the background fields are diagonalized, D ij (ε) is diagonal with two non-trivial entries D(±ε) ij . For h = 2, the i, j indices can take only one value, so we only have one independent object D 11 (±ε) ≡ D(±ε). It is given by where γ P ≡ a 2 a 1 a 2 −1 a 1 −1 is the Pochhammer contour shown in Fig. 9, and Ω ε is the Prym differential which has trivial monodromies around the b i homology cycles and non-trivial monodromies around the a i homology cycles. Ω ε can be expressed with the Schottky group
Figure 9: The Pochhammer contour γ P = a 2 a 1 a 2 −1 a 1 −1 .
Figure 10: Our Pochhammer contour (Fig. 9) can be deformed arbitrarily close to four copies of the line interval [η 1 , η 2 ] ⊆ R, with each copy on a different branch of the Prym form Ω ε .
thanks to a relation derived in [67], given by Eq. (3.28) of that reference. In the case h = 2, it becomes simply where the second equality just expresses (τ ) 11 in terms of det(Im τ ) via Eq. (B.4). Note that γ P crosses each boundary of the worldsheet once in each direction, so it starts and ends on the same branch of Ω ε , and the integral in Eq. (B.13) is well-defined. Now we take the formulae in Eq. (B.7), Eq. (B.8) and Eq. (B.10) to get an expression for Ω ε (z) via Eq. (B.14), and interpret them as defining a one-form not on the worldsheet but on the complex plane, with poles at the Schottky group limit set. If we expand Ω ε (z) as a power series in k i , we see that at leading order it has poles only at the Schottky fixed points, so at leading order we are free to deform the Pochhammer contour through the Schottky circles and arbitrarily close to the line interval [η 1 , η 2 ], as in Fig. 10. In this way, we can write the Pochhammer integral as four copies of a real integral, taking care to account for the different orientations and branches; it turns out that we get
(B.21)
Now we need to define the objects appearing in Eq. (B.21). The Prym differentials $\Omega_i$ we used to compute det(Im τ) are holomorphic one-forms; the natural analogues on SRSs are holomorphic volume forms: sections of the Berezinian bundle. Just as holomorphic differentials can be written locally as $dz\,\partial_z f(z)$, sections of the Berezinian can be written locally as $dz\,D_\psi f(z|\psi)$, the combination being invariant under change of superconformal coordinates [21]. We note that Eq. (B.8) for the $\zeta_i$ can be written as a derivative of a logarithm, so to find the corresponding SRS volume forms we replace the expressions inside the logarithms with their natural superconformal analogues and replace $dz\,\partial_z \to dz\,D_\psi$. This yields the superconformal analogues of the $\zeta_i$. Then we can write down a basis of (h − 1) holomorphic volume forms $\Omega_j(z)$ with the expected monodromies along the homology cycles using the analogue of Eq. (B.7), noting that the dependence on the base point $|z_0$ cancels out. We can calculate $\Omega_j(z|\psi)$ as a series expansion in $k_i^{1/2}$. Truncating to finite order, we only need to sum Eq. (B.24) over finitely many terms of the super-Schottky group, because if the contribution from $T_\alpha$ is $O(k_\alpha^{1/2})$ and the left-most factor of $T_\alpha$ is not $S_i^{\pm 1}$, then the contribution from $S_i^{\pm 1} T_\alpha$ is $O(k_i^{1/2} k_\alpha^{1/2})$. Restricting ourselves to h = 2, this means that if we only want to compute to order $k_i^{1/2}$ then we only need to sum over the super-Schottky group elements

$T_\alpha \in \{\mathrm{Id},\ S_1^{\pm 1},\ S_2^{\pm 1},\ (S_1 S_2)^{\pm 1},\ (S_1^{-1} S_2)^{\pm 1},\ (S_1 S_2^{-1})^{\pm 1},\ (S_2 S_1)^{\pm 1}\}$ .   (B.27)

Using the fixed points given in Eq. (3.16), we obtain an explicit expression for $\Omega(z) \equiv \Omega_1(z)$.
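To make the truncation rule concrete, the enumeration in Eq. (B.27) can be reproduced mechanically. The sketch below (ours, not from the paper) lists the reduced words in $S_1, S_2$ whose contribution is at most $O(k_i^{1/2})$, i.e. words containing at most one letter of each generator:

```python
from itertools import product

# Each letter S_i^{±1} in a reduced word contributes a factor k_i^{1/2},
# so "order at most k_i^{1/2}" means: at most one letter of each generator.
GENS = ["S1", "S1^-1", "S2", "S2^-1"]
INV = {"S1": "S1^-1", "S1^-1": "S1", "S2": "S2^-1", "S2^-1": "S2"}

def base(g):
    # generator name without the inverse marker, e.g. "S1^-1" -> "S1"
    return g.split("^")[0]

def low_order_words(max_len=2):
    words = [()]  # the identity element
    for n in range(1, max_len + 1):
        for w in product(GENS, repeat=n):
            # keep only reduced words: no adjacent g g^-1 cancellation
            if any(INV[a] == b for a, b in zip(w, w[1:])):
                continue
            # at most one letter of each generator => order <= k_i^{1/2}
            letters = [base(g) for g in w]
            if letters.count("S1") <= 1 and letters.count("S2") <= 1:
                words.append(w)
    return words

words = low_order_words()
print(len(words))  # 13: Id, the four S_i^{±1}, and 8 mixed length-2 words
```

The count matches Eq. (B.27): the identity, the four single-generator words, and the eight mixed length-two words, 13 elements in total.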
Note that the gauge choice γ 2 = 1 gives many of these diagrams a much simpler form: for example, the second lines of Eq. (C.7) and Eq. (C.9), the third line of Eq. (C.13), and the third and fourth lines of Eq. (C.18) all vanish in this gauge. In fact, the last example is a special case of the fact that both propagators in the diagrams with quartic vertices must have the same polarization precisely when γ 2 = 1, which corresponds to the fact that $k_1^{1/2}$ and $k_2^{1/2}$ must be taken from the same CFT in string theory.
Trajectory of multimorbidity before dementia: A 24‐year follow‐up study
Abstract INTRODUCTION Although the multimorbidity–dementia association has been widely addressed, little is known on the long‐term trajectory of multimorbidity (TOM) in preclinical dementia. METHODS Based on the Health and Retirement Study, burden of multimorbidity was quantified with the total number of eight long‐term conditions (LTC). Patterns of TOM before dementia diagnosis were investigated with mixed‐effects models. RESULTS In 1752 dementia cases and 5256 matched controls, cases showed higher and faster increasing predicted number of LTC than controls, with a significant case–control difference from 20 years prior to dementia diagnosis. Larger increases in number of LTC during preclinical phase of dementia were found in White participants, females, those whose age at dementia onset was younger, and those who were less educated. DISCUSSION Our findings emphasize the faster accumulation of multimorbidity in prodromal dementia than in natural aging, as well as effect modifications by age and sex. Highlights TOM increased faster in prodromal dementia than in natural ageing. Patterns of TOM by dementia status diverged at 20 years before dementia diagnosis. Patterns of TOM were modified by age and sex.
Pathophysiological hallmarks, such as amyloid beta and tau deposition, begin more than two decades prior to the onset of Alzheimer's disease (AD). 4 Interventions at the earliest possible stage have been proposed to prevent or delay dementia. 3 Multimorbidity, defined as the coexistence of at least two long-term conditions (LTC) in the same individual, is strongly associated with age, with a prevalence of 65% in adults aged 65 to 84 years and 81% in those aged ≥ 85 years. 5 There is growing consensus that a heavier burden of multimorbidity is associated with increased risks of memory decline, 6 cognitive decline, 7,8 and incident dementia. 9,10 It has been reported that a steeper increase in multimorbidity was associated with higher dementia incidence in a cohort study; 9 however, differences in the preceding-dementia-onset time windows of multimorbidity burden between demented and non-demented individuals were not examined. 9 Some previous studies have focused on the trajectory of a single chronic condition, including depression, 11 stroke, 12 and blood pressure, cholesterol, and glucose, 13 before dementia occurrence. The temporal pattern of multimorbidity during the preclinical phase of dementia is largely unknown. Understanding the multimorbidity trajectory before dementia onset is useful to identify people at high risk of dementia and to find the optimal time window for dementia prevention.
Based on longitudinal data of the Health and Retirement Study (HRS), we aimed to explore the trajectory of multimorbidity (TOM), which was repeatedly measured over a period of 24 years preceding the dementia diagnosis. Because the prevalence and incidence of dementia vary by age, sex, race, education, and apolipoprotein E (APOE) ε4, 14-16 we also aimed to examine whether TOM before dementia is modified by these factors.
Study population
This study is embedded in the HRS, an ongoing, nationally representative, longitudinal cohort of US population-based adults aged ≥ 50 years. 17 The HRS has been conducted biennially since 1992 to collect a wide range of information on employment, wealth, family composition, lifestyle, and health status. The response rate in the follow-up of HRS was ≈ 85%. 17 The modified Telephone Interview for Cognitive Status (TICS-m), used for the classification of cognitive function in HRS, 18 has been measured since wave 3. As a result, 24-year data from wave 3 (1996) to wave 15 (2020) were available for the longitudinal data analysis. The HRS was ethically approved by the University of Michigan Institutional Review Board. All participants provided informed consent for participation in the study.
Assessment on multimorbidity
History of doctor-diagnosed chronic diseases was surveyed at each study wave.Participants were asked "Has a doctor ever told you that
Dementia diagnosis
Dementia status was determined following methods used in previous studies. 18,19 Cognitive function was assessed with the TICS-m, which included three tests: serial sevens subtraction, immediate and delayed recall items, and counting backward. Total scores of TICS-m were calculated by summing the scores of each cognitive test (0 to 27 points), with higher scores indicating better cognitive performance. TICS-m is validated for dementia screening and has been widely applied in previous studies using HRS data. 18,19 According to the criteria of the Langa-Weir classification of cognitive function, dementia was defined as having TICS-m scores < 6. 18,19 In addition, for proxy respondents, composite cognitive scores were calculated with proxy assessment of memory levels (excellent, very good, good, fair, poor; 0 to 4 points), limitations in five instrumental activities of daily living (taking medication, managing money, cooking, using the phone, and shopping; 0 to 5 points), and the interviewer assessment of whether the respondents had cognitive impairment (no, maybe, yes; 0 to 2 points). Total scores of proxy assessment were calculated, ranging from 0 to 11, with higher scores meaning poorer cognitive function. Participants who had proxy assessment scores ≥ 6 were classified as demented. 19
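The two scoring rules above can be expressed as a small classifier. This is our illustrative rendering of the stated thresholds (function names are ours; cut-offs are from the text):

```python
# Sketch of the Langa-Weir-style classification described in the text:
# self-respondents are demented if TICS-m total < 6 (range 0-27, higher
# is better); proxy respondents are demented if the composite >= 6
# (range 0-11, higher is worse).
def classify_self_respondent(tics_m_score):
    if not 0 <= tics_m_score <= 27:
        raise ValueError("TICS-m total must be in 0..27")
    return "dementia" if tics_m_score < 6 else "not demented"

def classify_proxy_respondent(memory, iadl_limitations, interviewer_rating):
    # memory 0-4 (poor = 4), IADL limitations 0-5, interviewer
    # cognitive-impairment rating 0-2
    total = memory + iadl_limitations + interviewer_rating
    if not 0 <= total <= 11:
        raise ValueError("proxy composite must be in 0..11")
    return "dementia" if total >= 6 else "not demented"

print(classify_self_respondent(5))         # dementia
print(classify_proxy_respondent(3, 2, 1))  # dementia (composite = 6)
```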
Covariates
The demographic information on age (years), sex (male, female), race/ethnicity (White/Caucasian, Black/African American, or other), and years of education were obtained via structured questionnaires.
APOE ε4 carriers were those who had either one or two ε4 alleles.
Matched nested case-control sample
Among 13,395 participants surveyed during HRS waves 3 through 15, we excluded individuals who were < 50 years at cohort entry (n = 1519), had prevalent dementia at wave 3 (n = 1162), had assessments of dementia status fewer than two times (n = 698), or had missing data on covariates (n = 11; Figure S1 in supporting information). A total of 10,005 participants remained for matching, of whom 2133 with incident dementia were identified.
Dementia cases had to develop incident dementia during follow-up visits and to be dementia free at the visit preceding dementia diagnosis, to ensure it was the onset of dementia. Controls met the following criteria: they were dementia free across waves; they were dementia free at the visit of diagnosis of the matched dementia case and at one or more visits before the matching visit, to ensure they were dementia free until the matching visit; and they matched to a dementia case by race, age (± 3 years), sex, years of education, and study wave. Time (in years) before dementia at each visit was calculated by subtracting the visit year before the matching visit from the year at the matching visit for each matching pair. Time 0 corresponds to the matching visit for cases and controls. Each dementia case was matched to three controls at the visit of dementia diagnosis. Random sampling with replacement between visits was used to select controls according to a previous study. 13 The R package "MatchIt" version 4.4.0 was used to match controls to cases with nearest neighbor matching with replacement. Finally, 1752 dementia cases were successfully matched to three controls each, resulting in a total sample of 7008 individuals.
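A toy version of the matching step may help. The paper used R's MatchIt; the Python sketch below (ours, with invented field names) only mirrors the stated criteria: exact match on race, sex, education, and study wave, age within ±3 years, a 1:3 ratio, nearest neighbors on age, with replacement:

```python
# Minimal 1:3 nearest-neighbour matching with replacement, mirroring the
# criteria in the text. "With replacement" means a control may serve
# several cases, so matched controls are never removed from the pool.
def match_controls(case, control_pool, ratio=3):
    eligible = [
        c for c in control_pool
        if c["race"] == case["race"]
        and c["sex"] == case["sex"]
        and c["education"] == case["education"]
        and c["wave"] == case["wave"]
        and abs(c["age"] - case["age"]) <= 3
    ]
    if len(eligible) < ratio:
        return None  # unmatched case (381 of 2133 cases in the paper)
    # nearest neighbours on age (stable sort keeps pool order among ties)
    eligible.sort(key=lambda c: abs(c["age"] - case["age"]))
    return eligible[:ratio]

case = {"race": "White", "sex": "F", "education": 12, "wave": 5, "age": 74}
pool = [{"race": "White", "sex": "F", "education": 12, "wave": 5, "age": a}
        for a in (72, 73, 75, 76, 80)]
matched = match_controls(case, pool)
print([c["age"] for c in matched])  # [73, 75, 72]
```

The control aged 80 is excluded by the ±3-year rule, and the three nearest remaining ages are selected.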
Statistical analysis
The method of Spearman correlation was applied to estimate correlations. The mean predicted numbers of LTC for cases and controls were compared at different time points, and P values were adjusted with the false discovery rate (FDR) due to multiple comparisons. Sensitivity analyses were performed using a 1:1 matching ratio to obtain a nested case-control sample (1892 cases and 1892 matched controls), and additionally adjusting for other covariates including marital status, smoking, drinking, body mass index (BMI), physical activity, and disability. Data analysis was performed with R version 4.2.1.

Differences in the matched characteristics between cases and controls were not statistically significant. Compared to controls, dementia cases had a higher number of LTC; had lower levels of alcohol drinking, BMI, physical activity, and TICS-m scores; and were more likely to be APOE ε4 carriers, to be smokers, to be separated/divorced/widowed, and to be physically disabled (all P < 0.05). Dementia cases also had significantly higher proportions of hypertension, diabetes, lung disease, heart problems, stroke, and arthritis.
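The paper adjusts P values with the false discovery rate; assuming the standard Benjamini-Hochberg procedure (the exact variant is not named in the text), the adjustment can be sketched with the standard library alone:

```python
# Benjamini-Hochberg FDR adjustment (our sketch, stdlib only):
# for sorted p-values p_(1) <= ... <= p_(m), the adjusted value is
# p_adj_(i) = min over j >= i of p_(j) * m / j, capped at 1.
def fdr_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # sweep from the largest rank down, carrying the running minimum
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(fdr_adjust([0.01, 0.04, 0.03, 0.20]))
```

Note how the two middle p-values receive the same adjusted value: the monotonicity step propagates the smaller ratio downward.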
Prevalence of multimorbidity category before dementia
We found that the observed prevalence of multimorbidity increased with retrospective time to dementia diagnosis (Figure 1). Compared to controls, dementia cases had a higher prevalence of multimorbidity throughout the whole study period. The prevalence of participants who had ≥ 3, ≥ 4, and ≥ 5 chronic conditions increased faster in dementia cases than in controls, especially when approaching the onset of dementia. Time to dementia diagnosis was significantly positively correlated with the dementia-status-related differences in prevalence of ≥ 3 LTC (r = 0.85, P < 0.001), ≥ 4 LTC (r = 0.99, P < 0.001), and ≥ 5 LTC (r = 0.98, P < 0.001; Figure S2 in supporting information).
Trajectory of LTC number before dementia
The predicted mean number of LTC significantly increased over time for dementia cases and controls (Figure 2; P < 0.001 for time). A sex-specific TOM was found (P < 0.001 for the "dementia × time × sex" term). Female cases started with a significantly higher number of LTC and had a faster increase than female controls (FDR-adjusted P < 0.05 over follow-up). We found that males showed similar levels of LTC between demented and non-demented participants from −24 to about −12 years to diagnosis, after which demented cases had an accelerated increase, leading to a significant difference from 5 years before diagnosis onward (FDR-adjusted P < 0.05).
We found non-significant modification of years of education on TOM (P interaction for continuous duration of education = 0.362, P interaction for categorical duration of education [< 10, ≥ 10 years] = 0.554). The predicted number of LTC was not significantly different by dementia status over time among participants with low education (< 10 years; FDR-adjusted P > 0.05). Our results revealed that dementia cases educated for at least 10 years had a higher LTC number than controls over time, with a significant difference from −19 years to diagnosis onward (FDR-adjusted P < 0.05).
Patterns of TOM were not significantly modified by APOE ε4 allele status (P interaction = 0.222).From −3 years to dementia diagnosis onward, APOE ε4 non-carriers with dementia had a significantly higher number of LTC compared to dementia-free non-carriers (FDR-adjusted P < 0.05).However, trajectory patterns between cases and controls were not significantly different over time among APOE ε4 carriers.
Compared to the primary results, similar patterns of TOM were found when each dementia case was matched by one control (Figure S4 in supporting information), and when additionally adjusting for marital status, smoking, drinking, BMI, physical activity, and disability (Figure S5 in supporting information).
DISCUSSION
Based on the prospective cohort spanning 24 years in middle-aged and older adults, results indicated that multimorbidity burden generally increased faster in participants with incident dementia compared to matched non-demented controls, with a significant difference at 20 years preceding dementia diagnosis.Moreover, patterns of TOM during the preclinical phase of dementia were modified by age and sex.
Among all participants, we found that the preclinical trajectory of increase in multimorbidity started at −20 years to dementia diagnosis.
Results from a case-control study nested in the Three-City Study showed lower levels of systolic blood pressure, lower levels of diastolic blood pressure, and higher glucose levels in dementia cases than in controls at about 3, 4, and > 14 years prior to diagnosis, respectively. 13 Researchers also found that risks of dementia started to increase 5 years before stroke occurrence in women. 12 Our findings highlight that a faster accumulation of multimorbidity seems to be associated with dementia onset.
Chen et al. classified participants from the HRS into four patterns of TOM and found that individuals with a steeper increase in multimorbidity had increased risks of dementia. 9 However, the time windows during which multimorbidity burden diverged between dementia cases and controls had not been investigated previously.
In the stratification analysis by race, the most apparent divergence in TOM by dementia status was found among White participants. Data from the National Alzheimer's Coordinating Center suggested that compared to demented White people, demented Black individuals had more dementia risk factors and more severe cognitive impairment and symptoms, 20 which was in accordance with our finding that Black participants generally had higher numbers of LTC than White individuals over time (Figure 2). Consequently, the weak case-control differences in TOM among Black participants might be induced by the heavy burden of multimorbidity in Black controls. We found that patterns of TOM were not significantly modified by race, which should be interpreted with caution due to the limited sample sizes of Black individuals (n = 1704) and other races (n = 208) in this study.
A significant difference in the number of LTC was found in participants who developed incident dementia at the age of < 75 years. Data from a previous study indicated that associations of stroke and low systolic blood pressure with dementia risks were statistically significant in cases with dementia onset age ≤ 87 years but not in those with onset age > 87 years. 21 Another study found that multimorbidity-associated risks of dementia were stronger in participants whose multimorbidity occurred in mid-life than in late life. 22 Consequently, age-specific disparities in TOM could be explained by stronger associations between multimorbidity and dementia risks in the early stage of the dementia process than in the late stage.
Our results showed that female cases had a consistently higher number of LTC than controls over the whole study period starting from −24 years to dementia diagnosis, and that the significant case-control difference started at 5 years before diagnosis for males. The prevalence of dementia is about 1.9 times greater in women than in men. 23 Compared to men, the heavier burden of dementia in women is presumed to be caused by multiple scenarios including stronger effects of APOE ε4 on cognitive decline and AD pathology, more common risk factors (e.g., lower educational attainment, less physical exercise, more depression), and reproductive factors (e.g., hypertensive disorders, menopause, and hormone replacement therapy). 24 Although sex disparities in multimorbidity and dementia have been widely addressed, the biological mechanisms of sex disparities in TOM before dementia have not been elucidated. Our results highlight that multimorbidity intervention should be performed earlier in females than in males for dementia prevention.
More apparent differences in trajectory patterns by dementia status were found in higher-educated participants than in lower-educated ones, but without significant modification effects of education. It has also been reported that associations between middle-age cardiovascular burden and old-age cognition 25 were modified by educational attainment. In addition, less education is regarded as a risk factor for dementia 3 and multimorbidity. 26 Differences in TOM between cases and controls might be offset and masked by the detrimental effects of less education in participants who had lower levels of education.
We found that dementia cases had a significantly higher number of LTC than controls at 3 years before dementia diagnosis in APOE ε4 non-carriers but not in carriers, with a non-significant interaction term of "dementia × time × APOE ε4." Similarly, it has been reported that associations between ideal cardiovascular status and risks of dementia were significant in APOE ε4 non-carriers but not in carriers in older adults with a mean age of 75 years. 27 In line with our findings, results from a previous study indicated that compared to APOE ε4 non-carriers, carriers had fewer declines in daily functioning, which is associated with multimorbidity, 28 before dementia diagnosis. 29

This study had some strengths. We compared patterns of TOM between natural aging and prodromal dementia based on a large, nationally representative cohort over a long period of 24 years. In addition, the matched nested case-control design with a balanced distribution of age, sex, race, and education by dementia status was efficient in providing unbiased estimation.
There were several limitations in this study. First, although covering a long follow-up, the number of LTC was already significantly higher in dementia cases than in controls at the beginning of this study among females and individuals who were first identified as demented below 75 years old. As a result, assessments of multimorbidity in earlier adulthood would be required to find the accurate beginning of the diverged trajectory patterns. Second, participants were mainly White individuals, which limits the generalizability of findings to other ethnicities. Third, dementia status was determined based on the self- or proxy-reported scores of TICS-m rather than clinical diagnosis, which might lead to misclassification bias. Fourth, the burden of multimorbidity was self-reported and quantified with the counts of seven chronic conditions without consideration of other diseases (e.g., elevated cholesterol, chronic kidney disease, and infections) or the relative impact of each disease, 30 which might induce biases of recall and misclassification. Fifth, although 1752 of 2133 dementia cases were successfully matched to controls, estimations might be biased due to the loss of demented cases. Furthermore, laboratory experiments are warranted to elucidate the potential biological mechanisms of the modification effects found in the subgroup analyses.
In conclusion, results of the present study suggested that the preclinical trajectory of increase in multimorbidity initiates at 20 years before dementia diagnosis, with modifications of age and sex.
Our findings propose optimal time windows for multimorbidity intervention in dementia prevention.
Correlations between the difference (dementia − non-demented) in prevalence of each multimorbidity category (≥ 1, ≥ 2, ≥ 3, ≥ 4, and ≥ 5 chronic conditions) and time to dementia diagnosis were estimated. The TOM before dementia was estimated using mixed-effects regression models, adjusting for age, sex, race, and years of education. A random intercept and slope on time were used to correct the intra-individual correlation caused by repeated measurements. In the regression model, the number of LTC was used as the dependent variable, and dementia status, time, and the "dementia × time" interaction were used as predictors. Natural cubic splines of time were used to fit the trajectory curves. To examine effects modified by race, a three-way interaction term of "dementia × time × race" was included in the regression model. The significance of interaction terms was examined with the Wald test. Similar methods were applied to test modification by other factors including age at dementia diagnosis (mean = 74.08 years, cut-off at 75 years), sex, years of education (mean = 11.19 years, cut-off at 10 years), and APOE ε4 allele status (n = 5157).
Dementia cases and controls differed in the predicted number of LTC during the whole study period (P < 0.001 for the "dementia × time" interaction), with significant differences in a long window spanning from −20 years to dementia diagnosis onward (FDR-adjusted P < 0.05; Figure S3 in supporting information). The predicted mean number (95% confidence interval [CI]) of LTC was, respectively, 2.22 (2.17 to 2.27) and 2.36 (2.32 to 2.41) at −24 and 0 years to dementia onset for dementia cases. We found nearly parallel TOM by dementia status from 24 to 20 years before dementia diagnosis and faster increases in LTC number from −20 years to dementia diagnosis for dementia cases. White participants with incident dementia had a heavier burden of multimorbidity compared to non-demented individuals, as reflected in Figure 2.

TABLE 1: Characteristics of participants at dementia diagnosis. Abbreviations: %, proportion; APOE, apolipoprotein E; BMI, body mass index; LTC, long-term conditions; n, frequency; SD, standard deviation; TICS-m, modified Telephone Interview for Cognitive Status. a There were 1851 participants who had missing data on APOE ε4.

FIGURE 1: Observed prevalence of multimorbidity category before dementia diagnosis. Participants were categorized into different groups according to the number of LTC at each study wave. Observed prevalence of multimorbidity category (y axis) was plotted against years to dementia diagnosis (x axis). Prevalence for dementia cases and controls is shown in solid and dashed lines, respectively. "All participants" in the table at bottom means the number of participants who had LTC measurements. LTC, long-term conditions.

FIGURE 2: Predicted mean trajectories of number of LTC before dementia diagnosis. Predicted number of LTC was estimated using mixed-effects models among dementia cases (n = 1752) and matched controls (n = 5256), adjusting for age at dementia, sex, race, and years of education. Trajectories were plotted for participants with the following profile: female, 74.08 years old, White/Caucasian, and 11.19 years of education. Trajectories for dementia cases and controls are presented in red solid and blue dashed lines, respectively. APOE, apolipoprotein E; LTC, long-term conditions.
Schnurri-3 regulates BMP9-induced osteogenic differentiation and angiogenesis of human amniotic mesenchymal stem cells through Runx2 and VEGF
Human amniotic mesenchymal stem cells (hAMSCs) are multipotent progenitor cells (MPCs) that can differentiate into different lineages (osteogenic, chondrogenic, and adipogenic cells) and have a favorable capacity for angiogenesis. Schnurri-3 (Shn3) is a large zinc finger protein related to Drosophila Shn, which is a critical mediator of postnatal bone formation. Bone morphogenetic protein 9 (BMP9), one of the most potent osteogenic BMPs, can strongly upregulate various osteogenesis- and angiogenesis-related factors in MSCs. It remains unclear how Shn3 is involved in BMP9-induced osteogenic differentiation coupled with angiogenesis in hAMSCs. In this investigation, we conducted a comprehensive study to identify the effect of Shn3 on BMP9-induced osteogenic differentiation and angiogenesis in hAMSCs and to analyze the responsible signaling pathway. Results from in vitro and in vivo experiments show that Shn3 notably inhibits BMP9-induced early and late osteogenic differentiation of hAMSCs, expression of osteogenesis-related factors, and subcutaneous ectopic bone formation from hAMSCs in nude mice. Shn3 also inhibited BMP9-induced angiogenic differentiation, expression of angiogenesis-related factors, and subcutaneous vascular invasion in mice. Mechanistically, we found that Shn3 prominently inhibited the expression of BMP9 and activation of the BMP/Smad and BMP/MAPK signaling pathways. In addition, we further found activity on runt-related transcription factor 2 (Runx2), vascular endothelial growth factor (VEGF), and the target genes shared by the BMP and Shn3 signaling pathways. Silencing Shn3 could dramatically enhance the expression of Runx2, which directly regulates the downstream target VEGF to couple osteogenic differentiation with angiogenesis. To summarize, our findings suggest that Shn3 significantly inhibits BMP9-induced osteogenic differentiation and angiogenesis in hAMSCs.
The effect of Shn3 was primarily exerted through inhibition of the BMP/Smad signaling pathway and depressed expression of Runx2, which directly regulates VEGF to couple BMP9-induced osteogenic differentiation with angiogenesis.
Introduction
Bone defects occur frequently and have various causes, such as trauma, infections, and tumors 1 . Although bone possesses a self-healing ability, repairing bone defects beyond a critical size still requires surgery or reconstructive surgery 2 . It has been reported that biological treatment modalities contribute to the repair of bone defects. Tissue-engineered bone techniques involve essential elements, such as seed cells, growth factors, and implant scaffolds, and provide new treatment options for bone defects 3 . Many types of mesenchymal stem cells (MSCs) have been found to originate from various types of tissue, such as bone marrow-derived MSCs (BMSCs), as well as adipose, peripheral blood, and muscle and ligament-derived MSCs [4][5][6][7][8] . Among them, BMSCs are reported to be equipped with the capacity for osteogenic, chondrogenic, and adipogenic differentiation 9,10 . Despite this, there are many disadvantages in the process of extracting BMSCs; invasive surgery carries a high risk of infection and bleeding and may cause immunological rejection after implantation. Recently, a new kind of MSCs, originating from the surface membrane of the human placenta and called human amniotic MSCs (hAMSCs), has been discovered 11 . hAMSCs possess the ability to thrive in multiple environments and are not extracted traumatically. In addition, their use does not raise ethical or moral controversies 12 . hAMSCs have a multidirectional differentiation capacity as well as an advantage in therapeutic angiogenesis because they originate from the placenta, which is a vascular tissue 13 . Hence, hAMSCs have been extensively applied in the treatment of bone and spinal traumas and in vascular reconstruction surgery.
Bone repair requires multiple stimuli, including growth factors, cytokines, differentiation factors, and extracellular matrix, which contribute to creating a conducive milieu for enhanced bone healing 14,15 . Bone morphogenetic proteins (BMPs), belonging to the transforming growth factor β (TGF-β) superfamily, comprise 14 members in humans and rodents. They are essential for the proliferation and cell differentiation that determine the fate of cells 16,17 . BMP9 was identified in the mouse liver and exerts some effects in maintaining the embryonic basal forebrain cholinergic neurons 18 . BMP activity in cells is initiated when ligands bind the BMPRI and BMPRII receptors. Signal transduction begins and continues with phosphorylation of the R-Smads, including Smad1/5/8 19 . After the R-Smad complex is formed, it associates with Smad4, and the complex shifts into the nucleus to regulate downstream target genes and proteins 20 . BMP9 can regulate a number of essential targets for osteogenic signal transduction in MSCs; however, the explicit mechanism of how Schnurri-3 (Shn3) is involved in the process of BMP9-induced osteogenic differentiation remains unclear.
Shn3, a large protein that belongs to the ZAS family of zinc finger proteins, is a critical mediator of adult skeletal formation that regulates mature osteoblast activity 21,22 . Shn3 is one of the three mammalian homologs of Drosophila Shn, which acts as a fundamental cofactor for signaling via decapentaplegic (DPP), the Drosophila homolog of the BMP/TGF-β signaling pathway 23,24 . Recently, it was reported that mice lacking Shn3 showed increased bone mass 25 . This high bone mass phenotype in Shn3-deficient mice indicates that Shn3 could govern the expression of Runt-related transcription factor 2 (Runx2). Runx2 is regulated by Shn3 through a complex with the E3 ubiquitin ligase WWP1 26 . Shn3 regulates the interaction by inhibiting mitogen-activated protein kinase (MAPK) activity and osteogenic differentiation, and Shn3 expression indirectly regulates osteoclastic bone resorption 27,28 .
Bone formation is an intricate process that requires highly coordinated reciprocity between multiple cells, factors, and signals to form mineralized bone tissues 29,30 . Working as structural templates, blood vessels adjacent to the region of bone formation carry key elements for bone homeostasis into the osteogenic microenvironment, such as minerals, growth factors, and osteogenic progenitor cells 31 . Vascularization is necessary for the tight coupling of angiogenesis and osteogenesis during skeletal development as well as bone repair 32,33 . Multiple factors participate in the coupling process, including vascular endothelial growth factor (VEGF), hypoxia-inducible factor 1α (HIF1α), von Willebrand factor (vWF), and CD31 34,35 . Given Shn3's activity in regulating osteogenic differentiation through BMP/TGF-β signaling, Shn3's capacity to regulate BMP9-induced angiogenesis and osteogenesis needs further investigation.
In the present study, we investigated the effect of Shn3 on BMP9-induced osteogenic differentiation both in vitro and in vivo and angiogenesis of hAMSCs. Our investigation provides another possible mechanism for the regulation of BMP9-induced differentiation in hAMSCs. These results will offer abundant benefits to BMP9mediated bone tissue engineering.
Results
Isolation and characterization of hAMSCs

hAMSCs were isolated from the superficial amnion of human placenta by using both trypsin and collagenase (Fig. 1a). Cultured primary cells and cells at passages 1, 2, and 3 (P1, P2, and P3, respectively) formed a monolayer of adherent, spindle-shaped cells with radial-like growth, and increasing polarization was observed with each passage. hAMSCs reached 80% confluence over a period of approximately 5 days, whereas they generally required 7 days to cover the culture flask (Fig. 1b). Cell Counting Kit (CCK)-8 results showed that the proliferation curve of hAMSCs exhibited an "S" pattern; after 1 day of latency, hAMSCs went through a logarithmic growth phase lasting 4 days. The average doubling time of hAMSCs was 72 h (Fig. 1c).
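The reported ~72 h doubling time follows from the exponential-growth relation N(t) = N0 · 2^(t/td); the helper below recovers td from any two counts taken during the log phase. This is a back-of-the-envelope sketch with hypothetical counts, not the authors' CCK-8 readings:

```python
import math

def doubling_time_h(n0: float, n1: float, elapsed_h: float) -> float:
    """Doubling time (hours) assuming exponential growth N(t) = N0 * 2**(t/td)."""
    return elapsed_h * math.log(2) / math.log(n1 / n0)

# Hypothetical log-phase counts: a culture that doubles once over a
# 72 h window gives the ~72 h doubling time reported for hAMSCs.
td = doubling_time_h(1.0e5, 2.0e5, 72.0)
```

With counts that quadruple over the same window, the same formula would return 36 h; the estimate is only meaningful for points inside the logarithmic phase of the curve.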
Identification and multidirectional differentiation potential of hAMSCs
Flow cytometric results showed that P3 hAMSCs were positive for the MSC markers CD44, CD73, CD90, and CD105 and weakly expressed the hematopoietic markers CD34, CD19, CD45, and HLA-DR, indicating that P3 hAMSCs possessed low immunogenicity (Fig. 1e). In addition, the results of alkaline phosphatase (ALP), Alizarin Red S, Alcian Blue, and Oil Red O staining showed that hAMSCs possess the potential for multidirectional differentiation into osteoblasts, chondrocytes, and adipocytes (Fig. 1f).

Fig. 1 Isolation and morphology of cultured hAMSCs; the proliferation potential and identification of hAMSCs; the basic expression of Shn3 in hAMSCs. a General observation (a1), isolation, and extraction (a2) of hAMSCs. b Representative morphology of adherent hAMSCs from primary cultures (P0) cultured through the third passage (P3), with spindle shapes on the cell culture dish (original magnification ×40, scale bar = 200 μm). c Proliferation of hAMSCs as determined by the CCK-8 method showed that cells reach the doubling time at 3 days. d ELISA assay to detect Shn3 expression in cell culture supernatants when cells reached 20% and 40% confluence, respectively. e Phenotypic properties of hAMSCs at the third passage by flow cytometry: hAMSCs highly expressed CD44, CD73, CD90, and CD105 and negatively expressed CD34, CD19, CD45, and HLA-DR. f Multidirectional potential for osteogenic, chondrogenic, and adipogenic differentiation of hAMSCs at the third passage: osteogenic (alkaline phosphatase staining and Alizarin Red S staining), chondrogenic (Alcian Blue staining), adipogenic (Oil Red O staining) (original magnification ×100, scale bar = 100 μm). g hAMSCs at the third passage highly expressed vimentin and hardly expressed CK-19; cell nuclei were stained blue with 4′,6-diamidino-2-phenylindole (DAPI) (original magnification ×100, scale bar = 100 μm). The data are shown as mean ± SD for three separate experiments. *P < 0.05.
CK-19 is a specific marker of human amniotic epithelial cells, whereas vimentin is a specific marker of hAMSCs. During the isolation of hAMSCs, small numbers of human amniotic epithelial cells from the amnion were present in the hAMSC preparation. To reduce the proportion of epithelial cells, we passaged the cells; the epithelial cells gradually underwent epithelial-to-mesenchymal transition. Immunofluorescence staining showed that P3 hAMSCs highly expressed vimentin and were negative for CK-19 (Fig. 1g). These results demonstrated that hAMSCs express pluripotency markers, are capable of self-renewal, and possess multi-lineage differentiation potential.
Basic expression of Shn3 in hAMSCs
To detect the basal expression level of Shn3 in hAMSCs, we used an enzyme-linked immunosorbent assay (ELISA). The ELISA results showed that the concentration of Shn3 in the culture supernatant was 0.1316 ng/mL when hAMSCs reached approximately 20% confluence (approximately 1.8 × 10^5 cells/mL), while it was 0.2298 ng/mL when hAMSCs reached approximately 40% confluence (approximately 3.7 × 10^5 cells/mL) (Fig. 1d). These results indicate that hAMSCs express Shn3 at a detectable basal level.
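Dividing the reported supernatant concentrations by the corresponding cell densities gives a rough per-cell estimate. This is a sketch only; it assumes all supernatant Shn3 originated from the counted cells and ignores accumulation time:

```python
def per_cell_fg(conc_ng_per_ml: float, cells_per_ml: float) -> float:
    """Bulk supernatant concentration -> femtograms of Shn3 per cell (1 ng = 1e6 fg)."""
    return conc_ng_per_ml / cells_per_ml * 1e6

low_confluence = per_cell_fg(0.1316, 1.8e5)   # ~0.73 fg/cell at ~20% confluence
high_confluence = per_cell_fg(0.2298, 3.7e5)  # ~0.62 fg/cell at ~40% confluence
```

Both densities give per-cell values of the same order, consistent with a basal level of Shn3 that scales roughly with cell number.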
Shn3 diminishes BMP9-induced early and late osteogenic differentiation of hAMSCs in vitro
To assess the role of Shn3 in BMP9-induced osteogenic differentiation, we constructed an adenoviral vector system to stably overexpress human Shn3. After transfection of hAMSCs with Ad-RFP, Ad-BMP9, or Ad-Shn3 for 24 h, cells were observed by fluorescence microscopy (Fig. 2a). Using reverse transcription quantitative polymerase chain reaction (RT-qPCR), we found that Shn3 expression was significantly increased in hAMSCs at 48 and 72 h after transfection with Ad-Shn3 compared with control hAMSCs transfected with Ad-RFP (Fig. 2b). Therefore, Ad-Shn3 was used to upregulate Shn3 levels in our study. ALP assays were used to measure changes in ALP activity, an indicator of early osteogenic activity. We first examined the effect of Ad-Shn3 on early and late osteogenic differentiation of hAMSCs. ALP activity was dramatically decreased in the BMP9+Shn3 group compared with the BMP9 group at days 3, 5, and 7, whereas exogenous Shn3 expression alone did not have any significant effect on the ALP activity of hAMSCs at 7 days (Fig. 2c). Quantitatively, Shn3 decreased the ALP activity of BMP9-transfected hAMSCs by 63%, 76%, and 85% on days 3, 5, and 7, respectively (Fig. 2d). Alizarin Red S staining was used to examine calcium deposition, a late osteogenic indicator. Calcium deposition was markedly decreased in the BMP9+Shn3 group compared with the BMP9 group at 14 and 21 days; quantitatively, overexpression of Shn3 reduced matrix mineralization compared with the other groups (Fig. 2e, f). Taken together, these results show that Shn3 significantly inhibits the early and late osteogenic differentiation of hAMSCs.
Silencing Shn3 expression potentiates BMP9-induced osteogenic differentiation of hAMSCs in vitro
To determine whether Shn3 is an essential mediator in BMP9-induced osteogenic signaling, we constructed a recombinant adenovirus expressing a pool of three small interfering RNAs (siRNAs) targeting the human Shn3-coding region using the established pSOS system as recently described 36, enabling Ad-Sim-Shn3 to efficiently knock down Shn3 expression in hAMSCs. After transfecting the cells with Ad-RFP, Ad-BMP9, or Ad-Sim-Shn3 for 24 h, cells were observed by fluorescence microscopy (Fig. 3a). Using RT-qPCR, we assessed the efficiency of Ad-Sim-Shn3 transfection in hAMSCs and found that Ad-Sim-Shn3 strongly inhibited Shn3 mRNA expression from 24 to 168 h (Fig. 3b).
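Knockdown efficiency in RT-qPCR is commonly expressed with the Livak 2^-ΔΔCt method against a housekeeping gene such as GAPDH (the normalizer named in the figure legend). The sketch below uses hypothetical Ct values, not the authors' data:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: target gene normalized to a reference gene (e.g. GAPDH),
    expressed relative to the control (e.g. Ad-RFP) sample."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: knockdown shifts the Shn3 Ct up by 3 cycles
# relative to GAPDH, i.e. ~12.5% of control (an ~87.5% knockdown).
fc = relative_expression(28.0, 18.0, 25.0, 18.0)
```

The same function gives the overexpression fold change when the target Ct drops below the control's.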
Next, we analyzed the effect of downregulating Shn3 expression on BMP9-induced osteogenic differentiation of hAMSCs. ALP activities were significantly increased at 3, 5, and 7 days by cotransfection with Ad-BMP9 and Ad-Sim-Shn3 compared with the control BMP9 group at each time point (Fig. 3c). However, cells transfected with Ad-Sim-Shn3 alone showed only a slight increase in ALP activity compared with Ad-BMP9 at 3 and 7 days. Quantification of ALP histochemical staining gave similar results: Sim-Shn3 significantly enhanced BMP9-induced ALP activity in hAMSCs at 3 and 7 days (Fig. 3e). Moreover, BMP9-induced calcium deposition in hAMSCs was notably increased at days 14 and 21 when Shn3 expression was silenced, as illustrated by Alizarin Red S staining (Fig. 3d, f). Together, these results indicate that inhibiting Shn3 expression significantly potentiates BMP9-induced early and late osteogenic differentiation in vitro, suggesting that Shn3 plays a critical role in BMP9-induced osteogenesis.

Fig. 2 (caption fragment): ...Ad-RFP groups. c, d Overexpressed Shn3 inhibits BMP9-induced ALP activity in hAMSCs. ALP biochemical quantification assay (c) and ALP staining assay (d) were conducted to detect ALP activity under the indicated treatments at 7 days after transfection (scale bar = 100 μm). e, f The quantification of mineralization (e) and calcium deposition observed by Alizarin Red S staining assay (f) under the indicated treatments at 14 and 21 days after transfection (scale bar = 100 μm). The data are shown as mean ± SD for triplicates. *P < 0.05.
Shn3 inhibits the gene and protein expression of osteogenesis-related factors
Runx2 is a crucial osteoblast-specific mediator that plays a central role in osteoblast differentiation, bone formation, and remodeling. Runx2, bone sialoprotein (BSP), collagen type I (COL-1), osterix (OSX), and osteocalcin (OCN) play important roles in regulating anabolic bone formation and calcium metabolism. Hence, we determined the effects of Shn3 on the BMP9-induced expression of Runx2, BSP, COL-1, OSX, and OCN in hAMSCs. The RT-qPCR results revealed that the mRNA levels of Runx2, BSP, COL-1, and OSX in the BMP9+Sim-Shn3 group were significantly upregulated at day 7 compared with the BMP9 group (Fig. 3g); in contrast, the expression levels of these factors were greatly decreased in the BMP9+Shn3 group compared with the control BMP9 group at each time point. Moreover, the expression levels of these factors were also lower in the Shn3 group than in the Sim-Shn3 group and the red fluorescent protein (RFP) group. Similarly, the protein expression of OCN, osteopontin (OPN), and Runx2 was clearly more upregulated in the BMP9+Sim-Shn3 group at day 7 than in the other groups; the Shn3 group showed lower protein expression than the BMP9 and BMP9+Shn3 groups but higher than the RFP group (Fig. 3h-k). Taken together, these results suggest that Shn3 exerts a negative effect on the mRNA and protein expression of osteogenesis-related factors.

Fig. 3 (caption fragment): ...Effective knockdown of Shn3 expression. b The Ad-Sim-Shn3 expressing siRNA targeting Shn3 transduces hAMSCs with high efficiency after transfection; Ad-Sim-Shn3 silences the expression of Shn3 from 48 to 120 h. All samples were normalized to the housekeeping gene GAPDH. Each experiment was done in triplicate, Ad-Sim-Shn3 vs. Ad-RFP groups. c, e Silencing Shn3 promotes BMP9-induced ALP activity in hAMSCs. ALP biochemical quantification assay (c) and ALP staining assay (e) were conducted to detect ALP activity under the indicated treatments at 3 and 7 days after transfection (scale bar = 100 μm). d, f The quantification of mineralization (d) and calcium deposition observed by Alizarin Red S staining assay (f) under the indicated treatments at 14 and 21 days after transfection (scale bar = 100 μm). g-k Shn3 inhibits the expression of osteogenesis-related factors in hAMSCs. g RT-qPCR assay showing that Shn3 inhibits the BMP9-induced expression of the osteogenesis-related factors Runx2, BSP, COL-1, and OSX in hAMSCs at 7 days. h-k Western blotting assay showing that Shn3 inhibits the BMP9-induced expression of the osteogenesis-related factors OCN (h), OPN (i), and Runx2 (j) in hAMSCs. β-ACTIN served as the loading control. The quantification of the western blotting assay shows the effect of Shn3 on the expression of OCN, OPN, and Runx2 in BMP9-induced hAMSCs (k). The data are shown as mean ± SD for triplicates. *P < 0.05.
Shn3 suppresses the subcutaneous ectopic bone formation in hAMSCs in nude mice
Based on the finding that Shn3 restrained BMP9-induced osteogenic differentiation of hAMSCs in vitro, we further examined the effect of Shn3 on BMP9-induced osteogenic differentiation using the well-established subcutaneous ectopic bone formation model in nude mice. General observation and three-dimensional (3D) reconstruction of micro-computed tomography (micro-CT) scans revealed that the volume of ectopic bone mass was decreased in the BMP9+Shn3 group compared with the BMP9 group, whereas the BMP9+Sim-Shn3 group showed a strong ability to potentiate the development of osteogenic masses compared with the other groups (Fig. 4a, b1). Mineral density expressed through a micro-CT heat map demonstrated that silencing Shn3 increased the average mineral density of the bone masses induced by BMP9 in hAMSCs; in contrast, overexpression of Shn3 greatly decreased the average mineral density of the masses formed by BMP9-transfected cells (Fig. 4b2). Quantitative bone histomorphometry showed that bone volume/total volume (BV/TV), trabecular number (Tb. N), trabecular thickness (Tb. Th), and bone mineral density (BMD) were significantly increased in the BMP9+Sim-Shn3 group compared with the BMP9 group. Conversely, these parameters were greatly decreased in the BMP9+Shn3 group compared with the control. Trabecular separation (Tb. Sp) did not differ significantly among the groups (Fig. 4c). In addition, histological evaluation revealed that Sim-Shn3 increased the number of bone trabeculae and the formation of ossified matrix after cotransfection with BMP9 compared with the BMP9 group, whereas Shn3 inhibited the BMP9-induced formation of bone matrix (Fig. 4d2). Masson's trichrome staining showed that Sim-Shn3 increased the BMP9-induced ossified matrix, whereas overexpressing Shn3 decreased osteoid matrix maturation after cotransfection with BMP9 (Fig. 4d1).
However, overexpressing or silencing Shn3 did not affect BMP9's effect on chondrogenesis (Fig. 4d3). These in vivo data are consistent with the in vitro studies. In summary, our data strongly suggest that Shn3 is a critical mediator of BMP9-induced osteogenic signaling and exerts a negative regulatory effect on BMP9-induced osteogenic differentiation of hAMSCs in vitro and in vivo.
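Of the histomorphometric parameters above, BV/TV is the simplest to state explicitly: the fraction of micro-CT voxels whose attenuation exceeds a bone threshold. A minimal sketch over a toy flattened volume (the threshold and voxel values are hypothetical, not derived from the authors' scans):

```python
def bv_tv(voxels, threshold):
    """Bone volume / total volume: fraction of voxels above the bone
    attenuation threshold (simplified micro-CT histomorphometry)."""
    bone = sum(1 for v in voxels if v > threshold)
    return bone / len(voxels)

# Toy flattened 4x4x4 volume: 16 "bone" voxels out of 64 -> BV/TV = 0.25
toy_volume = [1.0] * 16 + [0.0] * 48
ratio = bv_tv(toy_volume, 0.5)
```

Real micro-CT pipelines segment a calibrated 3D volume and compute Tb. N, Tb. Th, and Tb. Sp from the segmented trabecular geometry; this ratio illustrates only the volume fraction.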
Shn3 inhibits the angiogenic differentiation and vascularization in BMP9-induced osteogenic differentiation
Angiogenesis and osteogenesis are closely coupled in bone development and regeneration. Thus, we further determined the effect of Shn3 on angiogenic differentiation and vascularization. We used RT-qPCR to assay the transfected hAMSCs on days 3, 5, and 7. The mRNA levels of angiopoietin 1 (ANGPT1), CD31, VEGF, and vWF were significantly upregulated in the BMP9+Sim-Shn3 group compared with the BMP9 group at day 7, whereas the expression levels of these factors were drastically downregulated in the Shn3 group compared with the BMP9 group at day 7 (Fig. 5a).
Moreover, immunohistochemical staining of the ectopic bone masses demonstrated that the expression levels of OCN and OPN were upregulated in the BMP9+Sim-Shn3 group and downregulated in the BMP9+Shn3 group compared with the BMP9 group (Fig. 5b, e1, e2). Similarly, the immunohistochemical results revealed that the angiogenesis-related proteins ANGPT1 and VEGF were highly expressed in the BMP9+Sim-Shn3 group and minimally expressed in the BMP9+Shn3 group compared with the BMP9 group (Fig. 5c, e3, e4). Recent studies have shown that H-type microvessels highly express CD31 and endomucin (CD31^hi EMCN^hi) and regulate osteoblasts, serving as important mediators of bone regeneration 37. We therefore investigated the role of Shn3 in BMP9-induced CD31^hi EMCN^hi vascular endothelium during ectopic bone formation. Sim-Shn3 potentiated the BMP9-induced expression of CD31^hi EMCN^hi endothelium compared with the BMP9 group, whereas CD31^hi EMCN^hi endothelium expression in the Shn3 group was significantly lower than in the BMP9 group (Fig. 5d, f1, f2). In summary, these data suggest that Shn3 exerts a negative regulatory effect on the angiogenic and osteogenic differentiation of hAMSCs in vivo.
Effects of Shn3 on BMP9-induced angiogenesis in vitro
To further investigate the effects of Shn3 on BMP9-induced angiogenesis of hAMSCs, we assessed VEGF expression in hAMSCs by immunohistochemical staining. VEGF expression was higher in the BMP9+Sim-Shn3 group and lower in the BMP9+Shn3 group compared with the control BMP9 group at day 7. In addition, the Shn3 group showed the lowest VEGF expression of all the groups (Fig. 6a1, a2).
The phenotype of human umbilical vein endothelial cells (HUVECs) was verified by immunofluorescence staining, which showed that the HUVECs highly expressed CD31, VEGF, EMCN, and vWF (Fig. 6b). The HUVEC tube-formation assay revealed that silencing Shn3 markedly increased the tube area in BMP9-induced hAMSCs, producing a 1.38-fold greater tube-formation area than the control (BMP9 group). The tube area in the BMP9+Shn3 group was lower, showing a 0.78-fold change relative to the BMP9 group (Fig. 6d, e). Taken together, these results suggest that Shn3 greatly inhibits the BMP9-induced angiogenesis of hAMSCs in vitro.
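The 1.38- and 0.78-fold values above are plain ratios of quantified tube area against the BMP9 control; the sketch below uses hypothetical pixel areas chosen to match the reported ratios, not the authors' measurements:

```python
def fold_change_area(treated_px: float, control_px: float) -> float:
    """Tube-formation fold change: treated area relative to the control group."""
    return treated_px / control_px

# Hypothetical quantified areas (pixels) matching the reported ratios.
sim_shn3_fold = fold_change_area(13800.0, 10000.0)  # 1.38-fold (BMP9+Sim-Shn3)
shn3_fold = fold_change_area(7800.0, 10000.0)       # 0.78-fold (BMP9+Shn3)
```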
Effects of Shn3 on BMP9-induced vessel invasion in vivo
To further investigate the effect of Shn3 on angiogenesis in vivo, cells treated according to the experimental design were seeded on poly(lactic-co-glycolic acid) (PLGA) scaffolds and implanted into the dorsal subcutaneous tissue of mice (Fig. 7a). The topological features of PLGA were examined, and PLGA seeded with cells for 24 h was observed by scanning electron microscopy (Fig. 7b). The grafts were evaluated by general observation and by immunofluorescence staining for vWF after 4 weeks. The gross grafts were observed when the vascular invasion experiment was completed. The BMP9+Sim-Shn3 group showed strong vascularization compared with the BMP9 group, while the RFP group had hardly any vessels on the surface of the graft.

Fig. 4 Shn3 restricts the BMP9-induced subcutaneous ectopic osteogenesis of hAMSCs in nude mice. a General observation of the subcutaneous bone masses in nude mice. b Subcutaneous osteogenic bone formation at 4 weeks detected by micro-CT scanning to determine the 3D iso-surface (b1) and the heat map of average mineralization density (b2). In the heat map analysis, white represents the highest average mineral density and black the lowest (scale bar = 1 mm). c Quantification of ectopic bone formation; the relative values of BV/TV, Tb. N, Tb. Sp, Tb. Th, and bone mineral density (BMD) were analyzed. Data are shown as mean ± SD of triplicates. d The bone mass samples were subjected to Masson's trichrome staining (d1), H&E staining (d2), and Alcian Blue staining (d3) to determine the formation of trabecular bone and bone matrix (magnification ×400, scale bar = 50 μm). Representative images are shown. BM bone matrix, UM undifferentiated MSCs, CH chondrocytes, TB trabecular bone. *P < 0.05.
By contrast, the BMP9+Shn3 group showed lower angiogenesis than the BMP9 group, and the vascularization activity in the Shn3 group was also lower than in the Sim-Shn3 group (Fig. 7c). The immunofluorescence results revealed that PLGA scaffolds seeded with hAMSCs transfected with BMP9+Sim-Shn3 showed significantly greater expression of vWF than those transfected with BMP9 alone. However, overexpression of Shn3 attenuated vWF expression compared with the BMP9 group, and no significant difference was found with respect to the RFP group (Fig. 7d, e).

Fig. 5 (caption fragment): b, e Immunohistochemical staining and quantification showing the effects of Shn3 on the expression of the osteogenesis-related markers OCN (e1) and OPN (e2) in BMP9-induced ectopic bone masses of hAMSCs (magnification ×200, ×400, scale bar = 50 μm). c, e Immunohistochemical staining and quantification showing the effects of Shn3 on the expression of the angiogenesis-related markers ANGPT1 (e3) and VEGF (e4) in BMP9-induced ectopic bone masses of hAMSCs (magnification ×200, ×400). d, f Immunofluorescence staining and quantified analysis of the effects of Shn3 on the expression of type H vessels, including CD31 (red) (f1) and EMCN (red) (f2), with DAPI (blue) (magnification of upper images ×200, lower images ×400, scale bar = 50 μm). *P < 0.05.
Fig. 6 Effect of Shn3 on BMP9-induced angiogenic differentiation in hAMSCs in vitro. a Immunohistochemical staining (a1) and its quantification analysis (a2) were performed to detect the effects of Shn3 on BMP9-induced protein expression of VEGF in hAMSCs (magnification ×200, scale bar = 50 μm). b Phenotypes of HUVECs at the third passage were identified by immunofluorescence staining; HUVECs positively expressed CD31, VEGF, EMCN, and vWF (magnification ×100, scale bar = 100 μm). c To investigate tube formation by HUVECs, Matrigel was coated on Transwell culture plates and HUVECs were cultivated in ECM medium for 12 h. The HUVECs were adjusted to 2 × 10^5 cells/mL/well; hAMSCs treated according to the experimental design were seeded onto the upper wells. d, e The number of tube structures was recorded beginning at 6 h. Tube formation was observed by light and fluorescence microscopy; HUVECs treated with calcein AM (green) were observed by fluorescence microscopy. The tube-formation results (e) and the quantified tube area (d) are shown (magnification ×200, scale bar = 50 μm). *P < 0.05.

Fig. 7 Effects of Shn3 on subcutaneous vascular invasion of the PLGA-hAMSC composite in vivo. a Illustrative diagram showing that hAMSCs treated according to the experimental design and seeded on electrospun PLGA scaffolds (a1) were implanted into the dorsal subcutaneous space of mice and harvested for analysis after 5 weeks (a2). b Scanning electron microscopy of the PLGA scaffold (b1) and cells cultured on the PLGA scaffold (b2) (magnification ×1000, scale bar = 200 μm). c Macromorphological observation of PLGA implants in the subcutaneous tissue at 5 weeks. d, e Immunofluorescence staining (d) and its quantification analysis (e) were performed to detect vWF expression (green) in the subcutaneous PLGA scaffold (magnification of upper images ×200, lower images ×400, scale bar = 50 μm). *P < 0.05.
Effects of Runx2 and VEGF signaling on the expression of BMP9 and Shn3 in hAMSCs

BMP9 routinely produces its marked physiological effects through the canonical BMP/Smad signaling pathway or non-canonical BMP/Smad signaling pathways, so we first examined whether Shn3 exerts any effect on the BMP/Smad signaling pathway. Western blot analysis showed that BMP9 significantly increased the phosphorylation of Smad1/5/8 (p-Smad1/5/8) without altering the level of total Smad1/5/8. Although Sim-Shn3 alone had no explicit effect on the expression levels of p-Smad1/5/8 and Smad1/5/8, it enhanced the effect of BMP9 on p-Smad1/5/8 expression in hAMSCs. Likewise, while overexpression of Shn3 alone did not affect the expression of p-Smad1/5/8 and Smad1/5/8, it markedly decreased the BMP9-induced level of p-Smad1/5/8 in hAMSCs (Fig. 8a, c1). MAPK signaling is a vital part of the non-Smad BMP pathway, and BMP9 has been reported to activate extracellular signal-regulated kinase 1/2 (ERK1/2) and p38 MAPKs; however, it was unclear whether Shn3 exerts any effect on the MAPK signaling pathway. The western blotting results revealed that the expression levels of p-Erk1/2 and p-JNK were higher in the BMP9+Sim-Shn3 group and lower in the BMP9+Shn3 group compared with the BMP9 group, whereas the expression level of p-p38 did not differ among the groups (Fig. 8b, c2-c4). These data suggest that the inhibitory effect of Shn3 on BMP9-induced osteogenic differentiation may be mediated by diminishing the BMP/Smad and BMP/MAPK signaling pathways.
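The western blot comparisons above rest on standard densitometry: each band is normalized to its β-ACTIN loading control and then expressed relative to a reference lane. A minimal sketch with hypothetical intensities (the band values are illustrative, not the authors' scans):

```python
def relative_density(band, actin, ref_band, ref_actin):
    """Densitometry: band intensity / loading control, expressed relative
    to the loading-corrected reference lane."""
    return (band / actin) / (ref_band / ref_actin)

# Hypothetical intensities: the treated lane carries 3x the
# loading-corrected signal of the reference lane.
rel = relative_density(band=900.0, actin=1000.0, ref_band=300.0, ref_actin=1000.0)
```

Dividing by the loading control first means unequal protein loading between lanes does not masquerade as a change in the target protein.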
The balance of bone metabolism depends on the interaction between blood vessels and osteocytes; angiogenesis and bone formation are coupled to each other through specific vascular forms and pathways. Recent studies have confirmed that VEGF participates in the initiation of angiogenesis and that Runx2 is a key regulator initiating BMP-induced osteoblast differentiation of MSCs. Thus, we further examined the relationship between Runx2 and VEGF as regulated by Shn3 and BMP9. Chromatin immunoprecipitation (ChIP) assay results showed that Runx2 can bind the promoter region of VEGF in hAMSCs transfected with BMP9+Sim-Shn3 (Fig. 8d-g). Immunoprecipitation (IP) and western blotting revealed that Runx2 interacts with VEGF in hAMSCs transfected with BMP9+Sim-Shn3. These results strongly indicate that silencing Shn3 may, at least in part, mediate the effect of BMP9 on enhancing angiogenesis and osteogenesis through the interaction between Runx2 and VEGF.
Discussion
Bone formation and angiogenesis are two closely related processes in bone development, remodeling, and repair 38,39. Vascular invasion is a prerequisite for the coupling of angiogenesis and bone formation. Neovascularization is not only a conduit for nutrient provisioning and bone tissue metabolism but also plays an active role in the regulation of bone formation 31,40. Therefore, determining the coupling effect between bone formation and angiogenesis is of great significance for bone regeneration. In the present study, we demonstrated the effects of Shn3 on BMP9-induced osteogenic and angiogenic differentiation in hAMSCs and identified a possible mechanism underlying this process. We found that basal expression of Shn3 is detectable in hAMSCs and that BMP9 can partially downregulate Shn3 expression. Silencing Shn3 potentiates BMP9-induced osteogenic factors in hAMSCs, whereas exogenous expression of Shn3 restrains BMP9-induced ALP activity and calcium deposition in hAMSCs, as well as ectopic bone formation. Meanwhile, we also analyzed the role of Shn3 in BMP9-induced angiogenic differentiation of hAMSCs and vascular invasion in vivo. We found that silencing Shn3 upregulates BMP9-induced angiogenesis-related factors and enhances subcutaneous vascularization. Although lumen size appeared to differ among groups, the vWF protein level detected by immunofluorescence staining was the primary indicator used to evaluate blood vessel formation. Mechanistically, we found that inhibition of Shn3 enhances BMP9-induced BMP/Smad signal transduction, as well as Runx2 and VEGF expression. Shn3 may exert this function through BMP/Smad signals; we also demonstrated that restraining Shn3 expression activates Runx2, which is capable of directly regulating the expression of the angiogenic factor VEGF.
These results strongly indicate that Shn3 may play a critical role in regulating the BMP9-induced coupling between osteogenesis and angiogenesis in MSCs, which may be mediated at least in part by Runx2 and VEGF signaling.
MSCs are multipotent, self-renewing, undifferentiated cells. Bone marrow-derived MSCs (BMSCs) are considered a favorable source of MSCs; however, obtaining BMSCs requires an invasive extraction procedure 41. MSCs have attracted much attention for cell-based therapy in clinical settings. Recently, the placenta has gained attention because of its abundance and availability 42. The placenta is a feto-maternal organ that is discarded after delivery and can be acquired without invasive procedures, making it a favorable source with no ethical limitations 43. hAMSCs are derived from the amniotic membrane on the surface of the placenta. Compared with MSCs originating from other tissues, hAMSCs have many advantages: their non-invasive and convenient collection has led to wide use in trauma, neurological diseases, and spinal cord injury. Notably, hAMSCs have lower DNA methylation levels, which contributes to more congruent overlap with the human genome 44,45. Owing to the placenta's high vascularization, hAMSCs include early progenitors of hemangiogenic cells. In our study, flow cytometric results showed that the hAMSCs were negative for the cell surface markers CD11b and HLA-DR, giving hAMSCs low immunogenicity. Thus, we chose hAMSCs as seed cells capable of differentiating into osteoblasts and supporting angiogenesis.
Although osteogenic differentiation is mediated primarily by MSCs, numerous factors are likely involved, such as BMPs, Runx2, VEGF, TGF-β, and insulin-like growth factor 46,47. In addition, many tissue types exist in bone, including vascular endothelium, connective tissue, and autonomic and sensory nerves, which together form a favorable milieu for bone formation 48,49. BMPs belong to the TGF-β superfamily and are recognized as playing critical roles in regulating bone formation and proliferation 50,51, as well as angiogenesis 52,53. Several BMPs can commit MSCs to osteoblast lineages, such as BMP2, BMP4, BMP6, and BMP7 54,55. In a recent study, we reported that BMP9 (also termed growth differentiation factor 2) is one of the most potent of the 14 types of BMPs in inducing osteogenic differentiation 19,56. BMP9 usually produces its marked effect through BMP/Smad signaling, comprising the canonical BMP/Smad pathway and non-canonical BMP/Smad pathways such as the ERK and p38 MAPK pathways 57. In the canonical BMP/Smad pathway, BMP9 activates the corresponding R-Smads (Smad1/5/8) by phosphorylation, and phosphorylated Smad1/5/8 (p-Smad1/5/8) recruits Smad4 to form a complex. The complex then shifts to the nucleus, where it regulates the expression of downstream targets 58. Beyond this, many factors or signals are implicated in regulating BMP9-induced osteogenic and angiogenic differentiation, including VEGF, Runx2, and fibroblast growth factor (FGF) 59. Our results showed that silencing Shn3 increases the phosphorylation level of Smad1/5/8, suggesting that Shn3 participates in the BMP9-induced Smad signaling pathway. For the non-canonical BMP/Smad pathway, silencing Shn3 significantly potentiated the level of phosphorylated ERK1/2.
Taken together, we conclude that Shn3 inhibits the BMP9-induced osteogenic differentiation of hAMSCs by blocking osteogenic gene expression and downregulating the BMP/Smad and BMP/MAPK signaling pathways.
Shn3 is a large zinc finger protein that plays a vital role in embryogenesis as a critical nuclear factor for the DPP signaling pathway, the Drosophila homolog of BMP/TGF-β 23. Shn3 was identified as a DNA-binding protein of the heptameric recombination signal sequence and is one of the mammalian homologs of Drosophila Shn 60. Shn3 not only functions as an adaptor protein in the immune system that interacts with nuclear factor-κB to regulate tumor necrosis factor-α and interleukin-2 but also functions in regulating bone formation 21,61. It has been reported that Shn3-mutant (Shn3^−/−) mice exhibit a conspicuously high bone mass phenotype regulated by a multimerized complex containing Shn3, Runx2, and NEDD4 (an E3 ubiquitin ligase)-WWP1 25. This complex is regulated by Shn3 and inhibits Runx2 function. Hence, the absence of Shn3 results in elevated levels of Runx2 protein and potentiated transcriptional activity of Runx2, which profoundly increases bone formation 62.

Fig. 8 Effect of Shn3 and BMP/Smad signaling on the expression of Runx2 and VEGF in hAMSCs. a-c Western blot and quantification analysis (c) were adopted to determine the effects of Shn3 and Sim-Shn3 on the protein levels of p-Smad1/5/8 and Smad1/5/8 (a), and p-Erk1/2, Erk1/2, p-p38, p38, p-JNK, and JNK (b) under the indicated treatments at 5 days after transfection. β-ACTIN served as the loading control. The same blots of p-Smad1/5/8 and Smad1/5/8, p-Erk1/2 and Erk1/2 (b1), p-p38 and p38 (b2), and p-JNK and JNK (b3) are used for the β-ACTIN control. A total of 12 gels were run and 12 blots were made. d-g ChIP assay analysis shows the interaction between VEGF and the promoter region of human Runx2 in hAMSCs (PP1, primer pair 1; PP2, primer pair 2; PP3, primer pair 3; PP4, primer pair 4) (d). The results show that VEGF is a direct target of Runx2 regulated by Sim-Shn3 in BMP9-induced hAMSCs. The hAMSCs were transfected with Ad-BMP9 or Ad-BMP9+Ad-Sim-Shn3 for 36 h followed by formaldehyde crosslinking. The pulled-down complex was detected by gel electrophoresis, and the locations of the primers used for the ChIP assay in the Runx2 promoter region are shown (e). The crosslinked cells were lysed and subjected to enzymolysis and immunoprecipitation with Runx2 antibody or IgG antibody (f). IP assay results show the enhanced interaction between Runx2 and VEGF in the BMP9+Sim-Shn3 group of hAMSCs compared with the BMP9 group (g). *P < 0.05. h The role of Shn3 in regulating Runx2 and VEGF in BMP9-induced angiogenesis-osteogenesis coupling during hAMSC-mediated bone formation. Shn3 is able to inhibit the expression of Runx2 through BMP9-mediated BMP/Smad1/5/8 signaling, thereby suppressing osteogenic marker factors; silencing Shn3 accordingly enhances osteogenesis. Silencing Shn3 could also increase the level of the angiogenesis-related factor VEGF induced by BMP9. VEGF is an essential downstream target of Runx2 that can regulate the differentiation of skeletal progenitor cells into osteoblasts both in vitro and in vivo, and VEGF is directly activated by Runx2 in MSCs. Our preliminary results and previous reports all support the view that silencing Shn3 not only positively regulates BMP9-induced osteogenesis of MSCs through Runx2 but also upregulates VEGF, which is directly activated by Runx2, forming the osteogenesis-angiogenesis coupling that promotes osteogenic differentiation and calcium deposition and eventually contributes to bone regeneration.

In our study, silencing the expression of Shn3 significantly increased BMP9-induced ALP activity and late osteogenic differentiation, and the RT-qPCR results showed that the osteogenesis-related factors Runx2, BSP, COL-1, and OSX were markedly upregulated by inhibiting the expression of Shn3.
These results may indicate that silencing Shn3 inhibits the WWP1-complex-dependent E3 ubiquitin ligase, thereby elevating the transcription of Runx2 and further upregulating the target genes of Runx2.
In addition, VEGF is broadly expressed by cranial neural crest cells and plays multiple roles in regulating cell proliferation, vascularization, and bone formation, including endochondral as well as intramembranous ossification. VEGF is also widely known to induce angiogenesis 63,64 . A variety of factors are involved in the process of angiogenesis: the main pro-angiogenic factors are VEGF, basic FGF, the TGF-β family, and HIF1α 65,66 , while other angiogenic components include angiopoietin (ANGPT1), CD31, and vWF 67,68 . VEGF and its receptor VEGF-R are key regulators in the molecular cascade that ultimately leads to the development of the vasculature. Therefore, VEGF is a key regulator of angiogenesis and plays a significant role in bone repair and development. Our results showed that silencing Shn3 enhanced the BMP9-induced angiogenic differentiation of hAMSCs and upregulated the mRNA expression of VEGF, ANGPT1, CD31, and vWF. In addition, silencing Shn3 markedly potentiated the vascular invasion of hAMSCs in vivo. These results strongly suggest that Shn3 not only regulates osteogenic differentiation but also controls angiogenesis. Nevertheless, how can the coupling between osteogenesis and angiogenesis regulated by Shn3 be explained? It has been reported that the DNA sequence recognized by Runx2 is 5′-PuACCPuCA-3′ (complementary sequence 5′-TGPyGGTPy-3′), which is found in the promoters of a variety of proteins, including COL-I, OPN, OCN, and BSP; Runx2 binding enhances the expression of these proteins and goes on to promote osteogenic differentiation and bone formation 69,70 . We therefore conducted a ChIP assay, and the results showed that silencing Shn3 increased the transcriptional activity of Runx2 in BMP9-induced hAMSCs, which could directly bind to the promoter of VEGF (Fig. 8h).
Taken together, these results suggest that Shn3 exerts a coupling effect between osteogenic differentiation and angiogenesis in BMP9-induced hAMSCs by reinforcing the transcription activity of Runx2 and subsequent regulation of the VEGF expression.
In summary, our findings suggest that silencing Shn3 can promote BMP9-induced early and late osteogenic differentiation as well as angiogenesis both in vitro and in vivo, likely by enhancing the activity of the BMP/Smad and BMP/MAPK signaling pathways. Inhibition of Shn3 plays a coupling role by regulating the key osteogenic factor Runx2, which activates its downstream target VEGF to promote osteogenesis and angiogenesis in BMP9-induced hAMSCs.
Isolation and cultivation of hAMSCs
This research was approved by the Research Ethics Committee of the First Affiliated Hospital of Chongqing Medical University. Human placentas were obtained from the Obstetrics Department of the First Affiliated Hospital of Chongqing Medical University. hAMSCs were isolated from six full-term puerperants, and informed consent was obtained from all of the patients prior to their participation. For the isolation of hAMSCs, the amnion was first dissected bluntly from the placenta as previously described 13 . The amnion samples were then washed three times with phosphate-buffered saline (PBS) and transferred to sterile containers at 4°C in a laboratory facility. The amnion was washed three times in a sterile dish with PBS containing 1% penicillin and streptomycin, and the tissue was minced into 1-2 mm³ pieces with sterile scissors. Digestion was conducted twice with 0.05% trypsin and 0.01% ethylenediaminetetraacetic acid disodium salt (EDTA-2Na) for 30 min each and terminated by the addition of medium; the pieces were then incubated for 1-2 h at 37°C with 0.75% collagenase type II in low-glucose Dulbecco's modified Eagle's medium (LG-DMEM) with 1% penicillin and streptomycin, until the pieces were indistinguishable. hAMSCs were collected into 50 mL centrifuge tubes by passing through a 300-mesh filter. Cells were centrifuged at 1500 rpm for 6 min, resuspended at a density of 10 × 10⁴ cells/mL in LG-DMEM with 10% fetal bovine serum (FBS), 1.176 g NaHCO₃, 1% penicillin and streptomycin, 1% L-glutamine, and non-essential amino acids, and plated in 10-cm-diameter dishes at 37°C under 5% humidified CO₂. The medium was refreshed every 3 days, and unattached cells were removed with PBS. In each experiment, when the cells reached 80% confluence they were digested with 0.125% trypsin/0.01% EDTA-2Na for 3 min and passaged at a ratio of 1:2 or 1:3 for subculture; only cells between passages 3 and 5 were used for subsequent experiments.
Phenotypic identification of hAMSCs
The cell markers of P3 hAMSCs were detected using flow cytometry. P3 hAMSCs were seeded at a density of 2 × 10⁶ cells/mL in a 6-well plate. After the cells reached 90-100% confluence, they were digested and collected as a 100-µL cell suspension, which was transferred to a flow cytometric tube. Cells were incubated with fluorescein isothiocyanate (FITC)-conjugated anti-CD90, phycoerythrin (PE)-conjugated anti-CD44, peridinin chlorophyll protein-conjugated anti-CD105, and allophycocyanin-conjugated anti-CD73. The cells were incubated with the negative control antibodies, including PE-conjugated anti-CD34, anti-CD19, anti-CD45, anti-CD11b, and anti-HLA-DR, for 30 min in the dark, washed by the addition of 2 mL of flow buffer, and centrifuged at 1200 × g for 5 min. The supernatant was removed, and the cells were re-suspended in 250 µL of flow buffer. Flow cytometry in conjunction with the C6 Plus Workstation and Software (BD Accuri™ C6 Plus, BD, USA) was used to analyze hAMSC surface marker expression. CK-19 and vimentin expression of hAMSCs was determined by immunofluorescence. Goat IgG served as the isotype control and was added to eliminate non-specific staining. P3 hAMSCs on cover slips in six-well plates were fixed with 4% paraformaldehyde, and PBS-Tween-20 (PBST) was used to wash the cover slips. Cells were blocked with Lowlenthal serum for 30 min, then incubated with purified primary anti-CK-19 and anti-vimentin antibodies overnight (12 h) and then with secondary FITC-labeled antibodies for 2 h. Cell nuclei were counterstained with 2-(4-amidinophenyl)-1H-indole-6-carboxamidine (DAPI) at room temperature for 5 min. The results were observed by inverted fluorescence microscopy.
Multidirectional differentiation potential of hAMSCs
P3 hAMSCs were seeded at a density of 10⁵ cells/mL in a 6-well plate. After the cells reached 50-60% confluence, osteogenic differentiation was induced by cycling the cells through a series of media with a StemPro™ Human Osteogenesis Differentiation Kit (Gibco™, USA) according to the manufacturer's instructions for 14 days, after which osteogenesis was analyzed. The osteogenic differentiation results were assessed by Alizarin Red S staining (0.2%, pH 8.3) (Solarbio, Beijing, China). For chondrogenic differentiation, hAMSCs were cultured with MSC chondrogenic differentiation basal medium (Cyagen Biosciences, Shanghai, China) for 14 days and assessed by Alcian Blue staining (1%) (Solarbio, Beijing, China). For adipogenic differentiation, cells were cultured with human MSC adipogenic differentiation basal medium (Cyagen Biosciences, Shanghai, China) for 21 days, and Oil Red O (0.5% in isopropanol) (Solarbio, Beijing, China) staining was conducted to detect adipogenic differentiation, including intracellular lipid droplets.
Determination of hAMSC proliferation by CCK-8 assays
When P3 hAMSCs reached 80% confluence, cells were collected and suspended at 10⁵ cells/mL, and 100 µL of cell suspension was added to a 96-well culture plate. Cells were successively cultured for 7 days, with five replicate wells for each day. Viability was evaluated in all the wells using CCK-8 assays. The results were recorded by a microplate reader (Thermo Scientific™, USA) at an absorbance of 450 nm. Growth curves were drawn, and the cell proliferation activity was analyzed.
Immunofluorescence stain assay
Cells were seeded onto sterile cover slips in a Corning 12-well culture plate at a density of 10⁴ cells/mL and treated according to the experimental design. At the indicated time point, cells were washed three times with PBS for 10 min each, fixed with 4% paraformaldehyde at 37°C for 15 min in a thermostatic water bath, washed with PBS for 10 min, and then permeabilized using 0.4% Triton X-100 for 30 min at 37°C. After blocking with goat serum for 30 min, cells were incubated overnight with the primary anti-CK19 (ab52625, Abcam, Cambridge, MA, USA), anti-vimentin (ab193555, Abcam, Cambridge, MA, USA), anti-CD31 (ab134168, Abcam, Cambridge, MA, USA), anti-VEGF (ab32152, Abcam, Cambridge, MA, USA), and anti-vWF (ab6994, Abcam, Cambridge, MA, USA) antibodies, followed by incubation with the corresponding fluorophore-conjugated antibodies for 60 min; cells were then washed with PBST for 10 min each and stained with DAPI for 5 min. The cover slips were carefully removed and mounted on slides with glycerol. The same protocol was performed in the negative control groups except that the primary antibodies were omitted. The slides were observed by confocal microscopy (DFM-80C, Nikon, Japan), and images were assessed with the Nikon auxiliary systems. The immunofluorescence results were quantified using the Image Pro Plus software.
Recombinant adenovirus construction
The recombinant adenoviruses were generated with AdEasy technology as described previously 71,72 . Briefly, the coding regions of RFP, BMP9, and Shn3 (HIVEP3, human immunodeficiency virus type I enhancer binding protein 3) were amplified with the RT-qPCR and cloned into adenoviral shuttle vectors and used to generate recombinant adenoviruses in HEK-293 cells subsequently. The siRNA target sites against mouse Shn3-coding region were cloned into the pSES adenoviral shuttle vector to create recombinant adenoviruses. The resulting adenoviruses were designated as Ad-BMP9, Ad-Shn3, and Ad-Sim-Shn3. The Ad-BMP9 expresses green fluorescent protein, while Ad-Shn3 and Ad-Sim-Shn3 express RFP as a visual tag for monitoring infection efficiency. The analogous adenovirus expressing only monomeric RFP (Ad-RFP) served as a control.
ALP staining and activity
Cells were seeded in 24-well plates at a density of 30-40% confluence and treated as per the experimental design. ALP activities of cells were determined by a modified Great Escape SEAP Chemiluminescence Assay (BD Clontech) and a histochemical staining assay (solution containing 0.1 mg/mL naphthol AS-MX phosphate and 0.6 mg/mL Fast Blue BB salt) as described 35,73 . For the chemiluminescence assay, each assay was performed in triplicate, and the results were reproduced in at least three independent experiments. ALP activities were normalized to the total cellular protein concentrations of hAMSCs and are expressed as mean ± SD.
Alizarin Red S staining and calcium quantification assay
Cells were inoculated at a density of 30-40% confluence in 24-well plates and treated as per the experimental design. Cells were cultured with conditioned medium containing 50 mg/L vitamin C, 0.1 μmol/L dexamethasone, and 10 mmol/L β-glycerophosphate disodium for 14 and 21 days. The mineralization nodules were assessed by Alizarin Red S staining as described previously 74,75 . In brief, cells were fixed with 0.05% (v/v) glutaraldehyde at 37°C for 15 min and washed three times with PBS; the mineralized nodules were then incubated with 0.4% Alizarin Red S for 10 min, followed by careful washing with distilled water. The calcium deposits were observed under a microscope. For quantification, Alizarin Red S was dissolved with 10% acetic acid and the absorbance was detected at 405 nm with a microplate reader as described previously 76,77 . All measurements were performed in at least three independent experiments.
Reverse transcription and quantitative polymerase chain reaction
Total RNA was extracted with RNAiso reagents (TAKARA, Japan), and cDNA was obtained from the total RNA using a reverse transcription (RT) reaction kit (RR047a, TAKARA, Japan). The products were diluted 5-10-fold and used as templates for detection by RT-qPCR. All samples were normalized to the level of glyceraldehyde-3-phosphate dehydrogenase. The amplification conditions included pre-denaturation at 95°C for 30 s, denaturation for 5 s, and annealing at 60°C for 30 s. All samples were run in triplicate. The PCR primers used in this study are provided in Supplementary Table 1. The relative expression levels of mRNAs were analyzed using the 2^−ΔΔCt method.
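The relative-quantification step above (the 2^−ΔΔCt method) can be sketched as follows; the cycle-threshold values below are hypothetical and serve only to illustrate the arithmetic, not data from this study:

```python
# Illustration of the 2^-ΔΔCt method for relative gene expression.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Expression of a target gene relative to an untreated control,
    normalized to a reference gene (e.g. GAPDH)."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values (not from the paper):
print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0 (four-fold upregulation)
```

A lower Ct means earlier amplification, so a negative ΔΔCt corresponds to higher expression in the treated sample.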
Protein harvest and western blotting
Cells were seeded in six-well plates and treated as per the experimental design. Total protein was obtained after lysis, and cleared lysates were denatured by boiling for 10 min. Proteins were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis in Tris-glycine buffer as described previously 78 ; the PageRuler Plus Prestained Protein Ladder (26619; Thermo Scientific, USA) was used to evaluate the bands based on molecular weights ranging from 10 to 250 kDa. Proteins were transferred carefully onto polyvinylidene difluoride (PVDF) membranes under dark conditions, and the PVDF membranes were blocked with 5% evaporated milk for 2 h and incubated overnight with primary antibodies against OCN (ab13421, Abcam, USA), OPN (ab8448, Abcam, USA), and Runx2 (ab192256, Abcam, USA). After washing, the membranes were probed with a fluorescently labeled secondary antibody, and immunoreactive signals were captured using a Bio-Rad imaging system. In addition, the membranes were incubated with a monoclonal mouse anti-human β-ACTIN antibody (66009-1; Proteintech) as a loading control. Relative band intensity was measured using the ImageJ analysis software.
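The densitometry normalization named above (relative band intensity against the β-ACTIN loading control) amounts to a per-lane ratio; the readings below are hypothetical, for illustration only:

```python
# Hypothetical densitometry readings (arbitrary units), not values from the
# paper; each target band is divided by the β-ACTIN band of its own lane.
def relative_band_intensity(band, actin):
    return band / actin

bands = {"OCN": 1500.0, "OPN": 900.0, "Runx2": 2100.0}
actin = {"OCN": 1000.0, "OPN": 1200.0, "Runx2": 1400.0}

normalized = {p: relative_band_intensity(v, actin[p]) for p, v in bands.items()}
print(normalized)  # -> {'OCN': 1.5, 'OPN': 0.75, 'Runx2': 1.5}
```

Normalizing per lane compensates for unequal protein loading before comparing groups.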
Stem cell implantation and ectopic ossification
hAMSCs were transfected with the specific adenoviruses and, once fluorescence was visible, harvested for subcutaneous injection (5 × 10⁶ cells per injection) into the flanks of athymic nude (nu/nu) mice (4-6-week-old males, Harlan Sprague-Dawley). At 4 weeks after injection, the animals were euthanized, and the bony masses were collected for micro-CT imaging and histologic evaluation.
Micro-CT imaging analysis and hematoxylin and eosin (H&E), Masson's trichrome, and Safranin O-fast green staining
Animals were euthanized 4 weeks after injection, and the retrieved bone masses were scanned with a Skyscan1174 X-Ray Microtomograph (micro-CT) (Bruker, Belgium). NRecon software was used for 3D image reconstruction, and all image data analysis was performed using the CT-AN software. Retrieved bony masses were decalcified with EDTA and then processed for paraffin embedding. BV/TV (%), Tb.N, Tb.Sp, Tb.Th, and BMD were measured.
The retrieved bone masses were decalcified, washed with PBS three times, fixed in 4% paraformaldehyde overnight at 37°C, and embedded in paraffin. Serial sections of the embedded bone masses were stained with H&E, Masson's trichrome, or Alcian Blue as previously described 73,79 .
HUVEC cell tube-formation assay
Tube-like structures of HUVECs were developed on growth factor-reduced Matrigel (BD Bioscience, USA) in conditioned media and assayed using Transwell plates with polycarbonate filters (pore size: 4 μm). Before the experiment, the Matrigel and sterilized tips were chilled at 4°C overnight. Twenty-four-well Transwell plates were coated with a suspension of 200 μL Matrigel and 200 μL complete medium according to the manufacturer's instructions. HUVECs were cultured in endothelial conditioned medium containing 10% FBS, 2 mM L-glutamine, 1 mM sodium pyruvate, 100 U/mL penicillin, 100 μg/mL streptomycin, and 1% ECGS (ScienCell, CA, USA) for 12 h and plated onto the lower layer of the Transwell with diluted Matrigel at a density of 2 × 10⁵ cells/mL/well. The cells treated as per the experimental design were then loaded into each of the upper wells. The Matrigel in the Transwell culture was incubated at 37°C and 5% CO₂ for 6 h. HUVECs were stained using 2 μM calcein AM fluorescent dye (Solarbio, Beijing, China) (Fig. 6c). Tube formation was recorded under the microscope at 6 h and quantified by the number of tubes and the relative area of tubes, assessed from five fields of each well with Adobe Photoshop (Adobe, San Jose, CA, USA).
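The "relative area of tubes" readout above is essentially the fraction of the imaged field occupied by stained (bright) pixels. A minimal sketch of that thresholding step, on a toy intensity grid rather than a real calcein-AM micrograph:

```python
# Toy fluorescence field: tube pixels bright, background dark. The values
# and the threshold are illustrative only, not from the paper.
image = [
    [0, 0, 200, 210, 0],
    [0, 190, 205, 0, 0],
    [0, 180, 0, 0, 0],
]
threshold = 100  # hypothetical intensity cutoff separating tubes from background

bright = sum(px > threshold for row in image for px in row)
total = sum(len(row) for row in image)
relative_area = bright / total  # fraction of the field covered by tubes
print(round(relative_area, 3))  # -> 0.333
```

In practice the same fraction would be computed per field and averaged over the five fields per well.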
In vivo implantation of PLGA-hAMSC hybrids to evaluate angiogenesis
Cells were seeded at a density of 5 × 10⁵ cells/mL on PLGA scaffolds (diameter 3.5 mm, thickness 200 μm; Foshan Lepton Precision Measurement And Control Technology Company, Guangdong, China) for 24 h. Eighteen mice (6-week-old males; BALB/cAnN, Beijing, China), weighing 18-25 g, were anesthetized with 1% pentobarbital sodium (30 mg/kg), and the cell-PLGA constructs were implanted into the dorsal subcutaneous position. Procedures for the animal study were approved by the Institutional Animal Care and Ethics Committee of the First Affiliated Hospital of Chongqing Medical University. The mice were euthanized 5 weeks after implantation surgery, and the cell-PLGA composites were retrieved and fixed in 4% paraformaldehyde solution; immunohistochemical staining was then performed for vWF. Sections were incubated with the primary antibody against mouse vWF (ab6994, Abcam, USA), followed by incubation with the corresponding fluorophore-conjugated antibodies. The staining results were observed by inverted fluorescence microscopy (Oly 3800; Olympus), and the images were analyzed using an Olympus auxiliary system.
ChIP assay
Subconfluent hAMSCs were seeded in T75 flasks and infected with Ad-RFP or Ad-Sim-Shn3. The cells were crosslinked after 48 h of infection. The cells were subjected to the ChIP analysis according to the manufacturer's instructions. The cells were incubated with a monoclonal rabbit anti-human VEGF (Anti-VEGF Antibody, clone JH Sigma-Aldrich) antibody or IgG to pull down the DNA-protein complexes. The PCR primers and sequence of promoter used in this study are provided in Supplementary Tables 2 and 3. The presence of Runx2 promoter sequence was analyzed by three pairs of primers corresponding to the human Runx2 promoter region.
Statistical analysis
All quantitative experiments were performed in triplicate and/or repeated through three independent batches of experiments. Differences among groups were assessed using a three-way analysis, and the data are reported as the mean ± standard deviation. Statistical analyses were performed with the SPSS 14.0 software package; Fisher's exact tests and Student-Newman-Keuls q tests were used to identify significant differences among groups. Statistical significance was set at a level of P < 0.05 for all post hoc comparisons.
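As a simplified, self-contained illustration of the group-comparison step (a one-way layout with hypothetical triplicate values, not the full three-way analysis performed in SPSS), the ANOVA F statistic can be computed by hand:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate measurements for three treatment groups:
f = one_way_anova_f([[1.0, 1.1, 0.9], [2.0, 2.2, 1.8], [3.1, 2.9, 3.0]])
print(f > 1)  # a large F suggests a real difference among group means
```

The F value would then be compared against the F distribution (or handed to a post hoc test such as SNK) to obtain P.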
Experimental investigation of the local environment and lattice distortion in refractory medium entropy alloys
EXAFS analysis of pure elements and of binary and ternary equiatomic refractory alloys within the Nb-Zr-Ti-Hf-Ta system is performed at the Nb and Zr K-edges to analyze the evolution of the chemical local environment and the lattice distortion. Good mixing of the elements is found at the atomic scale. For some compounds, a distribution of distances between the central atom and its neighbors suggests a distortion of the structure. Finally, analysis of the Debye-Waller parameters shows some correlation with the lattice distortion parameter $\delta ^2$ and allows the static disorder in medium entropy alloys to be quantified experimentally.
Manuscript
Numerous studies have now evidenced that multicomponent concentrated alloys with a single solid solution phase, named "high entropy alloys" (HEAs) or "complex concentrated alloys" (CCAs), can be obtained. Due to the concept itself of mixing several elements with different radii, the local atomic structure is of utmost interest [1]. Indeed, a supposed severe lattice distortion has been extensively used to explain the properties of these materials, but investigation of such local effects is challenging.
Ab initio methods have been used, showing in most cases that long-range interactions are present.
Experimental validation of the lattice distortion suggested by calculation requires advanced techniques. High-resolution transmission electron microscopy (HRTEM) has recently been used to prove the existence of short-range ordering (SRO) in the MEA CrCoNi [5]. This had first been suggested by X-ray absorption measurements, evidencing that Cr bonds more favorably with Ni and Co [6]. These results are even more valuable as no structural distortion was evidenced by X-ray and neutron total scattering experiments [6]. Similar X-ray absorption studies have been performed on the Al8Cr17Co17Cu8Fe17Ni33 fcc alloy, revealing the presence of SRO and a variation of the electronic structure [7]. Comparing the Debye-Waller coefficients obtained by these two techniques on the CrMnFeCoNi composition measured at 50 K and at room temperature allowed a first estimation of the intrinsic static disorder due to the atomic size mismatch [8]. A similar approach using neutron total scattering measurements was performed by Owen et al. [9] in Ni-based alloys of increasing complexity: pure Ni was compared to Ni-20Cr, Ni-25Cr, Ni-22Cr, Ni-37.5Co-35Cr, and the equiatomic CrMnFeCoNi alloy, revealing a variation in the Debye-Waller coefficients. However, the authors rationalize their findings by a mere variation in the homologous temperatures.
In contrast with this experimental knowledge on fcc CCAs, and although ab initio calculations suggest that bcc HEAs would present a stronger lattice distortion [2], this latter family lacks systematic experimental investigation. Among the few studies that looked at them, Maiti et al. measured a high degree of lattice distortion, as well as Zr short-range clustering after annealing, in the ZrNbHfTa system [10].
Lattice distortion was also observed by Zou et al. in NbMoTaW by HRTEM [11]. Finally, a pair distribution function (PDF) study by XRD and neutron diffraction on the MEA ZrNbHf also showed lattice distortion, as the atoms' local environments are described with 15.5 nearest neighbors (NN) instead of the 14 first and second neighbors of the bcc structure [12].
This study thus proposes to systematically investigate by EXAFS the evolution of the local environment in MEAs, which are compared to the pure elements. Alloys with higher complexity (4- or 5-element) are not considered here, since the number of parameters to fit would be too large for the available signal; besides, ternary alloys may display a stronger lattice distortion than 5-element alloys [2]. The compositional system Ti, Zr, Nb, Hf, Ta was chosen, as the 5-element alloy based on these elements benefits from several studies suggesting that it forms a ductile solid solution [13][14][15][16][17].
Zr and Nb were chosen as references and represent two distinct crystalline structures at room temperature, namely hexagonal close-packed (hcp) and bcc. Besides, the choice of these two elements as references was guided by the accessibility of their K-edge energies and by the fact that their K-edges do not overlap with other edges, so that the signals can be analyzed without artifact.
Binary alloys and ternary alloys listed in table 1 were considered. All alloys have equiatomic compositions, which allows to characterize the local environment of Zr and of Nb in alloys of increasing chemical complexity.
Ingots of 15 g were prepared for the considered compositions. Arc-melting was performed in an Ar atmosphere on a water-cooled copper plate. The chamber was flushed twice with Ar prior to melting, and a Ti-Zr getter was used before melting the alloys. For the three-element alloys, master ingots of the constituent elements were first arc-melted separately three times and then arc-melted together in two melting steps, with the specimens flipped between these two steps. Alloys with only two elements were directly melted in three fusion steps. Induction melting was then performed for all the alloys in a water-cooled sectorized copper mold under helium to ensure chemical homogeneity in the ingots.
Finally, the ingots were cast by arc-melting to get a cylindrical shape. Alloys then underwent thermomechanical treatments described in the supplemental material (table S1). XRD was finally performed with a Panalytical X'Pert Pro diffractometer using the Co Kα radiation, to confirm that the structure was the expected one (hcp or bcc) and that the alloys were a single phase. Lattice parameters were extracted by Rietveld refinement using Maud [18]. Their composition was also confirmed by EPMA. Details are provided in table 2, and additional Rietveld refinement parameters can be found in supplementary materials (Table S2, and Figure S1). Specimens of optimal thicknesses regarding the absorption were prepared by manual polishing on SiC papers with grades 320 to 4000 (exact thickness detailed in supplementary material, table S1). The thin foils were then taped in between two bands of Kapton tape. A minimum of two samples per composition were produced.
Synchrotron measurements were carried out at the BAMline at Helmholtz-Zentrum Berlin in transmission mode at the Zr and Nb K-edges. The recording was done with a Si 111 monochromator, using a 1 eV step for the XANES. Three to six spectra were acquired for each sample to minimize statistical error (statistical error and error bars evaluated as recommended by the IXS Standard and Criteria Subcommittee [19]). The XANES normalization and EXAFS extraction were done using MAX-Cherokee [20], and EXAFS fits were done with MAX-Round Midnight [20], using a 3-12 Å⁻¹ range and a k³ weighting for all samples (more details below). For the NbZrTi sample we used the ifeffit code [21] to simultaneously simulate the Nb and Zr edges, in order to naturally restrain parameters such as the Zr-Nb distance and the Debye-Waller terms. We used theoretical phases and amplitudes calculated by the FEFF8 code [22]. Figure 1 shows the XANES spectra of all the compounds at the Zr K-edge (Figure 1a) and the Nb K-edge (Figure 1c). Since the edge is at the same energy for all the compounds (no shift observed), it is concluded that no change occurs in the Zr and Nb electronic structure and oxidation state. Therefore, the correction of the edge energy ΔE was determined for the pure compounds and applied to all the alloys measured at the corresponding edges. The EXAFS spectra are given in Figure 1b, d. Examples of EXAFS fits are plotted in supplementary materials (Figure S2).
The results of the best fits for each compound are presented in Figure 2 and Figure 3; the values presented in these figures are also listed in supplemental materials, together with details on possible constraints used during the fit procedure. Figure 2a shows the distance obtained between the central atom (Zr or Nb) and its neighbors (itself, or the alloying element(s)). Note that the error bars correspond to those of the fits and are not related to the above-mentioned resolution of 0.13 Å, which estimates the resolving power for identical scattering neighbors (e.g. two distinct Zr-Zr distance values instead of their weighted average associated with a larger Debye-Waller factor).
The distances between neighbors are systematically compared to the theoretical ones, calculated based on the atomic radii and indicated by crosses in the plot [32]. The Zr-Zr and Nb-Nb distances are in green and red, respectively, and the distance between the central atom X (X being Zr or Nb) and other elements are colored; X-Ti, X-Hf, X-Ta are in purple, dark green and blue, respectively.
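The theoretical reference distances mentioned above can be sketched as the sum of the two metallic radii; the radius values below are approximate tabulated values, included only for illustration:

```python
# Metallic radii in Å (approximate values, for illustration only; the paper
# uses its own tabulated radii [32]).
radii = {"Zr": 1.603, "Nb": 1.429, "Hf": 1.578}

def nn_distance(x, y):
    """Theoretical nearest-neighbor distance between atoms X and Y, taken
    as the sum of their metallic radii."""
    return radii[x] + radii[y]

print(f"Zr-Zr: {nn_distance('Zr', 'Zr'):.3f} A")
print(f"Nb-Hf: {nn_distance('Nb', 'Hf'):.3f} A")
```

Deviations of the fitted EXAFS distances from these hard-sphere estimates are what flag a local distortion.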
The results of Figure 2a show that reasonable and similar distances are obtained between the central atom and its NN for most compounds (such as NbTa); yet, for some others, such as the NbHf compound or ZrTiNb, differences in distances depending on the nature of the neighbor are found. Significant variations of NN distances depending on the NN's nature suggest that some alloys have a strong lattice distortion. Moreover, the non-equiatomicity of the mixing and possible short-range ordering, leading to very local compositional changes, might help interpret some properties of the alloys, such as solid solution strengthening [34]. The DW parameter σ², which measures the dynamic (thermal) and static (structural) pair distribution widths and is commonly used as a measure of structural disorder, is considered next.
First, the variation of σ² with respect to the T/Tm ratio (T being the temperature of the experiment and Tm the melting temperature, calculated with the TCHEA5 Calphad database) is plotted in Figure 3a. Indeed, the dynamic disorder, which is part of the σ² parameter, is related to the temperature.
Although the melting temperatures of the alloys vary between 1837 K (for ZrTi) and 3003 K (for NbTa), their T/Tm ratios only span the range 0.10-0.16. Figure 3a shows that NbTa and Nb, which have the highest melting temperatures and therefore the lowest T/Tm ratios, also logically have rather low σ² values, hence low dynamic or thermal disorder. However, as the T/Tm ratio increases (lower Tm), the σ² values taken by the various alloys do not display a clear trend and rather form a dispersed cloud of data. It is therefore hypothesized that, although temperature can have an impact, the T/Tm range of all these refractory compounds is narrow enough to assume a similar thermal disorder in all the compounds. One can also notice that σ² takes rather high values (between 0.008 and 0.016 Ų) compared to values published for fcc compounds, which can be explained either by a tendency of EXAFS to overestimate the DW parameter compared to total scattering techniques, as observed for CrMnFeCoNi [8], or by a larger disorder in these refractory compounds compared to 3d-element alloys [2].
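The narrowness of the homologous-temperature window quoted above can be checked directly from the two extreme melting points given in the text, assuming room temperature T ≈ 295 K for the measurements (an assumption, since the exact experimental temperature is not restated here):

```python
# Homologous temperature T/Tm for the two extreme melting points quoted in
# the text; T = 295 K (room temperature) is an assumption.
T = 295.0
melting_points = {"ZrTi": 1837.0, "NbTa": 3003.0}  # K, from the text

for alloy, tm in melting_points.items():
    print(f"{alloy}: T/Tm = {T / tm:.3f}")
```

Both ratios fall near the 0.10-0.16 bounds quoted in the text, supporting the assumption of comparable thermal disorder across the series.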
Next, the static disorder is addressed. In the field of HEAs, the atomic size mismatch parameter δ is conventionally used, defined as δ = √(Σᵢ cᵢ(1 − ri/r̄)²), where r̄ = Σᵢ cᵢri is the composition-averaged atomic radius. If differences of atomic radii translate into a globally broader pair distribution, we should expect σ² to scale as δ². σ² was thus plotted as a function of δ² in Figure 3b. The ri values used here are tabulated and ci was taken as the nominal composition of the element i [35,36].
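The δ definition above can be sketched numerically. In this minimal Python illustration the metallic radii are approximate values chosen for demonstration only, not the tabulated ones of refs [35,36]:

```python
def delta_squared(radii, concentrations):
    """Atomic size mismatch squared: delta^2 = sum_i c_i (1 - r_i/r_bar)^2,
    with r_bar = sum_i c_i r_i the composition-averaged atomic radius."""
    r_bar = sum(c * r for c, r in zip(concentrations, radii))
    return sum(c * (1 - r / r_bar) ** 2 for c, r in zip(concentrations, radii))

# Approximate metallic radii in pm (illustrative assumptions)
R = {"Ti": 147.0, "Zr": 160.0, "Hf": 159.0, "Nb": 146.0, "Ta": 146.0}

# NbTa: nearly identical radii, so delta^2 is essentially zero
d2_nbta = delta_squared([R["Nb"], R["Ta"]], [0.5, 0.5])

# TiZrHf: larger size mismatch, hence a larger delta^2
d2_tizrhf = delta_squared([R["Ti"], R["Zr"], R["Hf"]], [1 / 3, 1 / 3, 1 / 3])
```

With these radii, NbTa gives δ² ≈ 0 while TiZrHf gives a clearly larger value, consistent with the ordering of the DW parameters discussed below.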
The plot indeed shows a relationship between the two parameters, although care must be taken considering the various hypotheses detailed above as well as the number of available points. The σ² coefficient increases linearly as a function of δ². Pure compounds and compounds with similar atomic radii for their constituting elements have low σ², whereas compounds such as TiZrHf or NbHf have DW coefficients up to twice as large. One can notice that this increase of the disorder with δ² is observed independently of the crystal structure, as bcc (filled symbols) and hcp alloys (open symbols) follow the same trend and are aligned on the same line. Measuring σ² thus allows lattice distortion to be quantified more reliably than comparing the variations of distances between nearest neighbors. However, it is interesting to notice that the alloys with the largest DW coefficients, i.e. ZrTi, ZrTiHf, NbHf, and NbTiZr, are the ones that also show a meaningful difference between the NN distances measured by EXAFS and the theoretical ones, as well as a large dispersion between the distances depending on the nature of the NN (see Figure 2a), confirming the first hypothesis of a strong lattice distortion made with the results of Figure 2a.
This result of Figure 3b confirms that the atomic size mismatch increases the disorder at the local scale. It is also very interesting to notice that the disorder does not increase with the number of components, since binary alloys (square symbols) can have larger DW parameters than ternary alloys (triangles in Figure 3); see for instance TiNbTa and TiZr.
Finally, the DW parameter of the pure metals, where the static disorder is expected to be minimal since they are chemically pure, provides an estimate of the thermal disorder baseline, hypothesized to be constant based on Figure 3a (shown as a grey region in Figure 3b). Since the static and dynamic contributions add up in σ², any increase of σ² above this baseline represents additional static disorder, i.e. the alloy's distortion. Therefore, the difference between the σ² parameter of an alloy and that of the pure metals allows estimating the distortion taking place in the alloy.
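This baseline subtraction can be written as a one-line estimate. The numbers below are hypothetical, chosen only to fall within the 0.008-0.016 Ų range quoted above:

```python
def static_disorder(sigma2_alloy, sigma2_pure):
    """Static (distortion) part of the DW parameter, assuming
    sigma2_total = sigma2_thermal + sigma2_static with a constant
    thermal baseline estimated from the pure metals."""
    return sigma2_alloy - sigma2_pure

# Hypothetical values in Angstrom^2 (illustrative only)
sigma2_distorted_alloy = 0.014  # a strongly distorted alloy
sigma2_baseline = 0.008         # assumed thermal baseline from the pure metals

distortion = static_disorder(sigma2_distorted_alloy, sigma2_baseline)
```

For these illustrative inputs the static contribution comes out at 0.006 Ų, the same order as the maximum increase suggested for the present set of alloys.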
Considering the error bars, and assuming a constant thermal disorder, it is suggested that an increase of up to 0.006 Ų can be obtained for the present set of alloys. (Figure 3 legend: pure compounds are represented with circles, binary alloys with squares, and ternary alloys with triangles; open symbols correspond to compounds with hcp structure and filled symbols to compounds with bcc structure.)
In conclusion, by investigating the environment of Zr and Nb in 8 alloys and the pure metals by EXAFS, this study evidenced that mixing of the elements, sometimes non-equiatomic, is obtained down to the atomic scale in MEAs. Lattice distortion can first be approached roughly by comparing the X-X and X-Y distances, X being the central atom and Y the other element. When the X-X and X-Y distances between neighbors are strongly different, lattice distortion is expected, as for NbHf and NbTiZr. Finally, the analysis of the DW parameter's evolution allows a physical measurement of the lattice distortion in refractory MEAs, which appears to be proportional to the square of the theoretical parameter δ, independently of the crystal structure. Notably, these values do not increase with the number of components, and they are not necessarily maximal for the largest number of alloying elements, which goes against the lattice distortion principle of HEAs. This quantification with the DW parameter should be of great relevance for a better understanding of HEA properties and for modeling purposes.
|
2021-06-25T03:57:55.082Z
|
2021-06-24T00:00:00.000
|
{
"year": 2021,
"sha1": "48c967a8db2b1370edebd5dac4fcda138900c259",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "dbd097439982d640826c3b20b8ea16c951e6b172",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": []
}
|
55593176
|
pes2o/s2orc
|
v3-fos-license
|
The Psychology of Cows ? A Case of Over-interpretation and Personification Commentary on Marino and Allen ( 2017 ) The Psychology of Cows
Reviews of existing literature on topics that have been neglected, such as the subject of the cognitive and affective abilities of cows, are productive and necessary exercises in science (Elwen, Findlay, Kiszka, & Weir, 2011; Mulrow, 1994). These syntheses organize and integrate bodies of literature that have been relatively isolated from one another. If performed systematically and objectively, reviews can highlight areas of research that are in need of more information or identify areas that could be integrated in novel ways. The effort made by Marino and Allen (2017) to gather the extant, fragmented literature regarding the “psychology of cows” was timely and commendable. Most of the research on sensory abilities, learning and cognition, emotion, personality, and social complexity in cows has been conducted within applied contexts, which the authors considered to be a skewed representation based on the statement below:
And because these kinds of applied contexts continue to shape our understanding of cows from both a scientific and public perspective, it is all the more important to objectively assess cows on their own terms by trying to understand their psychology so that we might better align that knowledge with their welfare and interests. (p. 475)

Although Marino and Allen (2017) purport to provide an objective assessment of the available research, an independent review of their selected literature instead suggests that much of their article is based on over-interpretations and biased representations of the findings that were pertinent to their argument that cows should be considered as sentient individuals on par with elephants, primates, dolphins, and pigs. The purpose of this commentary is to highlight areas of concern, particularly with respect to the dangers of using science to advance anthropomorphic and biased objectives. Marino and Allen (2017) began their discussion of cows with a relatively inflammatory summary of the "horrific" conditions that cows experience as farmed animals, ending their evaluation with the following statement: "Given that cows are subjected to so many highly invasive and objectifying practices, the need to understand who they are - on their own terms - is long overdue." (p. 475). This statement, in conjunction with the above statement, illustrates one of the major and consistent weaknesses of this paper: the use of anthropomorphic language to elicit specific perceptions. For example, what is meant by the cows' "interests" (e.g., maybe to give or not give milk that day, or to choose which cow to graze next to)? Likewise, how can cows be "objectified"? Instead of prompting objective thought about whether the treatment of cows as objects for human exploitation is warranted, the term "objectified" elicits a very strong, negative emotional response. Words such as "intelligent," "like humans and primates," "complex cognition," "feats of memory," "attachments," "emotional contagion," "sophisticated abilities," "self-awareness," "self-efficacy," "mother-child bonds," "severe psychological and social impairments," and "distinct personalities" elicit specific perceptions that are not necessarily supported by the studies selected or interpreted by the authors. These issues will be examined across each section.
Cow Conditions
In the summary presented on the historical and current cow conditions, it is unclear if these practices were worldwide or specific to the United States (U.S.). The authors did not mention in their review that many of these practices are now governed by welfare laws. Although welfare requirements for farmed animals are much more stringent in European countries and Canada (Fraser, 2008; Veissier, Butterworth, Bock, & Roe, 2008), some of their criteria have filtered into U.S. practices today (Mench, 2008), an important fact that was missing from the introduction. Instead, the authors set their review of the scientific findings into a biased framework, being sure to highlight the "distressful and unnatural conditions" (p. 474) experienced by cows.
Learning Through Conditioned Associations -Is this Intelligence?
The first aspect of cow psychology discussed was their learning and cognitive abilities, which the authors equated almost instantly to intelligence, arguing that "Intelligence, arguably, refers to the quality of these mechanisms [learning and memory] in terms of rapidity, depth, and complexity. And there is always an interplay between 'higher-level' cognitive processes and those considered to be more basic (Shettleworth, 2010)" (p. 477). The authors then indicated that "Much of our current understanding of intelligence in cows has to be inferred from other areas of study, including social complexity and communication in other mammals" (p. 477). These statements were confusing, as the experiments initially presented were constrained to applied settings involving the testing of associative learning abilities in cows locating feeders, learning auditory cues for alarms or shocks indicating fence boundaries or visual cues to turn lights on and off, or performing actions to gain access to salt. All of these experiments assessed behaviors that were directly linked to basic survival needs of the cows. It is, therefore, not surprising that the cows conditioned quickly and that memories for these associations lasted for some period of time (i.e., up to six weeks in one study). Associative forms of learning involve basic mental representations but do not necessarily involve "robust" higher-order cognitive skills (as intimated in the quotation above). Rather, these conditioned associations tend to be mediated or stored in more primitive or sub-cortical areas of the brain, such as the cerebellum and amygdala (Lange et al., 2015; Jozefoweitz, 2014; VanElzakker, Dahlgren, Davis, Dubois, & Shin, 2014). Moreover, it is expected that these conditioned associations are maintained over time, as they are related to enhanced fitness outcomes (Domjan, Mahometa, & Matthews, 2012; Jozefoweitz, 2014). With this knowledge in mind, these findings are not particularly exciting and certainly did not deserve the statement made by the authors that cows "are capable of not only complex learning but feats of long-term memory" (p. 479).
The research regarding the discrimination and spatial cognition abilities of the cows also fails to support this conclusion as, once again, these abilities are likely innate skills that were hard-wired and not flexible behaviors (i.e., familiar individuals are likely related/kin and less aggressive so safer than the unfamiliar individual that smells funny, or this area of the pasture was tasty last time we visited).
Marino and Allen (2017) referred to the complex cognitive abilities of cows multiple times throughout this section, using evidence primarily from research involving conditioned associations (i.e., classical conditioning and operant conditioning paradigms). This strategy of describing basic cognitive abilities as "complex" is misleading and an overgeneralization of the available data. Take the following quote as an example: "These kinds of capacities [i.e., discrimination abilities] not only underlie the ability to recognize kin from nonkin and stranger from familiar individual, but also allow for finer discriminations of individual identity within one's social network" (p. 478). While discrimination tasks may require more "complex" abilities, as they could require the cows to hold at least two mental representations for comparison if a successive or match-to-sample discrimination training procedure was used, discriminations can also be solved by using basic associative rules when not controlled. Unfortunately, the experimental methodology used in these studies was not discussed in the current review, making it difficult for a reader to ascertain the validity of these conclusions. And how one gets to the conclusion that this basic discrimination ability enables a cow to recognize, identify, or distinguish itself as a distinct individual from other conspecifics in the herd is unclear, unless of course cows can self-recognize and identify relevant characteristics?
What Does "Moo" Mean? -Emotional Expression
The initial presentation of emotion as a construct and the available science, including animals other than cows, was comprehensive and objective until the following statement: "The literature on emotions in cows and other farmed animals is substantial and confirms that they experience a wide range of emotions and that some of those responses are quite complex" (p. 480). This statement was not explicitly supported by the authors. Instead, a reader needed to be familiar with the paper by Forkman, Boissy, Meunier-Salaun, Canali, and Jones (2007) to evaluate it. Similarly, based on the research reviewed by Marino and Allen (2017), the "wide range of emotions" cows experience seemed to be limited to two similar negative emotions (i.e., fear and anxiety) and an axis that represented the frustration-contentedness dimension, which happened to be investigated with an interesting and novel method, eye white percentage. Other emotions, such as surprise, anger, guilt, and joy, were not addressed, perhaps because they have not been measured. Regardless, objective scientists should not make generalizations without empirical data. Aside from the handful of emotions, most of this section described different methods by which internal states were assessed. Unfortunately, in Marino and Allen's discussion of the test that has typically been used to measure fear and anxiety in various animals (including rats, goats, cattle, and sheep), the Open-Field (or Novel Arena) Test, they failed to mention that this test has been argued to be unreliable based on the host of confounds present during a trial (e.g., novel environment, isolated testing, an open environment rather than closed/protected, duration in the arena) and is most likely not ecologically valid (Forkman et al., 2007). So instead of concluding that the expression of fear by cows has been difficult to measure, they argued that conclusions about cow emotion had been oversimplified:

". . . fear responses in this paradigm are not strongly correlated with fear in other situations. This overall finding demonstrates that fear responses in cows are shaped by diverse and complex factors and the idea of 'general fear' in cows is an over-simplification (Forkman et al., 2007)." (p. 480)

Fear is a basic emotion that is found across taxa and is elicited by innate stimuli and conditioned stimuli, pending individual experiences, and results in a multitude of physiological and behavioral responses that tend to correlate with one another. These responses do not require a complex, cognitive explanation, but may simply represent physiological responses to stimuli (e.g., VanElzakker, Dahlgren, Davis, Dubois, & Shin, 2014).
The discussion of the complex emotions was interesting, but once again over-generalized from the actual findings with anthropomorphic twists. Beginning with the discussion of the emotional reactions during learning, Marino and Allen (2017) indicated that cows might be self-aware and might react with a sense of self-efficacy, based on a study in which cows became more aroused/excited (heart rate increased and more vigorous movement; Hagen & Broom, 2004) when the cows improved significantly on an operant conditioning task. Whereas Hagen and Broom cautiously interpreted their findings as having "found some, albeit inconclusive, indication that cattle may react emotionally to their own learning improvement" (p. 203), Marino and Allen used language that again encouraged readers to relate cow behavior to their own sense of self and emotional excitement when solving a difficult task. This presentation represented an overly complex interpretation and use of human-centric concepts for a response that may be explained by conditioned associations with rewards. The section on cognitive bias also produced some consternation; why is it a complex emotional experience, given that cognitive biases may simply be the product of conditioned associations? Forming preferences for, or avoidance of, stimuli that are associated with survival would seem to be a basic, innate ability selected for its increased fitness benefits. A similar question exists regarding the reasoning behind emotional contagion being a complex emotion; contagion is an outcome based on the sharing of an emotion. Marino and Allen suggest that this experience, which they emphasize may possibly be the simplest form of empathy, is a basic building block for more complex expressions of emotions. Their entire argument for emotional contagion is built upon a single set of studies performed with a set of cows that began to produce increased levels of cortisol after being housed with stressed individuals. The question becomes whether empathy is the simplest explanation, or whether a conditioned association or an innate, physiological response could be a more appropriate explanation. The section on social buffering also suffered from the authors' tendency to overgeneralize from the available evidence. Marino and Allen stated, "As highly social mammals, cows demonstrate a strong response to their social circumstances, finding social isolation to be highly distressing and showing robust social buffering responses when they are together" (p. 483). This statement is made with no supporting references, suggesting that this information is general knowledge; an unfounded assumption. Of all the sections presented in this paper, the social buffering section has the most reasonable evidence to support the claims made by Marino and Allen. Yet, if this species truly is sociable, then one might expect social buffering to occur at the physiological level, suggesting that categorizing it as a complex social experience may be over-generous.
Social Complexity Simplified
The final section on social complexity contained consistent misrepresentations and incomplete ideas. The introductory material was accurate and comprehensive, reflecting a brief but thorough discussion of social complexity and its relevant constructs. Unfortunately, the subsequent review of the pertinent cow literature was incomplete, disjointed, and misleading. For example, Marino and Allen (2017) highlighted the current definition of social complexity as consisting of a number of differentiated relationships with other conspecifics, which transcend group size and physical proximity or synchronized activities. Yet, the bulk of the evidence used in this section relied on group size and network models based on physical proximity and synchronized activities of cows in a pen measured by radio collars (Gygax, Neisen, & Wechsler, 2010). Although this is a common technique for assessing associations in gregarious animals, the primary issue is that Gygax et al. used two very human-oriented terms: attachment and avoidance relationships. For readers familiar with attachment theory, these terms have very specific definitions and criteria: (1) attachments are bonds in which two individuals receive emotional support from one another when together and experience distress when separated, and (2) avoidance refers to bonds in which limited social support is expected and thus little distress or comfort is experienced by the individuals involved. Unfortunately, these were not the definitions used by Gygax et al. Attachment was defined by animals that were near one another and engaged in the same activity, and avoidance was defined as animals that were in different areas of the pen and asynchronous in activities. Intentional or not, Marino and Allen perpetuated this misrepresentation by continuing to use that language instead of acknowledging the limitation. Marino and Allen (2017) then discussed what hierarchies would entail in species that formed them. Unfortunately, in this discussion, they never actually indicated whether cows have hierarchies, only mentioning that cows form matrilineal groupings. After reviewing this literature, it appears that hierarchies are somewhat present in cows, with the oldest female "leading" the group and bulls and steers competing with one another based on level of testosterone when housed together (Bouissou, Boissy, Le Neindre, & Vessier, 2001), information that may have been helpful in supporting their point. A similar concern exists for Marino and Allen's conclusions on a handful of studies about bonding and alliances. Although the mother-offspring section was relatively accurate (with the exception of the use of the term "mother-child" bond when discussing non-human animals), alliances were actually never discussed but rather were inferred from the "…lasting social bonds, both with their offspring and their herd members" (p. 489), a statement that was not substantiated clearly. Herd members tended to be related or familiar conspecifics that were raised together and spent time in close proximity. However, proximity is not enough to evaluate if a bond exists. Responses to separations or distress and seeking comfort from others are measures needed to evaluate these claims. It is also unclear if this research has been conducted with adult members of a herd. Thus, the use of "alliances" in this section is unfounded. Despite the incompleteness of this section, the authors concluded that, based on the evidence they examined, cows displayed broad parameters of social complexity such that "They have demonstrated knowledge about conspecifics and the exchange of relevant social knowledge with conspecifics. Through dominance hierarchies and affiliative bonds, they have demonstrated knowledge about conspecifics and of their own social interactions with them" (p. 490). By my assessment, none of these statements were supported by the evidence presented. The simplest conclusion currently is that cows form herds, thus meeting the criteria for gregariousness and sociality, but it is not clear if they are as sociable with each other as implied by Marino and Allen in their conclusions.
So, What About a Cow's Psychology?
Ultimately, the areas of research proposed for the learning and cognitive abilities of cows (i.e., object permanence, numerosity, time perception), their emotional capabilities, social complexity, and even personality are reasonable. The goal of comparative psychology is to understand the similarities and differences in behaviors across similar and disparate species. Marino has attempted to do this by presenting the same type of reviews for pigs (Marino & Colvin, 2015), chickens (Marino, 2017), and now cows. By pursuing topics in a comparative fashion, we begin to understand the functions, evolution, development, and mechanisms involved. This knowledge should be used to understand the origins or pressures that produced these abilities such that future outcomes may be predicted. Similarly, this knowledge should be used to enhance the current and future welfare of animals in human care. As scientists, it is our responsibility to vet this knowledge using systematic, parsimonious, objective, reliable, and valid approaches. The misrepresentation of information leads to biased, and potentially wrong, bodies of knowledge that are very difficult to modify once established (e.g., vaccines, animal/plant sentience, climate change). As scientists, we must be the critical thinkers and ask the hard questions, but we must also be willing to represent the facts accurately, not as we wish them to be, as is the case with many of the points in "The Psychology of Cows."
|
2018-12-11T07:53:56.907Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "97eb6d22dffefbf88bc4b3a8a1a939c0ca682d98",
"oa_license": "CCBY",
"oa_url": "https://www.animalbehaviorandcognition.org/uploads/journals/17/AB&C_2017_Vol4(4)_Hill.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "97eb6d22dffefbf88bc4b3a8a1a939c0ca682d98",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
91929153
|
pes2o/s2orc
|
v3-fos-license
|
Studies on Genetic Parameters for Diversified Uses in Sugarcane (Saccharum spp.)
Sugarcane is an important cash crop of India. In India it is grown in sub-tropical and tropical climatic regions. Sugarcane crop serves as the major source for a variety of products such as sugar, jaggery, molasses, bagasse and filter cake, out of which sugar and jaggery are meant for daily use as consumable products while other byproducts have industrial significance. It is realized that sugar production alone will not be able to make the industry profitable and under such circumstances diversification is a necessary consequence for the successful growth of the industry. Sugarcane, an important bio energy crop, belongs to the category of C4 plants which convert solar energy effectively into high quality and low cost raw materials for sugar and ethanol (Bruce et al., 2005). Molasses and bagasse are the byproducts of the sugar industry which form the feedstock for ethanol production and cogeneration respectively. Generally the main objective of sugarcane breeding is to develop varieties capable of producing high sugar yields per unit land area. The recent awareness of the advantages of using green fuel for generation of power and use of gasohol to reduce automobile emission have resulted in the setting up of a number of cogeneration plants and distilleries in various sugar mills. To achieve these goals of increased sugar, alcohol and cogeneration, sugar industries need special varieties to meet their specific requirement of raw materials.

International Journal of Current Microbiology and Applied Sciences, ISSN: 2319-7706, Volume 7, Number 08 (2018). Journal homepage: http://www.ijcmas.com
Introduction
To achieve the goals of increased sugar, alcohol and cogeneration outlined above, sugar industries need special varieties to meet their specific requirement of raw materials. Hence, breeding programmes must integrate new traits such as high fiber, high biomass and high total sugars in addition to cane yield and juice quality.
Breeding for higher yield and quality traits requires basic information on the extent of genetic variation in a population and its response to selection. Understanding the various genetic parameters that govern a population under improvement is essential for proper planning and direction of a plant breeding program. The success of such a program will depend largely upon the extent of genetic variability available in the base population and the heritability of the characters under improvement.
Therefore, a clear understanding of genetic parameters is of paramount importance in the development of a breeding strategy (Singh et al., 2002). Information on the nature and magnitude of variability present in the genetic material is of prime importance for a breeder to initiate any effective selection programme. Genotypic and phenotypic coefficients of variation, along with heritability and genetic advance, are essential for improving any trait of sugarcane, because this helps in knowing whether the desired objective can be achieved from the material or not (Tyagi and Singh, 1998). Hence, in the present study the nature and extent of genetic variability, heritability and genetic advance for twenty-seven characters were estimated in the second clonal stage.
Materials and Methods
The present investigation was carried out at Agricultural Research Station, Perumallapalle (Acharya N.G. Ranga Agricultural University), situated in the Southern Agroclimatic Zone of Andhra Pradesh, India. The experimental material consisted of 77 genotypes including four checks viz., Co 6907, Co 7219, 2003 V46 and Co 86032. The seventy-seven genotypes were planted in a randomized block design with two replications during April, 2011. Each entry was planted in 2 rows of 5 m length spaced at a distance of 80 cm between rows, with 4 three-budded setts per meter as seed rate. Fertilizers were applied at the recommended dose of 224:112:112 kg ha⁻¹ of N, P₂O₅ and K₂O. The recommended dose of P₂O₅ and K₂O was applied as basal and nitrogen was applied in two equal split doses at 45 and 90 days after planting. Cultural practices like weeding, irrigation, earthing up and propping were followed to maintain good crop growth.
Phenotypic and genotypic coefficients of variation were computed using the formulae given by Burton (1952). The range of variation was categorized according to Sivasubramanian and Madhavamenon (1973). Heritability in the broad sense was estimated as suggested by Lush (1940). Genetic advance as per cent of general mean was computed by using the formula given by Johnson et al. (1955). Data were recorded on the seventy-seven genotypes including four checks for twenty-seven characters viz., tiller number at 120 DAP, shoot population at 180 and 240 DAP, number of green leaves at 90, 120, 240 DAP and at maturity, number of internodes, internode length, stalk length, stalk diameter, stalk volume, NMC per plot at harvest, single cane weight, fibre content, brix per cent, sucrose per cent, CCS per cent, juice purity per cent, pol per cent cane, juice extraction per cent, total sugars per cent, biomass per cane, fibre yield, CCS yield, theoretical yield of alcohol and cane yield.
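The genetic parameters cited above can be sketched from the mean squares of a randomized block design. This is a minimal illustration of the standard formulas attributed to Burton (1952), Lush (1940) and Johnson et al. (1955); the mean squares and grand mean below are hypothetical, and the per-plot convention σ²p = σ²g + σ²e is one common choice:

```python
import math

def genetic_parameters(ms_genotype, ms_error, replications, grand_mean, k=2.06):
    """GCV, PCV, broad-sense heritability and genetic advance (as % of mean)
    from RBD mean squares; k = 2.06 is the selection differential at 5%."""
    var_g = (ms_genotype - ms_error) / replications  # genotypic variance
    var_p = var_g + ms_error                         # phenotypic variance (per plot)
    gcv = 100.0 * math.sqrt(var_g) / grand_mean      # genotypic CV, %
    pcv = 100.0 * math.sqrt(var_p) / grand_mean      # phenotypic CV, %
    h2 = var_g / var_p                               # broad-sense heritability
    ga = k * h2 * math.sqrt(var_p)                   # genetic advance
    return {"GCV": gcv, "PCV": pcv, "h2": h2, "GA%mean": 100.0 * ga / grand_mean}

# Hypothetical mean squares for one trait, 2 replications (illustrative only)
params = genetic_parameters(ms_genotype=180.0, ms_error=20.0,
                            replications=2, grand_mean=85.0)
```

By construction GCV is always at most PCV, and traits whose heritability and genetic advance are both high, as for several characters in Table 1, are the ones where selection is expected to be effective.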
Results and Discussion
Mean, range, GCV, PCV, heritability (broad sense) and genetic advance as percentage of mean for the twenty-seven characters in the seventy-seven genotypes of sugarcane are presented in Table 1. The GCV and PCV values were high for number of leaves at maturity, stalk volume, total sugars, biomass per cane, fibre yield, commercial cane sugar yield, theoretical yield of alcohol and cane yield, indicating that the variability observed in the seventy-seven genotypes was high for these traits.
Moderate variability was observed for number of tillers at 120 DAP, shoot population at 180 and 240 DAP, number of leaves at 90 and 240 DAP, number of internodes per cane, internode length, stalk length, number of millable canes, single cane weight, fibre content, brix per cent, sucrose per cent, CCS per cent and pol per cent cane. The low GCV values for number of green leaves at 120 DAP, stalk diameter, juice purity per cent and juice extraction per cent indicated that variability was low for these traits in the seventy-seven genotypes. Critical analysis of the genetic parameters indicated that shoot population at 240 DAP, stalk length, number of millable canes, fibre content, brix, sucrose, CCS per cent, pol per cent cane, total sugars, biomass per cane, fibre yield, CCS yield, theoretical yield of alcohol and cane yield showed high heritability coupled with high genetic advance as per cent of mean, indicating that these characters are controlled by additive gene effects and that selection for them would be effective. These results are in agreement with the findings of Singh and Singh (1994) for brix per cent; Das et al. (1996) and Ghosh and Singh (1996) for number of millable canes and cane yield; Singh et al. (1996) for commercial cane sugar and cane yield; Ravishankar et al. (2003) for cane yield, commercial cane sugar yield, CCS per cent and juice brix; Berding and Pendrigh (2009) for brix, commercial cane sugar, dry matter and fibre content; Krishna et al. (2011) for sucrose per cent and CCS per cent; and Mancini et al. (2012) for pol per cent cane.
The existence of sufficiently large genetic variability and less influence of environment on these traits facilitates effective phenotypic selection.
Number of green leaves at 90 DAP and at maturity, stalk volume and single cane weight exhibited low to moderate heritability coupled with high genetic advance as per cent of mean, indicating that these traits are governed by additive gene effects; selection may therefore be effective for them, although the low to moderate heritability points to a strong environmental influence.
Juice purity and juice extraction per cent showed high heritability coupled with low genetic advance as per cent of mean which indicated that these traits were governed by non-additive gene action and hence selection for these characters may not be rewarding. These results are in conformity with the findings of Tyagi and Singh (2000), Sabitha and Rao (2008), Charumathi (2011), Ahmed and Obeid (2012) for juice purity per cent.
The traits shoot population at 180 DAP, number of green leaves at 120 and 240 DAP, tiller number at 120 DAP, internode number, internode length and stalk diameter registered low to moderate heritability coupled with low to moderate genetic advance as per cent of mean, indicating that these traits are highly influenced by the environment and that selection for them would be ineffective.
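The interpretive rules applied throughout this discussion (high heritability plus high genetic advance suggests additive effects and effective selection; high heritability with low genetic advance suggests non-additive action; and so on) can be written as a small decision function. The cut-offs below (60% heritability, 20% GA as per cent of mean) are commonly used conventions in such studies, not values stated in this paper.

```python
def selection_guidance(h2_pct, gam_pct, h2_cut=60.0, gam_cut=20.0):
    """Map broad-sense heritability (%) and genetic advance as % of mean
    to the selection interpretation used in the discussion above."""
    high_h2, high_gam = h2_pct >= h2_cut, gam_pct >= gam_cut
    if high_h2 and high_gam:
        return "additive gene effects; direct selection effective"
    if high_h2:
        return "non-additive gene action; selection not rewarding"
    if high_gam:
        return "additive effects but strong environmental influence"
    return "environment-dominated; selection ineffective"
```

For example, cane yield (high heritability, high GA) falls into the first class, while juice purity (high heritability, low GA) falls into the second.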
Impact of Three Dimensional In-Room Imaging (3DCA) in the Facilitation of Percutaneous Coronary Interventions
Introduction: Coronary angiography is a two-dimensional (2D) imaging modality and thus is limited in its ability to represent complex three-dimensional (3D) vascular anatomy. Lesion length, bifurcation angles/lesions, and tortuosity are often inadequately assessed using 2D angiography due to vessel overlap and foreshortening. 3D Rotational Angiography (3DRA) with subsequent reconstruction generates models of the coronary vasculature from which lesion length measurements and Optimal View Maps (OVM) defining the amount of vessel foreshortening for each gantry angle can be derived. This study sought to determine if 3DRA-assisted percutaneous coronary interventions resulted in improved procedural results by minimizing foreshortening and optimizing stent selection. Methods: 26 patients with obstructive coronary artery disease were included. Rotational angiographic acquisitions were performed and a 3D model was generated from two images greater than 30° apart. An optimal view map identifying the least amount of vessel foreshortening and overlap was derived from the 3D model. 3DRA derived and operator predicted optimal working view and stent lengths were compared. Results: 3DRA assistance significantly reduced target vessel foreshortening when compared to operator’s choice of working view for PCI (2.99% ± 2.96 vs. 9.48% ± 7.56, p=0.0001). The operators concluded that 3DRA recommended better optimal view selection for PCI in 14 of 26 (54%) total cases. In 9 (35%) of 26 cases 3DRA assistance facilitated stent positioning. 3DRA based imaging prompted stent length changes in 4/26 patients (15%). Conclusion: The use of 3DRA positively impacts the performance of percutaneous coronary interventions by optimizing working views through reductions in vessel foreshortening and overlap and assisting in stent positioning by improvements in stent and lesion measurements. ©2013 The Authors. 
Published by the JScholar under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/ by/3.0/, which permits unrestricted use, provided the original author and source are credited.
Introduction
Coronary angiography remains the preferred modality for characterization of atherosclerotic coronary artery disease in contemporary cardiology. Because coronary angiography transforms a complex three-dimensional (3D) structure into a flat two-dimensional (2D) silhouette of the coronary lumen, however, the modality is inherently subject to numerous imaging limitations [1]. Most notably, the projection images may contain vessel foreshortening and overlap which may subsequently result in potential misjudging of critical vessel characteristics such as bifurcations, tortuosity, vessel and lesion size, lesion length, lesion eccentricity and angulation [1][2][3]. The traditional strategy used to minimize projection imaging inaccuracies is to acquire multiple views of a vessel segment for a more complete angiographic assessment. This assessment, however, comes at the cost of increased radiation, contrast exposure, and procedural time. Despite the additional information, optimal view selection is contingent on both individual operator skill and experience as well as unique patient anatomy and is often based purely on a trial-and-error technique. To overcome these obstacles, innovative angiographic techniques have been devised, most notably, rotational angiography (RA) with three-dimensional modeling (3DCA).
The need for optimal acquisitions and the recent availability of rapid processing of angiographic data have led to the advent of 3D modeling technology. Chen and colleagues developed 3-D coronary modeling approaches starting in the 1990s, with subsequent commercialization through three vendors [4,5]. By using two angiographic views > 30º apart, a 3D model of the coronary tree can be generated for subsequent display, which allows for viewing from multiple perspectives, generation of an optimal view map, and advanced analysis of vessel segments [6]. Carroll and Chen subsequently developed the practical idea of utilizing 3D graphical data in map format to illustrate the interaction between gantry position and degree of vessel segment foreshortening and overlap [7]. Optimal View Maps (OVM) identify optimal fluoroscopic viewing angles of specific coronary segments based on the degree of foreshortening and overlap [8,9]. The use of optimal view maps to facilitate angiography has been previously shown [5]. In addition, the use of both on-line and off-line 3DCA has also been previously validated, along with its ability to estimate lesion length and predict stent length [10][11][12]. The implementation of in-room 3DCA to facilitate interventions is now possible with improved software and computer processing capabilities [13,14]. Despite these technological advancements, 3DCA has not yet been applied to Percutaneous Coronary Intervention (PCI) procedures to determine its direct impact on minimizing vessel foreshortening and overlap, as well as optimizing lesion length measurement and subsequent stent choice. To examine the potential benefits of 3DCA-assisted PCI, we prospectively evaluated the extent of vessel foreshortening, stent placement, and choice of stent length using 3DCA as compared to operator-projected optimal views and stent selection using Standard Angiography (SA).
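The geometric idea behind an optimal view map is simple: for each candidate gantry angle, compute how much a 3D vessel segment shrinks when projected onto the detector plane, then rank angles by that loss. The sketch below illustrates this under deliberately simplified assumptions: a straight segment, an idealized mapping from gantry angles to a beam direction, and the ±60° RAO/LAO, ±25° cranial/caudal range of this protocol. It is not the vendor's True View algorithm, and it ignores vessel overlap.

```python
import math

def view_direction(rao_lao_deg, cran_caud_deg):
    """Unit beam direction for gantry angles, under an assumed sign
    convention: positive = LAO / cranial, negative = RAO / caudal."""
    a = math.radians(rao_lao_deg)
    b = math.radians(cran_caud_deg)
    return (math.sin(a) * math.cos(b), math.sin(b), math.cos(a) * math.cos(b))

def foreshortening(seg, d):
    """Fraction of a straight 3D segment's length lost when projected
    onto a plane perpendicular to unit view direction d."""
    dot = sum(s * di for s, di in zip(seg, d))
    proj_sq = sum(s * s for s in seg) - dot * dot   # squared projected length
    length = math.sqrt(sum(s * s for s in seg))
    return 1.0 - math.sqrt(max(proj_sq, 0.0)) / length

def optimal_view_map(seg, step=5):
    """Scan the reachable gantry range (±60° RAO/LAO, ±25° cran/caud,
    as in the acquisition protocol) and return (FS, angles) best-first."""
    grid = []
    for a in range(-60, 61, step):
        for b in range(-25, 26, step):
            grid.append((foreshortening(seg, view_direction(a, b)), a, b))
    return sorted(grid)
```

A segment parallel to the beam is fully foreshortened (FS = 1), while one lying in the detector plane projects at full length (FS = 0); the map simply ranks every gantry position between those extremes.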
Methods
Patients undergoing angiography and subsequent PCI for obstructive coronary artery disease at the University of Colorado Hospital were included in this study. Exclusion criteria included renal insufficiency (Cr > 2.0 mg/dL), contrast allergy, or pregnancy. Patients were also excluded if the 3D modeling of their coronary tree was not feasible. The patients were enrolled with informed consent and this prospective study was approved by the Colorado Multiple Institutional Review Board.
Study Protocol
Images were acquired using a flat-panel detector (Allura Xper FD20, Philips Healthcare, Best, The Netherlands) angiographic system. Cine runs were acquired at 30 frames per second in a standardized rotational format and transferred to a dedicated 3D workstation. Rotational angiographic (RA) acquisitions were performed from 60º right anterior oblique (RAO) to 60º left anterior oblique (LAO) with 25º of cranial and caudal angulation, respectively. The C-arm rotates while acquiring images over a 4-second time span with an 8-12 ml injection of contrast medium. The resulting images were then transferred to the workstation automatically; less than 45 seconds is required for the images to transfer to the workstation [11]. The 3DCA process and all vessel evaluations were completed by a trained physician and/or a clinical scientist familiar with the reconstruction system. The interventions were performed by board-certified interventional cardiologists. Owing to conflicts of interest, Drs. Carroll and Chen did not participate in the execution of the protocol or data analysis.
3D modeling of the coronary vessels was performed using a dedicated 3D workstation as previously described [6]. Briefly, two images with full angiographic opacification are manually selected using the graphics interface in the 3D workstation, the vessels to be included in the model are manually identified in both views, and a model is immediately generated. Subsequently, the operator identifies the beginning and end of a lesion on the 3-D model. A color-coded map of the degree of foreshortening at all possible gantry positions is calculated and displayed. An optimal working view (True View®) and a lesion length from the 3-D model are generated (True Length®), while an estimated stent length is recommended by a clinical scientist or non-operator physician performing the evaluation. Image model generation takes less than 5 minutes. Simultaneously, operators blinded to the True View® and True Length® results select their preferred optimal working view and predicted stent length. Verification of both the operator-selected and 3DRA-assisted working views was performed and evaluated by the operators. Study protocol endpoints were 3DCA-derived optimal view % foreshortening and stent length versus operator-predicted views and stent length, respectively (see Appendix).
Statistics
Continuous variables are presented as mean ± SD. Differences between continuous variables were analyzed with the t-test.
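For the per-patient foreshortening comparison, a paired design is the natural fit, since each patient contributes both an operator-selected and a 3DCA-derived view; the paper only states "the t-test", so the paired form below is our assumption. A minimal stdlib sketch:

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two measurement
    series on the same patients (e.g. % foreshortening under the
    operator view vs. the 3DCA-recommended view)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1
```

The resulting statistic is then referred to the t distribution with n − 1 degrees of freedom to obtain the p-value.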
Results
Of the fifty-four patients screened, 26 patients required PCI for significant lesions and were enrolled into the study. Complete 3-D reconstruction could not be accomplished in 3 patients (6%) due to severe native-vessel ostial disease, resulting in study exclusion. A description of the 26 patients who underwent PCI is listed in Table 1. 3DCA-assisted optimal view selection had less vessel foreshortening (FS) than operator-selected optimal views (2.8 ± 2.7% vs. 9.2 ± 7.6% FS, p=0.0001) (Table 2). 3DCA-derived optimal views were judged to have superior lesion visualization by highly trained, experienced, and board-certified interventional cardiologists compared to operator-selected optimal views in 54% of the cases. In the remaining 12 cases (46%), 3DRA improved foreshortening but demonstrated more vessel overlap or recommended an infeasible gantry location (i.e. unobtainable gantry). 3DCA facilitated stent positioning in 9 cases (35%) and impacted stent length in 4 cases (15%), as it resulted in the operator changing the stent length based on a non-foreshortened measurement of the target vessel using 3DCA. Prospective operator and 3DCA stent length predictions were performed in 22 cases and were not found to be significantly different from the final stent length used (p=NS) (Table 3).
Discussion
This prospective evaluation of 3DRA-assisted PCI successfully demonstrates its practical application in PCI procedures to optimize angiographic working views and stent length choice. Although the interventional cardiologists participating in this single-center trial each had >10 years of experience, 3DCA optimized angiographic working views in 54% of cases, thereby illustrating its efficacy even in the hands of experienced operators. Furthermore, these changes in working views translated into assisting stent placement and optimizing stent length in 35% and 15% of cases respectively, suggesting that a significant proportion of routine coronary interventions may not have optimal device dimension evaluations and subsequent placement. It is well known to the interventional community that the operator's stent length choice is often generous in an effort to cover lesions adequately, as well as constrained by the availability of stents in fixed length increments. Therefore, subtle differences in lesion length (3DCA vs. operator) may result in similar stent lengths. Measuring actual lesion length may have been a better marker for differences between the operator length and the 3DCA-assisted length. Overall, in an economic environment where rising healthcare costs need to be addressed, strategies to improve stent selection, stent positioning and equipment utilization, and to decrease the risk of in-stent restenosis or side-branch jailing due to longer stents, are crucial [12].
Table 3: Comparison of operator predicted, 3D assisted and actual stent length used.
The clinical validation of in-room image-processing tools such as 3DCA and optimal view maps is important since FDA approval of these tools does not require the presentation of any data on clinical experience and impact on clinical outcomes. While the technology of 3DRA and optimal view calculations has been well validated by the work of Chen and colleagues, this study is important in demonstrating how clinical care may be impacted [4,5,7]. This study was biased toward minimizing the impact of these tools on clinical decision-making since the study site, cardiologists, and staff have extensive experience in rotational angiography, 3-D modeling and reconstruction, and the impact of foreshortening on the assessment of lesion length and choice of stent size.
3D= three dimensional
This study of 3DCA as an interventional tool bears several limitations. 3DCA reconstructs the major vessels and side branches, but smaller vessels are not accounted for, and those may contribute to significant vessel overlap obscuring important angiographic findings. This limitation can be overcome by completing a whole-tree reconstruction. Gantry positions are constrained by body habitus and table position, and 3DCA-recommended views are not always practical. Another important parameter, vessel diameter, was not evaluated, as this study predated the addition of vessel sizing to the imaging software. Newer studies will include vessel size evaluations. The operators were allowed to use rotational angiography, which may have resulted in better optimal view selection than the practice of standard fixed-view angiography as detailed above. The impact of 3DCA on operators without 3D or rotational experience remains a question worthy of further investigation. Lesion lengths may have been underestimated since they were determined through angiographic projections and not via intravascular ultrasound. Finally, most of the reconstructions were performed by a clinical scientist with no interventional background. It is possible that the reconstructions and gantry suggestions may have been improved by an interventionalist being allowed to review the images in-room and pick optimal gantry choices.
A unique facet of this study was the routine use of RA for the initial imaging of the coronary vasculature, which inherently provides many more images than standard angiography. Indeed, RA has been shown to improve PCI working view selection by providing a higher volume of information per angiogram [15,16]. This stands in contrast to SA, wherein only 6-10 images are acquired in a diagnostic angiographic study. The use of RA therefore allowed the operators in this study the luxury of a sizable anatomy survey, potentially resulting in an improved optimal view choice. It can be postulated that the use of standard fixed views may have resulted in operator-selected optimal views exhibiting greater vessel foreshortening.
Nevertheless, even with operators using RA, 3DCA-predicted working views still had statistically significantly less foreshortening and improved imaging in a majority of cases. A better reflection of the possible impact 3DCA may have in conventional cardiac catheterization labs would be a similar study using fixed angiographic views which more closely resembles the current standard. The software package used for the study supports 3DCA only with rotational acquisitions but the use of fixed angiographic views in performing 3DCA is being explored and currently incorporated in other 3D software packages [12].
Figure: A) 3DRA-assisted projection of the post-PCI mid-LAD illustrating well-separated diagonal branches (circles) with a minimally foreshortened mid-LAD. B) Prior operator-selected view of the mid-LAD with an obscured first diagonal branch and an overlapped second diagonal (circle). The impact of foreshortening on length estimation is noted, with a 17% difference in the length of the segment between the two diagonal branches in panel A versus panel B.
3DRA= three dimensional reconstruction, CRAN= cranial, LAD=left anterior descending artery, PCI= percutaneous coronary intervention
Initially the use of 3DCA was limited by slow computer processing times and offline analysis. As computer graphics and processing times have improved, this is no longer a limitation. Use of 3DRA enables the generation of optimal view maps as a guide to working views for each vessel and has resulted in better view selection compared to operator-selected views [8,9]. 3DCA reference vessel lengths and diameters have been shown to be similar to Quantitative Coronary Analysis (QCA), suggesting that procedural planning using 3DCA may be possible, even in complex lesions such as chronic total occlusions [10,13]. Post-stenting 3DRA analysis showed that improvements in stent selection and resource utilization may be achieved. Furthermore, the use of 3DCA in the analysis of stent conformational changes following an intervention has been shown and validated [17,18]. This study is the first prospective evaluation of 3DCA applied to a general population outside of individual cases [14]. The feasibility of applying 3DCA to a general population is suggested in this cohort, and its role in the cardiac catheterization laboratory may expand as efficiency in radiation exposure, contrast use and resource utilization is increasingly emphasized.
Genetic and Environmental Basis of the Relationship Between Dissociative Experiences and Cloninger’s Temperament and Character Dimensions – Pilot Study
Abstract: Dissociation is commonly regarded as a disruption in the normally integrated functions of memory, knowledge, affect, sensation or behavior. The present study utilized behavioral genetics' methodology to investigate the genetic and environmental basis of the relationship between dissociation and Cloninger's temperament and character traits. A sample of 83 monozygotic and 65 dizygotic twins were administered self-report measures which assessed dissociative experiences along with personality dimensions. Significant correlations and high loads of common genetic variance between dissociative experiences and the personality traits of novelty seeking, self-directedness, cooperativeness and self-transcendence were identified. Heritability of dissociative experiences was estimated at 62%. The study shows that there exists a considerable amount of genetic variance overlap between dissociation and personality dimensions. It also supports the hypothesis that the propensity to dissociate is highly heritable.
Dissociation has been defined as a disruption of normally integrated functions of perceptual, emotional, and memory systems (American Psychiatric Association [APA], 2000). It is considered to play an important defensive role in situations when an individual experiences a traumatic event that is too difficult to cope with emotionally. Such mechanism would cause detachment of emotional and cognitive functions involved in processing those psychologically overwhelming experiences. A deeper understanding of its genetic and environmental underpinnings would help to explain how facing traumatic situations can lead to the Post-Traumatic Stress Disorder (PTSD) and perhaps to develop effective therapies.
The construct of dissociation itself is still at the centre of an unresolved discussion as to whether its nature is dimensional or typological. Some empirical research points to a typological model which divides the universe of dissociative experiences into relatively mild, non-pathological symptoms observed across the whole population (e.g. absorption or absentmindedness) and more severe states indicative of pathology, such as depersonalization, derealization, amnesia and multiple personality (Putnam, 1997; Waller, Putnam, & Carlson, 1996). From this point of view it would not seem superfluous to determine the degree to which those two types of dissociation are influenced by common and by separate factors. In contrast, other researchers suggest that it would be more adequate to apply a dimensional model in which dissociative experiences are assumed to be continuously distributed throughout the population. Results supportive of this hypothesis were obtained by research on the Dissociative Experiences Scale (DES; Bernstein & Putnam, 1986).
Behavior genetics is a field of study that combines statistics with genetic and behavioral sciences to determine the extent to which genetic and environmental factors contribute to the overall phenotypic variance of psychological dimensions (e.g. personality features) in different populations. A phenotype is defined as the final result of genetic and environmental influences that is apparent in a certain trait or behavior. The methodology of behavior genetics rests on two basic assumptions. The first states that all of the differences among individuals who belong to a certain population are determined, in variable proportions, by genetic and non-genetic factors. The genetic factor can be divided into additive and non-additive. The additive factor is influenced by assortative (selective) mating of parents and random parent-child genetic transmission, which to some extent is shaped by natural selection. The non-additive factor pertains to effects that are due to single-gene dominance deviations as well as epistatic interactions between different genes. The environmental factor, in turn, may be of shared or unique (non-shared) nature. Shared environment increases resemblance within the same family, while, conversely, unique environment decreases it, mostly due to individual experiences gained exclusively by each family member throughout life, but also because of genotype-environment interactions and correlations, as well as errors of measurement. All the main sources of phenotypic variance as outlined in the current model (Plomin, DeFries, McClearn, & McGuffin, 2008) are illustrated in Figure 1.
Figure 1. A scheme of the sources of phenotypic variance
Note. GE means correlation between genotype and environment, while GxE stands for their interaction.
The concept of heritability is central to the field of behavior genetics. It refers to a contribution of the genetic factor to the total phenotypic variance in a certain population, expressed as a percentage of it. More precisely, heritability is "a proportion of phenotypic variance that can be attributed to the genetic differences between individuals" (Plomin et al., 2008). It is then easy to see that estimation of heritability of a feature is equivalent to calculating the extent to which this feature is influenced environmentally. Moreover, multivariate behavior-genetic models may be utilized to examine the structure of genetic and environmental correlations between several dimensions (Neale & Cardon, 1992).
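As a concrete illustration of how twin data yield heritability, the classical Falconer decomposition uses only the MZ and DZ intraclass correlations. The study itself relied on model fitting in the spirit of Neale and Cardon (1992), so the snippet below is a simplified back-of-the-envelope version, not the analysis actually used.

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE decomposition from twin intraclass correlations:
    A = additive genetic variance (heritability), C = shared environment,
    E = unique environment plus measurement error, as proportions of
    total phenotypic variance."""
    a2 = 2.0 * (r_mz - r_dz)  # MZ twins share ~100% of genes, DZ ~50%
    c2 = 2.0 * r_dz - r_mz    # whatever MZ resemblance genes don't explain
    e2 = 1.0 - r_mz           # whatever makes MZ twins differ at all
    return {"A": a2, "C": c2, "E": e2}
```

For instance, hypothetical MZ and DZ correlations of .62 and .31 would reproduce the 62% heritability reported here for dissociative experiences, with no shared-environment contribution; the three proportions always sum to one.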
Both genetic and environmental factors have been suggested as underpinnings for the development of dissociative experiences. The results of existing research in the behavioral genetics of dissociation are inconsistent. Some studies indicate that it is highly heritable (Becker-Blease et al., 2004; Jang, Paris, Zweig-Frank, & Livesley, 1998; Pieper, Out, Bakermans-Kranenburg, & van Ijzendoorn, 2011), while other work excludes any genetic influence (Waller & Ross, 1997).
The fact that those findings remain contradictory can be seen as supporting the well-established view that the most important role in the development of dissociative tendencies is played by environmental factors. Studies that have hitherto explored the environmental-only etiology of dissociation point to abusive childhood experiences, especially sexual, physical and emotional maltreatment, but also caregiver neglect and betrayal. This has been demonstrated in adults (Coons, Bowman, & Milstein, 1988; Putnam, Gurof, Silberman, Barbar, & Post, 1986), adolescents (Bowman, Blix, & Coons, 1985; Dell & Eisenhower, 1990; Hornstein & Putnam, 1992) and children (Coons, 1994; Fagan & McMahon, 1984). However, the most recent study showed that the highest levels of dissociation are observed in foster children who had experienced all types of abuse or only physical abuse (Hulette, Freyd, & Fisher, 2011). Those proved to be significantly higher than the magnitude of dissociative experiences in the groups of children who had been sexually abused or neglected by their caregivers. On the other hand, Braun and Sachs suggested that there may exist a "natural, inborn capacity to dissociate which could determine potential magnitude of dissociative symptoms triggered by specific environmental circumstances" (Braun & Sachs, 1985). Not only does this view receive support from research conducted in the behavioral-genetic approach, which is indicative of dissociation's heritability, but it also fits well with the results obtained from molecular analyses. The latter point first of all to the serotonin transporter gene promoter polymorphism (5-HTTLPR), whose role in the etiology of dissociation was discovered by Pieper and her team (Pieper et al., 2011). Moreover, it has been shown that in women with a history of sexual or physical abuse this polymorphism is related to novelty seeking (Steiger, Richardson, & Joober, 2007).
Another suggested direction of research concerns the relationship between PTSD symptoms and the dopamine receptor type 4 (DRD4) polymorphism (Dragan & Oniszczenko, 2009). What is interesting and promising about this location within the human genome is that a number of links between DRD4 polymorphisms and novelty seeking have been discovered (Benjamin et al., 1996; Ebstein et al., 1996; Noble et al., 1998). However, other studies (e.g., Strobel, Spinath, Angleitner, Riemann, & Lesch, 2003) and, more importantly, a meta-analysis of them (Munafo, Yalcin, Willis-Owen, & Flint, 2008) have failed to demonstrate any association between novelty seeking and the variable number tandem repeat (VNTR) DRD4 polymorphism that has been linked to PTSD symptoms by Dragan and Oniszczenko (2009). Still, in the same work Munafo and colleagues reported a significant relationship between this temperament trait and another DRD4 polymorphism (-521C/T).
Dissociation has been examined with regard to its simple correlations with many different personality traits: the Big Five dimensions (Ruiz, Pincus, & Ray, 1999), Revised Dimensions of Temperament Survey (DOTS-R) and the Rusalov Structure of Temperament Questionnaire (STQ) temperamental features (Beere & Pica, 1995), Cloninger's personality dimensions measured by the Revised Temperament and Character Inventory (TCI-R; Evren, Sar, & Dalbudak, 2008), dissociative mental disorders (including PTSD), hypnotic susceptibility (van Ijzendoorn & Schuengel, 1996) and schizotypy (Pope & Kwapil, 2000;Merckelbach, Rassin, & Muris, 2000). All of these works show that more extensive and thorough research on relationships between the dissociative experiences and personality should be undertaken. Therefore, in the present study genetic and environmental factors that influence both personality traits and dissociation are chosen to be the main object of examination. Continuation of the research on dissociation and its genetic and environmental basis also seems to be more than necessary, because this area of psychology of personality and individual differences still remains underexplored.
In the present work we employed the original Cloninger's TCI inventory as a measure of temperament and character traits because it is directly related to the psychobiological model of personality, in which certain neurotransmitter systems are assumed to underpin some of the temperamental features (Cloninger, Svrakic, & Przybeck, 1993). Not only does this inventory allow testable hypotheses regarding the genetic and environmental basis of different areas of temperament and character, but, in the case of positive findings among correlations with other dimensions, it also renders the search for genetic candidates for molecular analyses far more tractable. Notably, it has been shown by Mardaga and Hansenne (2007) that the temperamental dimensions of the TCI share substantial correlations with Gray's behavioral inhibition/activation systems (BIS/BAS; Gray, 1970).
Cloninger's psychobiological model divides the structure of personality into a more genetically determined temperament, whose features stem from the functioning of procedural memory, and a more environmentally influenced character, which is assumed to be closely related to the declarative memory system. The temperament traits in this model, which are "involved in presemantic perceptual processing and encoding of concrete visuospatial structural information and affective valence" (Gillespie, Cloninger, Heath, & Martin, 2003), consist of: novelty seeking (NS), harm avoidance (HA), reward dependence (RD) and persistence (PS). According to the authors, the first three are supposed to have separate biological underpinnings: the dopamine, serotonin and noradrenaline neurotransmission systems, respectively. No theoretical assumption has been made regarding the last feature. The character traits are dependent on social and cognitive development and include: self-directedness (SD), cooperativeness (CO) and self-transcendence (ST). The temperament traits are to be considered the primary personality features, on the basis of which the secondary (yet highly independent with regard to environmental sources of variance) character traits develop. This hierarchical outlook on personality traits is supported by multivariate analyses of the TCI's convergent and discriminant validity (Ando et al., 2002). On the other hand, some evidence suggests not only environmental but also partial genetic independence of character from temperament (Gillespie et al., 2003).
In the present study we evaluated the influence of genetic and environmental factors on the relationship between Cloninger's temperament and character traits and dissociative experiences. Although we chose to explore the data rather than test specific hypotheses, there are certain expectations related to this study which arise from the previous research that points to manifold connections between dissociation and the TCI dimensions. Thus, NS and HA are expected to be genetically correlated with the propensity to dissociate, as the DRD4 polymorphism appears to be related to PTSD symptoms and the 5-HTTLPR polymorphism has been directly linked with dissociation. Another hypothesis would involve all of the character traits, since it has been shown that people with low self-directedness and cooperativeness along with high self-transcendence (a structure typical of persons with mental disorders) tend to be more schizotypic, hypnotizable and also more prone to absorption (Laidlaw et al., 2005). Moreover, Szekely et al. (2010) discovered a significant association between the trait of hypnotizability and the Val158Met single nucleotide polymorphism (SNP, rs4680) of catechol-O-methyltransferase (COMT), an enzyme that degrades dopamine, which opens up the possibility that the dopaminergic system may play a crucial role in the development of dissociation not only through a high level of NS, but also through the functional relation between the dopamine level in the prefrontal cortex (PFC) and the efficiency of cognitive functions.
Method
The study utilizes one of the most frequently applied methods in behavior genetics: a twin study. Its aim is to detect genetic underpinnings of certain features or behaviors by comparing their correlations in separate groups of monozygotic and dizygotic twins reared together. If the correlation coefficient calculated for the former is distinctly higher than the one calculated for the latter, a non-zero contribution of genetic factors to the variance of the measured variables is to be suspected.
The above-described method is based on a few assumptions (Plomin et al., 2008):
• MZ twins are genetically identical, whereas DZ twins share 50% of their genotypes;
• any observed differences in the behavior characteristics of MZ twins stem from environmental factors only, whereas in DZ twins both genetic and environmental factors account for all such differences;
• similarities determined by environment are the same among both MZ and DZ twins;
• parents' mating occurs at random with regard to the analyzed behavioral traits.
These assumptions have often been criticized as putatively counterfactual, especially the one stating environmental equality. On the other hand, it is a well-established view that the proportions or distributions posited through those assumptions are roughly equal to the actual ones, hence overall the method ought to be useful (Plomin et al., 2008).
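The logic of comparing MZ and DZ correlations can be illustrated with the classical Falconer point estimates. This is only a rough sketch under the assumptions listed above; the study itself fits full structural models rather than these closed-form formulas, and the function name is ours:

```python
import numpy as np

def falconer_estimates(mz_pairs, dz_pairs):
    """Rough ACE decomposition from twin-pair correlations.

    mz_pairs, dz_pairs: arrays of shape (n_pairs, 2) holding a trait
    score for twin 1 and twin 2 of each pair.
    """
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    a2 = 2 * (r_mz - r_dz)   # additive genetic share (A)
    c2 = 2 * r_dz - r_mz     # shared-environment share (C)
    e2 = 1 - r_mz            # unique environment + error (E)
    return a2, c2, e2
```

For instance, if MZ twins correlate at 1.0 and DZ twins at 0.5, the sketch attributes all of the variance to additive genetics.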
Participants
The eligible twins, approximately 1000 in number, were chosen on the basis of the sampling frame, a Polish twin database that has been accumulated by the Interdisciplinary Centre for Behavioural Genetics Research (ICBGR). The full sets of questionnaires were sent to all the eligible twins via mail. 152 pairs of those twins (each of the same gender) filled them in and sent them back to the indicated address. Four pairs were excluded from the analysis due to extremely high rates of item nonresponse. Of the remaining 148 pairs, 105 were female and 43 were male, aged from 13 to 66 (M = 24.38, SD = 8.24). Of the 141 pairs who provided their birth year, 103 (73%) belonged to the youngest age group (13-25 years old), 36 (26%) to the middle-aged group, and 2 pairs (1%) were the oldest twins, aged 56-66. Importantly, 7 twin pairs (5%) were young adolescents (13-15 years old) while 16 pairs (11%) were 16-18 years old.
Zygosity was determined by means of the Twins Physical Resemblance Questionnaire (TPRQ; Oniszczenko & Rogucka, 1996). This measure contains several questions about twins' morphological characteristics and physical resemblance, as well as questions about the degree to which they had been confused with each other by parents, siblings and friends. Each twin is instructed to describe him- or herself in comparison to the brother or sister, but without any communication between them. Six of these items are used to construct a discriminative function which separates monozygotic from dizygotic twins. Its validity, expressed as the percentage of correct assignments, proved to be as high as 94%. In the current study, out of the 148 twin pairs included in the analysis, 83 were diagnosed as monozygotic and 65 as dizygotic.
Prior to the investigation every participant had been informed about the nature of the study and gave his or her informed consent in writing. The research project was accepted by the Ethics Commission at the Faculty of Psychology, University of Warsaw.
Measures
Dissociative experiences were evaluated with the second version of the DES (DES-II; Carlson & Putnam, 1993), a self-report scale which comprises 28 items, each of them scored from 0 to 100 in steps of 10. Each score stands for the degree to which a participant experienced manifold dissociative symptoms in daily life: amnesia, depersonalization, derealization and absorption. It has been suggested that one of the items is inappropriate for same-sex twin subjects, because it pertains to situations when people called them by another name or insisted that they had met them before (Pieper et al., 2011). However, a possibility cannot be ruled out that the higher frequency of such incidents among twins might interact in some way with their propensity to dissociate (i.e., bolster or reduce it). Moreover, scores on this particular item, albeit higher than average (contrary to what would have been expected on the premise of the descriptive statistics from the general population; Ross, 1996), were not distinctly higher than their counterparts for half of the remaining items. Therefore it was not excluded from further proceedings. In the light of the psychometric research accumulated so far (Ross, 1996; van Ijzendoorn & Schuengel, 1996), the DES-II proves to be a valid and highly reliable instrument, with Cronbach's alpha equal to .93 and high test-retest stability (r = .84). Unfortunately, there are no available reliability data regarding its Polish version, thus a Cronbach's alpha coefficient was derived from the present twin sample to serve an ad hoc diagnostic purpose. It was equal to .94, so it can be said that in our sample the Polish version of the DES-II has excellent internal consistency, very close to that of the original version. However, we admit that there is a lack of data that could support the validity of the Polish DES, as the design of the present study did not empower us to provide any compelling evidence thereof.
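The DES-II total score is the mean of the 28 item scores; combined with the mean-based imputation described in the statistical analyses below, scoring can be sketched as follows (the helper name is ours, not part of the instrument, and the sketch assumes at least one item was answered):

```python
DES_ITEMS = 28  # DES-II has 28 items, each scored 0-100 in steps of 10

def des_score(responses):
    """Total DES-II score = mean of the 28 item scores.

    `responses` is a length-28 list in which None marks a skipped item;
    missing items are replaced by the mean of the answered ones, as in
    the imputation used in this study.
    """
    answered = [r for r in responses if r is not None]
    fill = sum(answered) / len(answered)
    full = [fill if r is None else r for r in responses]
    return sum(full) / DES_ITEMS
```

With this convention, a skipped item leaves the total score unchanged relative to the answered items' mean.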
To measure temperament and character traits the Polish version of TCI was used (Hornowska, 2003). This tool had been constructed to operationalize the psychobiological model of personality (Cloninger et al., 1993). It consists of seven independent scales, each of them measuring one of the hypothesized temperament and character dimensions. It is a 240-item, self-report, forced-choice measure.
Statistical analyses
Although, due to unreturned questionnaires and item nonresponse, some data on dissociative experiences and the TCI dimensions were incomplete, the total percentage of missing data (after exclusion of 4 subjects who merely started filling in the sheets) was equal to only 0.34%. Thus, only the most basic means of imputation were used. Missing values in TCI items were changed to half-points, while in the DES-II they were replaced by the mean of the remaining filled items. Afterwards, we checked the multivariate distributions of all included dimensions in order to identify those whose kurtosis, measured by the z statistic, exceeded the threshold of 5, indicating an early departure from normality (see Byrne, 2010). Only the distribution of the DES-II scores was not even roughly normal. Therefore the scores were log-transformed, which effectively solved the problem of non-normality. We then performed multivariate behavioral genetic analyses to determine the degree to which additive genetics (A), shared environment (C) and unique environment plus error of measurement (E) contribute to the variance and covariance among the included dimensions, as well as to examine the genetic and environmental correlations between dissociation and the TCI traits. Structural equation modeling and maximum likelihood estimation were used to fit ACE, AE and CE Cholesky models with different numbers of independent factors (2 to 8), as described by Neale and Cardon (1992). We compared the models in terms of the Akaike Information Criterion (AIC) and the root mean square error of approximation (RMSEA) in order to choose the one with the closest fit-to-data. On the basis of the best model, genetic and environmental correlations were estimated along with their 95% confidence intervals. All analyses were performed in SAS® University Edition with an open-source add-on called SASPairs (http://psych.colorado.edu/~carey/SASPairs/).
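The AIC-based model comparison can be sketched in a few lines, using AIC = 2k − 2·lnL. This is a toy illustration with made-up likelihood and parameter values; the actual estimation was performed in SAS with SASPairs, and all names below are ours:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower values indicate better fit
    after penalizing model complexity."""
    return 2 * n_params - 2 * log_likelihood

def best_model(models):
    """Pick the label of the model with the smallest AIC.

    `models` maps a label (e.g. 'ACE-7', 'AE-7', 'CE-7') to a
    (log_likelihood, n_params) tuple.
    """
    return min(models, key=lambda m: aic(*models[m]))
```

An AE model with fewer free parameters can thus win over an ACE model even when its raw likelihood is slightly lower.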
Results
Descriptive statistics for the DES-II and TCI scores are presented in Table 1. The mean score of dissociative experiences was comparable to other nonclinical samples, in which this statistic usually varies from 10 to 20 (Ross, 1996). The distribution of DES scores was also typically left-skewed, with only 15.2% of participants scoring 30 or more. TCI scores did not exceed the typical values demonstrated in the Polish general population (Hornowska, 2003). Heritability of dissociation was estimated at 64%, while most temperament traits proved to be consistently less heritable (35-50%) than all character traits (62-65%). Of note is that, based on the current sample, the temperament trait of harm avoidance was not heritable at all.
The multivariate pathway analyses included 21 distinct models: 7 models nested within each other in an incremental manner, as per the variable number of Cholesky factors used (2 to 8), for each of the 3 specification families (ACE, AE and CE). From each specification family the best-fitting model was chosen, and the 3 resulting candidate models were then compared in terms of the AIC and RMSEA statistics. Those 3 models are summarized in Table 2.
The AE model with 7 independent Cholesky factors yielded the best fit-to-data. Therefore, for the purpose of calculating genetic and environmental correlations between variables, estimates from this model were used. All correlation coefficients and percentages of common phenotypic, genetic and environmental variance between dissociation and the TCI dimensions, along with their 95% confidence intervals, are presented in Table 3. The confidence intervals are based on the standard errors of the unstandardized path coefficients, calculated using a well-established formula for standard error propagation in sums and products of uncorrelated statistics in small samples (Baron & Kenny, 1986; MacKinnon & Dwyer, 1993). Out of 21 correlation coefficients only 8 remained non-significant. With regard to the phenotypic correlations, dissociation was positively related to NS and ST, while negative r values were observed for SD and CO. Simple correlations of DES-II scores with SD and ST also proved to be substantially stronger than with NS and CO. The pattern of the genetic correlations was highly similar to that of the phenotypic correlations. In this case the coefficients for NS, SD and CO were the highest and most significant. Five of the environmental correlations reached the .01 level of statistical significance.
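The genetic correlations implied by a Cholesky model follow mechanically from the additive-genetic path matrix: the implied genetic covariance is A·Aᵀ, and standardising it yields the correlations. The following is a minimal sketch with illustrative numbers, not the SASPairs estimation itself or the study's estimates:

```python
import numpy as np

def genetic_correlations(A_paths):
    """Genetic correlation matrix implied by a Cholesky decomposition.

    `A_paths` is the lower-triangular matrix of additive-genetic path
    coefficients (rows = variables, columns = Cholesky factors).
    """
    cov = A_paths @ A_paths.T          # implied genetic covariance
    sd = np.sqrt(np.diag(cov))         # genetic standard deviations
    return cov / np.outer(sd, sd)      # standardise to correlations
```

For two standardised variables with paths (1, 0) and (0.6, 0.8), the implied genetic correlation is 0.6.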
The shares of common genetic variance between the TCI dimensions and dissociative experiences were substantially higher than those observed for the common environmental variance. It is worth noting that, as per the 95% confidence intervals, the significant genetic communalities between the DES-II score and the traits of NS, SD and CO are highly unlikely to be lower than 6-13% and, conversely, may be as high as 41-55%.
The sum of the squared genetic correlations of dissociation with Cloninger's dimensions, plus its own genetic portion of variance, equaled 62%. This is the estimate of the heritability of dissociative experiences provided by the 7-factor AE Cholesky model.
Discussion
In the present study we utilized a behavioral genetic twin design to examine genetic and environmental influences on the relationship between Cloninger's personality dimensions and dissociative experiences. The results indicate that the pattern of variances and covariances among the included variables is best explained by 7 independent additive genetic factors plus unique environment and error of measurement. The TCI dimensions found to be least correlated with dissociative experiences were HA and RD, whereas NS and PS showed high environmental communalities (for NS, also genetic). This pattern may suggest the possibility that Gray's behavior activation system (BAS) is part of the neurobiological underpinnings of dissociation, as opposed to the behavior inhibition system (BIS).
The considerable phenotypic overlap between dissociation and the traits of novelty seeking, self-directedness, cooperativeness and self-transcendence is accounted for mostly by genetic factors. However, a significant amount of environmental influence is present in the NS-dissociation relationship, pointing to the possibility that environmental factors common to both dimensions exist. This study also replicates previous findings which identified the AE model as the one with the best fit-to-data, and produced an estimate of dissociation's heritability that substantially exceeds half of the total phenotypic variance in the general population.
Taking the above into consideration, there seem to emerge two alternative yet not contradictory hypothetical mechanisms which to some extent could be responsible for bolstering dissociative capacity. The first of them concentrates on the temperamental determinants of behavior, mainly on the possible connections between dissociation, PTSD and the feature of novelty seeking. Not only did many studies lead to the discovery of significant dissociation-NS (Evren et al., 2008) and PTSD-NS phenotypic correlations (Richman & Frueh, 1997), but they have also provided some support for the present findings regarding genetic correlations by linking several dopamine-related polymorphisms to novelty seeking (Benjamin et al., 1996; Ebstein et al., 1996; Munafo et al., 2008; Noble et al., 1998) and PTSD (Dragan & Oniszczenko, 2009). Although not all of the results are consistent with each other, they seem to indicate that the neurobiological underpinnings of dissociation are to be looked for in the dopamine system. Therefore, the significant phenotypic and genetic correlations of novelty seeking with dissociative experiences call for further analyses which would employ molecular techniques in order to find associations between those experiences and the DRD4 polymorphisms. However, if the dissociation-NS relationship is somehow mediated by dopamine, any molecular research should include possible environmental moderators, since the present study indicates that these two dimensions correlate environmentally with each other more strongly than is observed for any other of Cloninger's personality traits.
The second hypothetical mechanism would be connected with the cognitive functioning of an individual. It is supposedly related to the trait of hypnotic susceptibility, which is thought to share many clinical and experimental features with dissociation (Carlson & Putnam, 1989). Links have been demonstrated between homovanillic acid (HVA), a cerebrospinal fluid dopamine metabolite, and both hypnotizability (Spiegel & King, 1992) and dissociation (Demitrack et al., 1993). The finding of Spiegel and his colleague led them to a conclusion that neurotransmission activity within the dopaminergic system, particularly involving functional activation of frontal lobe pathways, may underpin the generation of hypnotic phenomena. This view is strongly supported by the recent findings of Szekely et al. (2010), who successfully associated the trait of hypnotic susceptibility with the gene coding for COMT, the enzyme responsible for degradation of dopamine. Several studies demonstrated that COMT is an important factor in modulating the dopamine level in the PFC (e.g., Malhotra et al., 2002), but to date there are no data on the COMT-dissociation relationship. However, more links seem to exist between dissociative experiences and hypnotizability than just genetic and biochemical associations. Both phenomena are supposed to be closely related to the executive functions. Dopamine levels in the PFC moderate cognitive performance in challenging hypnotic suggestion, while the same factor may influence dissociative capacity. Such an analogy would be compatible with some theoretical assumptions. Hilgard (1986) proposed the neodissociation model of hypnotic experience, according to which hypnosis might interfere with executive functions and thus influence the activity of many hierarchically structured cognitive subsystems. On the other hand, Kennedy et al. (2004) suggested that dissociation may function as an important mechanism in the development of PTSD, since it appears to be responsible, through executive functions, for inhibition of normal association processes that pertain to extremely threatening memories. Moreover, dissociation has been extensively researched in relation to borderline personality disorder (BPD) by Korzekwa et al. (2009), who presented a view that dissociation serves as a suppressing factor for excessive emotional arousal which stems from the amygdala.
The apparent relationship between dissociation and hypnotic susceptibility seems to support the pattern of correlations obtained by our team for Cloninger's character features. Laidlaw et al. (2005) demonstrated that the highest levels of both hypnotizability and absorption (a mild dissociative symptom) are observed in those individuals who score low on the SD and CO scales and high on the ST scale.
In the present study we showed that SD and CO correlate negatively and ST correlates positively with the propensity to dissociate. Interestingly, a very strong association (with a 5.43% contribution) between ST and the DRD4 polymorphism has also been discovered (Comings, Saucier, & MacMurray, 2002). Also from a theoretical viewpoint the ST scale, which measures one's ability to move outside oneself, fits well into the concept of dissociation. Moreover, while NS is not as strongly phenotypically correlated with dissociative experiences as SD or ST, its genetic correlation remains the second-largest, being only slightly lower than that of the ST scale. Since both of these features have been associated with the same dopamine-related polymorphism, it would be interesting to inquire whether a similar link can be established with dissociative capacity.
There seem to exist a number of different traces to follow in further research. Firstly, it would not seem superfluous to seek a big-sample replication of the most apparent correlations between dissociation and Cloninger's character traits. Additional goals of such a continuation study could be to explore the genetic and environmental basis of these associations using molecular analyses as well as measures of relevant psychological features (such as hypnotic susceptibility and emotion regulation) and/or neurobiological variables (such as executive functions of PFC and reactivity of the amygdala). Secondly, Steiger et al. (2007) associated the 5-HTTLPR polymorphism with novelty seeking, while Pieper and colleagues (2011) did the same with regard to dissociative experiences. These findings indicate that the serotoninergic system may be involved in development of dissociation. However, our study does not support the hypothesis of its relationship with harm avoidance as a trait linked to serotonin neurotransmission. It would be interesting to repeat analysis of genetic and environmental correlations between dissociation and temperamental traits using BIS/BAS scale in search of a more parsimonious model compared to 7-factor TCI-based model. Thirdly and tentatively, some neurobiological evidence suggests that noradrenaline may be related to dissociation (Simeon et al., 2007) and possibly underlie the dissociative mechanism of dampening emotional arousal that would normally be induced with traumatic memories (Korzekwa et al., 2009). Then again, this particular hypothesis remains completely unsupported by our findings, because according to them there does not seem to be any genetic overlap between dissociation and reward dependence, a feature that is supposedly constituted mostly by the noradrenergic system.
The present work has several limitations. First and foremost, the studied sample is small, which makes the statistical power for the obtained effects too weak and the confidence intervals for the estimates too wide to provide conclusive evidence. Moreover, this sample may not be sufficiently representative of the general Polish population, mostly due to a likely non-random recruitment of participants. This might raise concerns about the external validity of the research and is incidentally reflected by the substantial overrepresentation of female twins, as well as by 16% of the twins being adolescents at two different stages of personality development, which might render TCI results incongruent (Januszewski, 2009). Then again, the female prevalence may play no significant role in biasing the results obtained from our sample, because similar correlation patterns were observed between male and female dissociative experiences and Cloninger's character traits (Grabe, Spitzer, & Freyberger). However, it cannot go unnoticed that the ratio of MZ to DZ twins in the analyzed sample is inverted relative to the general population, where DZ twins are more prevalent. This sampling artifact might itself indicate that willingness to participate in a study is heritable, but more importantly it may lead to an overall biased model estimation. All of the above should be taken into consideration and lead to a cautious approach to our findings. However, we believe that their surprisingly high coherence with many previous studies in different fields does warrant future replications on larger and more representative samples, as well as further research in the indicated directions by the means of molecular genetics as well as other psychological and neurobiological approaches.
On Sets Defining Few Ordinary Circles
An ordinary circle of a set P of n points in the plane is defined as a circle that contains exactly three points of P. We show that if P is not contained in a line or a circle, then P spans at least n^2/4 − O(n) ordinary circles. Moreover, we determine the exact minimum number of ordinary circles for all sufficiently large n and describe all point sets that come close to this minimum. We also consider the circle variant of the orchard problem. We prove that P spans at most n^3/24 − O(n^2) circles passing through exactly four points of P. Here we determine the exact maximum and the extremal configurations for all sufficiently large n. These results are based on the following structure theorem. If n is sufficiently large depending on K, and P is a set of n points spanning at most Kn^2 ordinary circles, then all but O(K) points of P lie on an algebraic curve of degree at most four. Our proofs rely on a recent result of Green and Tao on ordinary lines, combined with circular inversion and some classical results regarding algebraic curves.
Background
The classical Sylvester-Gallai theorem states that any finite non-collinear point set in R^2 spans at least one ordinary line (a line containing exactly two of the points). A more sophisticated statement is the so-called Dirac-Motzkin conjecture, according to which every non-collinear set of n > 13 points in R^2 determines at least n/2 ordinary lines. This conjecture was proved by Green and Tao [13] for all sufficiently large n. Their proof was based on a structure theorem, which roughly states that any point set with a linear number of ordinary lines must lie mostly on a cubic curve (see Theorem 5.1 for a precise statement).
It is natural to ask the corresponding question for ordinary circles (circles that contain exactly three of the given points); see for instance [8, Sect. 7.2] or [17, Chap. 6]. Elliott [12] introduced this question in 1967, and proved that any n points, not all on a line or a circle, determine at least 2n^2/63 − O(n) ordinary circles. (Throughout the paper, by O(f(n)) we mean a function g(n) such that 0 ≤ g(n) ≤ C f(n) for some constant C > 0 and all sufficiently large n. Thus, −O(n) is a function g(n) satisfying −Cn ≤ g(n) ≤ 0 for sufficiently large n.) He suggested, cautiously, that the optimal bound is n^2/6 − O(n). Elliott's result was improved by Bálintová and Bálint [1, Rem., p. 288] to 11n^2/247 − O(n), and Zhang [26] obtained n^2/18 − O(n). Zhang also gave constructions of point sets on two concentric circles with n^2/4 − O(n) ordinary circles.
We will use the results of Green and Tao to prove that n^2/4 − O(n) is asymptotically the right answer, thus disproving the bound suggested by Elliott [12]. Nassajian Mojarrad and de Zeeuw proved this bound in an earlier preprint [19], which is subsumed by this paper, and will not be published independently. We will find the exact minimum number of ordinary circles, for sufficiently large n, and we will determine which configurations attain or come close to that minimum. We make no attempt to specify the threshold implicit in the phrase 'for sufficiently large n'; any improvement would depend on an improvement of the threshold in the result of Green and Tao [13]. For small n, the bound n(n−1)/18 due to Zhang [26] remains the best known lower bound on the number of ordinary circles. Green and Tao [13] also solved (for large n) the even older orchard problem, which asks for the exact maximum number of lines passing through exactly three points of a set of n points in the plane. We refer to [13] for the history of this problem. The upper bound n(n−1)/6 is easily proved by double counting, but it is not the exact maximum. Using group laws on certain cubic curves, one can construct n non-collinear points with n(n−3)/6 + 1 3-point lines, and Green and Tao [13] proved (for large n) that this is optimal. This does not follow directly from the Dirac-Motzkin conjecture, but it does follow from the above-mentioned structure theorem of Green and Tao for sets with few ordinary lines (Theorem 5.1).
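The double-counting argument mentioned above can be written out in one line: every pair of the n points spans exactly one line, and each line through exactly three points accounts for three such pairs, so

```latex
% Each 3-point line accounts for \binom{3}{2}=3 of the \binom{n}{2} point pairs,
% and every pair lies on exactly one line:
\[
  3\,t_3 \;\le\; \binom{n}{2}
  \quad\Longrightarrow\quad
  t_3 \;\le\; \frac{1}{3}\binom{n}{2} \;=\; \frac{n(n-1)}{6},
\]
% where $t_3$ denotes the number of 3-point lines spanned by the $n$ points.
```

which is the easy upper bound; the true maximum n(n−3)/6 + 1 is slightly smaller.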
The analogous orchard problem for circles asks for the maximum number of circles passing through exactly four points from a set of n points. As far as we know, this question has not been asked before. We determine the exact maximum and the extremal sets for all sufficiently large n.
Although we do not consider other related problems, we remark that similar questions have been asked for ordinary conics [7,10,25], ordinary planes [2], and ordinary hyperplanes [3].
Results
Our first main result concerns the minimum number of ordinary circles spanned by a set of n points, not all lying on a line or a circle, and the structure of sets of points that come close to the minimum. The first part of the theorem solves Problem 6 in [8, Sect. 7.2]. (ii) Let C be sufficiently large. If a set P of n points in R^2 determines fewer than n^2/2 − Cn ordinary circles, then P lies on the disjoint union of two circles, or the disjoint union of a line and a circle.
In Sect. 4, we will describe constructions that meet the lower bound in part (i) of Theorem 1.1. For even n, the bound in part (i) is attained by certain constructions on the disjoint union of two circles, while for odd n, the bound is attained by constructions on the disjoint union of a line and a circle. The main tools in our proof are circle inversion and the structure theorem of Green and Tao [13] for sets with few ordinary lines, together with some classical results about algebraic curves and their interaction with inversion.
Let us define a generalised circle to be either a circle or a line. Because inversion maps circles and lines to circles and lines, it turns out that in our proof it is more natural to work with generalised circles. Alternatively, we could phrase our results in terms of the inversive plane (or Riemann sphere) R^2 ∪ {∞}, where ∞ is a single point that lies on all lines, which can then also be considered as circles. Yet another equivalent view would be to identify the inversive plane with the sphere S^2 via stereographic projection, and consider circles on S^2, which are in bijection with generalised circles. All our statements about generalised circles in R^2 could thus be formulated in terms of circles in R^2 ∪ {∞} or on S^2.
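The fact that inversion sends generalised circles to generalised circles is easy to check numerically. The sketch below (helper name ours) inverts points in a circle of given center and radius, viewed as complex numbers; for instance, the line x = 1 maps under inversion in the unit circle to the circle of radius 1/2 centred at 1/2:

```python
def invert(p, center=0j, radius=1.0):
    """Circular inversion of the point p (a complex number) in the
    circle with the given center and radius: p maps to the point on
    the same ray from the center, at distance radius^2 / |p - center|.
    """
    d = p - center
    return center + (radius ** 2) * d / abs(d) ** 2
```

Points 1 + ti on the line x = 1 map to (1 + ti)/(1 + t^2), all at distance exactly 1/2 from the point 1/2, confirming the line-to-circle behaviour.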
We define an ordinary generalised circle to be one that contains exactly three points from a given set. Our proof of Theorem 1.1 proceeds via an analogous theorem for ordinary generalised circles, which turns out to be somewhat easier to obtain.
Theorem 1.2 (Ordinary generalised circles)
(i) If n is sufficiently large, the minimum number of ordinary generalised circles determined by n points in R 2 , not all on a generalised circle, equals
n 2 /4 − n if n ≡ 0 (mod 4),
3n 2 /8 − n + 5/8 if n ≡ 1 (mod 4),
n 2 /4 − n/2 if n ≡ 2 (mod 4),
3n 2 /8 − 3n/2 + 17/8 if n ≡ 3 (mod 4).
(ii) Let C be sufficiently large. If a set P of n points in R 2 determines fewer than n 2 /2 − Cn ordinary generalised circles, then P lies on two disjoint generalised circles.
We also solve the analogue of the orchard problem for circles (for sufficiently large n). We define a 4-point (generalised) circle to be a (generalised) circle that passes through exactly four points of a given set of n points. The 'circular cubics' in part (ii) will be defined in Sect. 2.

Theorem 1.3 (4-point generalised circles)
(ii) Let C be sufficiently large. If a set P of n points in R 2 determines more than n 3 /24 − 7n 2 /24 + Cn 4-point generalised circles, then up to inversions, P lies on an ellipse or a smooth circular cubic.

Theorem 1.3 remains true if we replace 'generalised circles' by 'circles'. This is because we can apply an inversion to any set of n points with a maximum number of generalised circles in such a way that all straight-line generalised circles become circles. Therefore, the maximum is also attained by circles only.
The proofs of the above theorems are based on the following structure theorems in the style of Green and Tao [13]. The first gives a rough picture, by stating that a point set with relatively few ordinary generalised circles must lie on a bicircular quartic, a specific type of algebraic curve of degree four that we introduce in Sect. 2.

Theorem 1.4 (Weak structure theorem) Let K > 0 and let n be sufficiently large depending on K . If a set P of n points in R 2 spans at most K n 2 ordinary generalised circles, then all but at most O(K ) points of P lie on a bicircular quartic.
Ball [2] concurrently obtained a similar result as a consequence of a structure theorem for ordinary planes in R 3 . He shows that n points with O(n 2+1/6 ) ordinary circles must lie mostly on a quartic curve.
We define bicircular quartics in Sect. 2; they can be reducible, so in Theorem 1.4 the set P may also lie mostly on a lower-degree curve contained in a bicircular quartic. Our proof actually gives a more precise list of possibilities. The curve that P mostly lies on can be: a line; a circle; an ellipse; a line and a disjoint circle; two disjoint circles; a circular cubic that is acnodal or smooth; or a bicircular quartic that is an inverse of an acnodal or smooth circular cubic.
A more precise characterisation of the possible configurations with few ordinary generalised circles is given in the following theorem. The group structures referred to in the theorem are defined in Sect. 3; the circular points at infinity (α and β) referred to in Case (iii) are introduced in Sect. 2; and the 'aligned' and 'offset' double polygons are defined in Sect. 4.

Theorem 1.5 (Strong structure theorem) Let K > 0 and let n be sufficiently large depending on K . If a set P of n points in R 2 spans at most K n 2 ordinary generalised circles, then up to inversions and similarities, P differs in at most O(K ) points from a configuration of one of the following types:
(i) a subset of a line;
(ii) a subgroup of an ellipse;
(iii) a coset H ⊕ x of a subgroup H of a smooth circular cubic, for some x such that 4x ∈ H ⊕ α ⊕ β, where α and β are the two circular points at infinity;
(iv) a double polygon that is 'aligned' or 'offset'.
Conversely, every set of these types defines at most O(K n 2 ) ordinary generalised circles.
In Sect. 2, we carefully introduce circular cubics and bicircular quartics, and show their connection to inversion. In Sect. 3, we define group laws on these curves, which help us construct point sets with few ordinary (generalised) circles in Sect. 4. In Sect. 5, which forms the core of our proof, we derive Theorems 1.4 and 1.5 from the structure theorem of Green and Tao [13]. In Sect. 6, we combine the structure theorems with our analysis of the constructions from Sect. 4 to establish the precise statements in Theorems 1.1, 1.2, and 1.3.
Circular Curves and Inversion
The key tool in our proof is circle inversion, as it was in the earlier papers [1,12,26] on the ordinary circles problem; the first to use circle inversion in Sylvester-Gallai problems was Motzkin [18]. The simple reason for the relevance of circle inversion is that if we invert in a point of the given set, an ordinary circle through that point is turned into an ordinary line. Thus we can use results on ordinary lines, like those of Green and Tao [13], to deduce results about ordinary circles. To do this successfully, we need a thorough understanding of the effect of inversion on algebraic curves, and in particular we need to introduce the special class of circular curves.
Circular Curves and Circular Degree
In this subsection, we work in the real projective plane RP 2 , and partly in the complex projective plane CP 2 . See for instance [22,App. A] for an appropriate introduction to projective geometry. We use the homogeneous coordinates [x : y : z] for points in RP 2 or CP 2 , and we think of the line with equation z = 0 as the line at infinity. An affine algebraic curve in R 2 , defined by a polynomial f ∈ R[x, y], can be naturally extended to a projective algebraic curve, by taking the zero set of the homogenisation of f . This curve in RP 2 then extends to CP 2 , by taking the complex zero set of the homogenised polynomial.
We define the circular points to be the points α = [1 : i : 0] and β = [1 : −i : 0] on the line at infinity in CP 2 . The circular points play a key role in this paper, due to the fact that every circle contains both circular points. Moreover, any real conic containing α and β is either a circle, or a union of a line with the line at infinity. We could thus consider a generalised circle to be a conic that contains both circular points.
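This can be checked directly: substituting the circular points α = [1 : i : 0] and β = [1 : −i : 0] into the homogenisation of an arbitrary circle gives zero (a standalone numerical sketch, not part of the original text; the circle parameters are arbitrary):

```python
def circle_hom(x, y, z, a, b, r):
    # Homogenisation of the circle (x - a)^2 + (y - b)^2 = r^2.
    return (x - a * z) ** 2 + (y - b * z) ** 2 - (r * z) ** 2

alpha = (1, 1j, 0)
beta = (1, -1j, 0)

# Every circle passes through both circular points: at z = 0 the
# homogenised equation reduces to x^2 + y^2, which vanishes at [1 : +-i : 0].
values = [circle_hom(*alpha, 3.0, -2.0, 5.0),
          circle_hom(*beta, 3.0, -2.0, 5.0)]
```

The same substitution on a non-circular conic would generally not vanish, which is why containing both circular points characterises generalised circles among conics.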
Definition 2.1
An algebraic curve in RP 2 is circular if it contains α and β. For k ≥ 2, an algebraic curve in RP 2 is k-circular if it has singularities of multiplicity at least k at both α and β.
A classical reference for circular curves is Johnson [16], while a more modern one is Werner [24]. Let us make the definition more explicit in three concrete cases.
A generalised circle is an algebraic curve of degree two that contains α and β; equivalently, it is a curve in RP 2 defined by a homogeneous polynomial of the form
t (x 2 + y 2 ) + z ℓ(x, y, z),
where t ∈ R, and ℓ ∈ R[x, y, z] is a non-trivial linear form. If t ≠ 0, then the curve is a circle, while if t = 0, the curve is the union of a line with the line at infinity.
A circular cubic is an algebraic curve of degree three that contains α and β; equivalently, it is any curve in RP 2 defined by a homogeneous polynomial of the form
(ux + vy)(x 2 + y 2 ) + z q(x, y, z), (1)
where u, v ∈ R, and q ∈ R[x, y, z] is a non-trivial quadratic homogeneous polynomial. Note that we do not require a circular cubic to be irreducible or smooth. For instance, the union of a line and a circle is a circular cubic, and so is the union of any conic with the line at infinity (take u = v = 0 in (1)).
A bicircular quartic is an algebraic curve of degree four that is 2-circular; equivalently, it is any curve in RP 2 defined by a homogeneous polynomial of the form
t (x 2 + y 2 ) 2 + z (ux + vy)(x 2 + y 2 ) + z 2 q(x, y, z), (2)
where t, u, v ∈ R, and q ∈ R[x, y, z] is a non-trivial homogeneous quadratic polynomial (see [24, Sect. 8.2] for a proof that a quartic is 2-circular if and only if its equation has the form (2)). A noteworthy example of a bicircular quartic is a union of two circles, for which it is easy to see that the curve has double points at α and β, since both circles contain those points. Every circular cubic is contained in a bicircular quartic, since for t = 0 in (2) we get a union of a circular cubic and the line at infinity. A non-circular conic is also contained in a bicircular quartic, since for t = u = v = 0 in (2) we get a union of a conic and z 2 = 0, which is a double line at infinity.
Definition 2.2
The circular degree of an algebraic curve γ in RP 2 is the smallest k such that γ is contained in a k-circular curve of degree 2k.
The circular degree is well-defined, since given any curve γ of degree k, we can add k copies of the line at infinity, to get a k-circular curve of degree 2k.
For example, a line has circular degree one, since its union with the line at infinity is a 1-circular curve of degree two. A conic that is not a circle has circular degree two, since its union with two copies of the line at infinity is a 2-circular curve of degree four. Similarly, a circular cubic has circular degree two, since its union with the line at infinity is a 2-circular curve of degree four. We can thus classify curves of low circular degree as follows:
• Circular degree one: lines and circles (that is, generalised circles).
• Circular degree two: conics that are not circles, circular cubics, and bicircular quartics.
This classification is important to us, because we will see that circular degree is invariant under inversion.
We have defined circular curves and circular degrees in the projective plane, because that is their most natural setting. In the rest of the paper, to avoid confusion between the projective and inversive planes, we will use these notions for curves in R 2 , with the understanding that to inspect the definitions we should consider RP 2 and CP 2 .
Inversion
Circular curves are intimately related to circle inversion, which we now introduce. A general reference is [6].
Definition 2.3
Let C( p, r ) be the circle with centre p ∈ R 2 and radius r > 0. The circle inversion with respect to C( p, r ) is the mapping I p,r : R 2 \{ p} → R 2 \{ p} defined by
I p,r (q) = p + (r 2 /|q − p| 2 )(q − p)
for q ≠ p. We write I p for I p,1 . We call p the centre of the inversion I p,r .
In the inversive plane R 2 ∪ {∞}, the inversion map can be completed by setting I p,r ( p) = ∞ and I p,r (∞) = p, so that inversions take generalised circles to generalised circles. The group of transformations of the inversive plane generated by the inversions and the similarities is called the inversive group. It is known that a bijection of the inversive plane that takes generalised circles to generalised circles has to be an element of this group, and that any element of this group is either a similarity or an inversion followed by an isometry [9, Thm. 6.71].
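As a quick numerical sanity check of this definition (a standalone Python sketch, not part of the original text; the sample circle is arbitrary), one can verify that a circle through the centre of inversion is mapped to a line:

```python
import math

def invert(p, q, r=1.0):
    # Inverse of q with respect to the circle with centre p and radius r:
    # I_{p,r}(q) = p + (r^2 / |q - p|^2) (q - p).
    dx, dy = q[0] - p[0], q[1] - p[1]
    d2 = dx * dx + dy * dy
    return (p[0] + r * r * dx / d2, p[1] + r * r * dy / d2)

def collinear(a, b, c, tol=1e-9):
    # Signed-area test for three points.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) < tol

# The circle of centre (1, 0) and radius 1 passes through the centre of
# inversion p = (0, 0); its image under I_p is the vertical line x = 1/2.
p = (0.0, 0.0)
circle_pts = [(1 + math.cos(t), math.sin(t)) for t in (0.5, 1.3, 2.0, 2.9)]
images = [invert(p, q) for q in circle_pts]
```

In particular, an ordinary circle through a point of the set becomes an ordinary line after inverting in that point, which is exactly how results on ordinary lines are imported below.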
The image of an algebraic curve in R 2 under an inversion is also an algebraic curve, in the following sense.
Definition 2.4 For any algebraic curve γ and any inversion I p,r , there is a unique inclusion-minimal algebraic curve γ ′ such that I p,r (γ \{ p}) ⊆ γ ′.

We refer to γ ′ as the inverse of γ with respect to the circle C( p, r ), and abuse notation slightly by writing γ ′ = I p,r (γ ). Also, since for different choices of radius r , I p,r (γ ) differs only by a dilatation in p, we will often only consider the inverse I p (γ ) = I p,1 (γ ) and refer to it as the inverse of γ in the point p.
If a curve has degree d, then its inverse has degree at most 2d [24,Thm. 4.14]. If γ is irreducible, then its inverse is also irreducible. Note that inverses of algebraic curves can behave somewhat unintuitively; for instance, Proposition 2.6 states that the inverse of an ellipse has an isolated point, which is surprising if one thinks of an ellipse as just a closed continuous curve.
It is well known that the inverses of generalised circles are again generalised circles. It turns out that, more generally, circular degree is preserved under inversion. We now make precise what this means for curves of low circular degree. A proof can be found in the classical paper [16]; for a more modern reference, see [24, Sect. 9.2].

Lemma 2.5 (Inversion and circular degree) Let C k be a curve of circular degree k. Then:
(i) The inverse of C 1 in any point is again a generalised circle.
(ii) The inverse of C 2 in a singular point on C 2 is a non-circular conic; the inverse of C 2 in a regular point on C 2 is a circular cubic; the inverse of C 2 in a point not on C 2 is a bicircular quartic.
(iii) The inverse of C 3 in a singularity of multiplicity three is a non-circular cubic; the inverse of C 3 in a singularity of multiplicity two is a circular quartic; the inverse of C 3 in a regular point on C 3 is a 2-circular quintic; the inverse of C 3 in a point not on C 3 is a 3-circular sextic.
One particular subcase of Case (ii) will play an important role in our paper, and we state it separately in Proposition 2.6. A proof can be found in [14, p. 202]. Let us recall that an acnodal cubic is a singular cubic with a singularity that is an isolated point; for example, (2x − 1)(x 2 + y 2 ) − y 2 = 0 is an acnodal circular cubic with a singularity at the origin.
Proposition 2.6
The inverse of an ellipse in a point on the ellipse is an acnodal circular cubic with the centre of inversion as its singularity; the inverse of an acnodal circular cubic in its singularity is an ellipse through the singularity.
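As a numerical illustration of Proposition 2.6 (our own sketch, not from the original text): for the acnodal circular cubic (2x − 1)(x 2 + y 2 ) − y 2 = 0 mentioned above, dividing the defining equation by (x 2 + y 2 ) 2 shows that inversion in the singularity at the origin maps the curve onto the ellipse (u − 1) 2 + 2v 2 = 1; the computation below confirms this pointwise:

```python
def cubic_point(t):
    # Point of (2x - 1)(x^2 + y^2) - y^2 = 0 on the line y = t*x;
    # substituting y = t*x and solving for x gives this rational parametrisation.
    x = (1 + 2 * t * t) / (2 * (1 + t * t))
    return x, t * x

def invert_origin(pt):
    # Unit-radius inversion I_(0,0) from Definition 2.3.
    x, y = pt
    d2 = x * x + y * y
    return x / d2, y / d2

params = (-2.0, -0.3, 0.0, 0.7, 1.9)
images = [invert_origin(cubic_point(t)) for t in params]
```

The acnode at the origin is precisely the point sent to infinity, which is why the isolated point of the cubic disappears and a smooth ellipse remains.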
Groups on Irreducible Circular Cubics
The extremal configurations in our main theorems are all based on group laws on certain circular curves. It is well known that irreducible smooth cubics (elliptic curves) have a group law (see for instance [22]). These groups play a crucial role in the work of Green and Tao [13]. The reason that these groups are relevant to ordinary lines is the following collinearity property of this group (when defined in the standard way): three points on the curve are collinear if and only if in the group they sum to the identity element. For this property to hold, the identity element must be an inflection point. Here we will define a group in a slightly different way (described for instance in [22, Sect. 1.2]), in which the identity element is not necessarily an inflection point, and the same collinearity property does not hold. However, for circular cubics, we show that we can choose the identity element so that we get a similar property for concyclicity.

First let γ be any irreducible cubic, write γ * for its set of regular points, and pick an arbitrary point o ∈ γ * . We describe an additive group operation ⊕ on the set γ * for which o is the identity element. The construction is depicted in Fig. 1. Given a, b ∈ γ * , let a * b be the third intersection point of γ and the line ab, and define a ⊕ b to be (a * b) * o, the third intersection point of γ and the line through a * b and o. When a = b, the line ab should be interpreted as the tangent line at a; when a * b = o, the line through a * b and o should be interpreted as the tangent line to γ at o. We refer to [22] for a more careful definition and a proof that this operation really does give a group.

Now consider a circular cubic γ . Since the circular points α and β lying on it are conjugate, γ has a unique real point on the line at infinity, which we choose as our identity element o.
We define the point ω to be the third intersection point of the tangent line to γ at o (if there is no third intersection point, then o is an inflection point, and we consider o itself to be the third point). Throughout this paper we will use ω to denote this special point on a circular cubic; note that ω is not fixed like α and β, but depends on γ . Also note that ω is real, since it corresponds to the third root of a real cubic polynomial whose other two roots correspond to the real point o.

With this group law, we no longer have the property that three points are collinear if and only if they sum to o (unless o happens to be an inflection point). Nevertheless, one can check that three points a, b, c ∈ γ * are collinear if and only if a ⊕ b ⊕ c = ω. More important for us, four points of γ * lie on a generalised circle if and only if they sum to ω. This amounts to a classical fact (see [4, Art. 225] for an equivalent statement), but we include a proof for completeness. We use the following version of the Cayley-Bacharach Theorem, due to Chasles (see [11]).

Theorem 3.1 (Chasles) Let γ 1 and γ 2 be two cubic curves intersecting in nine points. If a cubic curve γ passes through eight of these nine points, then it also passes through the ninth.

Recall from Sect. 2 that a generalised circle, viewed projectively, is either a circle, or the union of a line with the line at infinity.

Proposition 3.2 Let γ be an irreducible circular cubic. Four points a, b, c, d ∈ γ * , counted with multiplicity in the case of tangency, lie on a generalised circle if and only if a ⊕ b ⊕ c ⊕ d = ω.
Proof We consider the cubic γ extended to CP 2 . We first show the forward direction. All statements in the proof should be considered with multiplicity.
If the generalised circle is the union of a line ℓ and the line at infinity ℓ ∞ , then ℓ ∪ ℓ ∞ intersects γ in a, b, c, d, α, β. Since ℓ intersects γ in at most three points, one of the points a, b, c, d must equal o, say d = o. Since ℓ ∞ also intersects γ in at most three points, we must have a, b, c ∈ ℓ. Thus a, b, c are collinear, and we have a ⊕ b ⊕ c = ω, by the definition of the group law. Since d = o is the identity, it follows that a ⊕ b ⊕ c ⊕ d = ω.

Suppose next that the generalised circle is a circle σ , and intersects γ in a, b, c, d, α, β. The construction that follows is depicted in Fig. 2. Let ℓ 1 be the line through o and a * b (and thus through a ⊕ b), ℓ 2 the line through a and b (and thus through a * b), and ℓ 3 the line through c and a ⊕ b. Note that σ and ℓ ∞ intersect in α and β. Then γ 1 = σ ∪ ℓ 1 and γ 2 = ℓ 2 ∪ ℓ 3 ∪ ℓ ∞ are two cubic curves that intersect in nine points, of which the eight points a, b, c, a * b, a ⊕ b, o, α, and β certainly lie on γ ; the remaining point is the third intersection point of γ 1 and ℓ 3 beside c and a ⊕ b, which we denote by d ′. By Theorem 3.1, γ contains d ′. By the group law on γ , the collinear points c, a ⊕ b, d ′ satisfy c ⊕ (a ⊕ b) ⊕ d ′ = ω, so d ′ = ω ⊖ (a ⊕ b ⊕ c); since d ′ lies on both σ and γ , we conclude that d ′ = d, and hence a ⊕ b ⊕ c ⊕ d = ω. For the converse, note that the generalised circle through a, b, c meets γ in a unique fourth point, which by the forward direction must equal ω ⊖ (a ⊕ b ⊕ c) = d.
Groups on Other Circular Curves
We now define group laws on two other types of curves of circular degree two, and observe that they satisfy similar concyclicity properties. Let us note at this point that most bicircular quartics can also be given a group structure (if an irreducible bicircular quartic has no singularities besides α and β, then it is a curve of genus one, and thus has a group law by [21, Sect. III.3]). However, in our proofs we will handle bicircular quartics by inverting in a point on the curve, which by Lemma 2.5 transforms a bicircular quartic into a circular cubic. For that reason, we do not need to study the group law on bicircular quartics separately.
Ellipses We discuss a group law on ellipses, although we do not actually need it in our proof, because inversion lets us transform an ellipse into an acnodal cubic (Proposition 2.6), which we have already given a group structure in the previous subsection. Nevertheless, we treat the group law on ellipses here because it is especially elementary, and it would be strange not to mention it.
Consider the ellipse σ given by the equation x 2 + (y/s) 2 = 1, with s ≠ 0, 1. For any point a ∈ σ , we project a vertically to the point a ′ on the unit circle around the origin, as in Fig. 3, and call the angle θ a of a ′ the eccentric angle of a. We define the sum of two points a, b ∈ σ to be the point c = a ⊕ b whose eccentric angle is θ c = θ a + θ b . This gives σ a group structure isomorphic to R/Z. The identity element is o = (1, 0), and the inverse of a point is its reflection in the x-axis. We have the following classical fact that describes when four points on an ellipse are concyclic (see [15] for the oldest reference we could find, and [5, Problem 17.2] for two detailed proofs).

Proposition 3.3 Four points a, b, c, d on the ellipse σ , counted with multiplicity, lie on a generalised circle if and only if a ⊕ b ⊕ c ⊕ d = o, that is, if and only if θ a + θ b + θ c + θ d ≡ 0 (mod 2π).
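The classical fact just cited (four points of σ are concyclic precisely when their eccentric angles sum to a multiple of 2π) is easy to test numerically. The following is a standalone sketch with arbitrarily chosen angles; it uses the fact that four points, viewed as complex numbers, lie on a common circle or line exactly when their cross-ratio is real:

```python
import math

def ellipse_point(theta, s=0.5):
    # Point of x^2 + (y/s)^2 = 1 with eccentric angle theta, as a complex number.
    return complex(math.cos(theta), s * math.sin(theta))

def on_common_gcircle(a, b, c, d, tol=1e-9):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

thetas = [0.3, 1.1, 2.5]
thetas.append(-sum(thetas))   # now the eccentric angles sum to 0 (mod 2*pi)
pts = [ellipse_point(t) for t in thetas]
```

Perturbing the fourth angle destroys the concyclicity, so the condition is genuinely an "if and only if".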
Concentric circles
We now define a group on the union of two disjoint circles. For notational convenience, we identify R 2 with C. After an appropriate inversion, we can assume the circles to be σ 1 = {e 2πit : t ∈ [0, 1)} and σ 2 = {re −2πit : t ∈ [0, 1)} with r > 1. We represent the point e 2πit of σ 1 by r 0 e 2πit and the point re −2πit of σ 2 by r 1 e 2πit , so that each element of σ 1 ∪ σ 2 is written as r ε e 2πit with ε ∈ Z 2 (with the obvious convention r 0 = 1 and r 1 = r ). We define a group operation on σ 1 ∪ σ 2 by r ε 1 e 2πit 1 ⊕ r ε 2 e 2πit 2 = r (ε 1 +ε 2 ) mod 2 e 2πi(t 1 +t 2 ) , which turns σ 1 ∪ σ 2 into a group isomorphic to R/Z × Z 2 , with identity element o = 1 = r 0 e 2πi·0 . We again have the following concyclicity property, which is easily seen using symmetry.

Proposition 3.4 Four points a, b ∈ σ 1 and c, d ∈ σ 2 , counted with multiplicity, lie on a generalised circle if and only if a ⊕ b ⊕ c ⊕ d = o.
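The same cross-ratio test confirms the concyclicity property for this group (a standalone sketch, not part of the original text). Note that, matching the re −2πit convention used later in the paper for σ 2 , the outer circle is parametrised in the reversed angular direction; this is what makes "group parameters summing to zero" correspond to concyclicity:

```python
import cmath, math

def circle_point(eps, t, r=1.7):
    # Group element r^eps e^(2*pi*i*t); as a point of the plane, the outer
    # circle is traversed in the opposite direction (the re^(-2*pi*i*t) convention).
    angle = 2j * math.pi * t
    return cmath.exp(angle) if eps == 0 else r * cmath.exp(-angle)

def on_common_gcircle(a, b, c, d, tol=1e-9):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

# Two points on each circle whose group parameters sum to 0 (mod 1):
ta, tb, tc = 0.12, 0.41, 0.77
td = -(ta + tb + tc)
quad = [circle_point(0, ta), circle_point(0, tb),
        circle_point(1, tc), circle_point(1, td)]
```

Geometrically, after rotating so that the two inner points form a mirror pair about the x-axis, the parameter condition forces the two outer points to form a mirror pair as well, and a circle through three of the points then contains the fourth by symmetry.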
Constructions

Construction 4.1: Ellipse
Let σ be the ellipse defined by x 2 + (y/s) 2 = 1, with the group structure introduced in Sect. 3.2. Let n ≥ 5. We have a finite subgroup of size n given by
S = {(cos(2πk/n), s sin(2πk/n)) : k = 0, 1, . . . , n − 1},
the points of σ whose eccentric angles are multiples of 2π/n. By Proposition 3.3, the circle through any three points a, b, c ∈ S passes through the point d = ⊖a ⊖ b ⊖ c ∈ S. Therefore, the only way S spans an ordinary circle is when d coincides with one of the points a, b, c (which occurs if the circle is tangent to σ at that point). It follows that the number of ordinary circles is equal to n 2 /2 − O(n). Similarly, the number of 4-point circles is equal to n 3 /24 − O(n 2 ).
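For small n this construction can be checked by brute force (a standalone sketch, not the paper's computation). We count generalised circles through exactly three of the points both geometrically, via the cross-ratio test, and combinatorially, via the tangency criterion that the fourth group point ⊖a ⊖ b ⊖ c must coincide with one of a, b, c; for n = 9 both counts agree (28, consistent with n 2 /2 − O(n)):

```python
import math
from itertools import combinations

N, S = 9, 0.5
pts = [complex(math.cos(2 * math.pi * k / N), S * math.sin(2 * math.pi * k / N))
       for k in range(N)]

def on_common_gcircle(a, b, c, d, tol=1e-7):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

# Geometric count: triples whose circle contains no further point of the set.
geometric = sum(
    1 for i, j, k in combinations(range(N), 3)
    if not any(on_common_gcircle(pts[i], pts[j], pts[k], pts[t])
               for t in range(N) if t not in (i, j, k)))

# Group-law count: the circle through indices k1, k2, k3 meets the subgroup
# again at -(k1 + k2 + k3) mod N, so it is ordinary iff that index is one
# of the three (a tangent circle).
combinatorial = sum(1 for tri in combinations(range(N), 3)
                    if (-sum(tri)) % N in tri)
```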
Construction 4.2: Circular Cubic Curve
Let γ be an irreducible circular cubic, and let ⊕ be the group operation defined in Sect. 3.1. It is well known (see for instance [13]) that the group (γ * , ⊕) is isomorphic to the circle R/Z if γ is acnodal or if γ is smooth and has one connected component, and is isomorphic to R/Z × Z 2 if γ is smooth and has two connected components. Let H n be a subgroup of order n of γ * , and let x ∈ γ * be such that 4x = ω ⊖ h for some h ∈ H n . By Proposition 3.2, the number of ordinary generalised circles in the coset S = H n ⊕ x equals n 2 /2 − O(n), and the exact count is greater than the corresponding number in the previous construction.
Construction 4.3: 'Aligned' Double Polygons
Let n ≥ 6 be even and set m = n/2. We identify R 2 with C. Let σ 1 be the circle with centre the origin and radius one, and σ 2 the circle with centre the origin and radius r > 1. Let S 1 = {e 2πik/m : k = 0, . . . , m − 1} ⊂ σ 1 and S 2 = {re 2πik/m : k = 0, . . . , m − 1} ⊂ σ 2 . Thus, S 1 and S 2 are the vertex sets of regular m-gons on σ 1 and σ 2 that are 'aligned' in the sense that their points lie at the same set of angles from the common centre (see Fig. 4).
Let S = S 1 ∪ S 2 . By Proposition 3.4, the points a, b ∈ σ 1 , c, d ∈ σ 2 are collinear or concyclic if and only if a ⊕ b ⊕ c ⊕ d = o. In particular, if a = b, then the generalised circle through the three points is tangent to σ 1 . It follows that if n ≥ 8, the ordinary generalised circles of S are exactly those through e 2πik 1 /m , re −2πik 2 /m , re −2πik 3 /m or through re −2πik 1 /m , e 2πik 2 /m , e 2πik 3 /m , where 2k 1 + k 2 + k 3 ≡ 0 (mod m), with k 2 ≢ k 3 (mod m).
For generic r > 1, we then obtain that the number of ordinary generalised circles equals 2m⌊(m − 1)/2⌋ (although k 2 and k 3 are not ordered, we either have two points on σ 1 or two points on σ 2 ). This equals m(m − 2) if m is even and m(m − 1) if m is odd. That is, for generic r , we obtain n 2 /4 − n ordinary generalised circles if n ≡ 0 (mod 4) and n 2 /4 − n/2 ordinary generalised circles if n ≡ 2 (mod 4). If we choose r = (cos(2π k/m)) −1 (there are m/4 choices for r ), then the tangent lines at points of S 1 pass through two points of S 2 , so are ordinary generalised circles but not circles. Thus, for these choices of r we lose m ordinary circles, and obtain n 2 /4 − 3n/2 ordinary circles if n ≡ 0 (mod 4) and n 2 /4 − n ordinary circles if n ≡ 2 (mod 4). Note that this is much less than the number of ordinary circles given by Constructions 4.1 and 4.2.
Similarly, the number of 4-point generalised circles spanned by S equals n 3 /32 + O(n 2 ), also much less than the number in Constructions 4.1 and 4.2.
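The ordinary count can be confirmed by brute force for small m (a standalone sketch, not part of the original text): for m = 5, so n = 10 ≡ 2 (mod 4), and the generic radius r = 1.5, the aligned double polygon spans exactly n 2 /4 − n/2 = 20 ordinary generalised circles:

```python
import cmath, math
from itertools import combinations

M, R = 5, 1.5
pts = ([cmath.exp(2j * math.pi * k / M) for k in range(M)] +
       [R * cmath.exp(2j * math.pi * k / M) for k in range(M)])

def on_common_gcircle(a, b, c, d, tol=1e-7):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

def ordinary_gcircles(points):
    # Generalised circles (circles or lines) through exactly three points.
    n = len(points)
    return sum(
        1 for i, j, k in combinations(range(n), 3)
        if not any(on_common_gcircle(points[i], points[j], points[k], points[t])
                   for t in range(n) if t not in (i, j, k)))
```

Every ordinary generalised circle found this way is tangent to one of the two circles, in line with the group-law description above.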
Construction 4.5: Punctured Double Polygons
Let n = 2m − 1 ≥ 11 be odd. Take Construction 4.3 with n + 1 = 2m points and remove an arbitrary point p ∈ S 1 .
First assume that m is odd. Before we remove p, there are m(m − 1) ordinary generalised circles. Of these, there are (m − 1)/2 tangent at p. There are also m − 1 ordinary generalised circles through p tangent at some point of S 2 . Thus, by removing p, we destroy 3(m − 1)/2 ordinary generalised circles and create one new one for each 4-point generalised circle through p, of which there are (m − 1) 2 /2. Therefore, S\{ p} has m(m − 1) − 3(m − 1)/2 + (m − 1) 2 /2 = (m − 1)(3m − 4)/2 ordinary generalised circles. That is, there are 3n 2 /8 − n + 5/8 ordinary generalised circles if n ≡ 1 (mod 4).

Next assume that m is even. Before we remove p, there are m(m − 2) ordinary generalised circles, of which there are (m − 2)/2 through two different points of S 2 tangent at p, and there are also m − 2 ordinary generalised circles through p tangent at a point of S 2 . As before, we destroy 3(m − 2)/2 ordinary generalised circles and create (m 2 − 2m + 2)/2 new ones, so we obtain m(m − 2) − 3(m − 2)/2 + (m 2 − 2m + 2)/2 = 3(m − 1)(m − 2)/2 + 1 ordinary generalised circles. Thus, we obtain 3n 2 /8 − 3n/2 + 17/8 ordinary generalised circles if n ≡ 3 (mod 4).

Instead of starting with Construction 4.3, we can take the 'offset' Construction 4.4 and remove a point. It is easy to see that when n ≡ 1 (mod 4) we obtain the same number of ordinary generalised circles, while if n ≡ 3 (mod 4) we obtain more.
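This count, too, can be confirmed by brute force (a standalone sketch, not part of the original text): removing one vertex of the aligned double polygon with m = 7 and generic r = 1.5 leaves n = 13 ≡ 1 (mod 4) points spanning 3n 2 /8 − n + 5/8 = 51 ordinary generalised circles:

```python
import cmath, math
from itertools import combinations

M, R = 7, 1.5
double_polygon = ([cmath.exp(2j * math.pi * k / M) for k in range(M)] +
                  [R * cmath.exp(2j * math.pi * k / M) for k in range(M)])
punctured = double_polygon[1:]   # remove one vertex of the inner polygon

def on_common_gcircle(a, b, c, d, tol=1e-7):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

def ordinary_gcircles(points):
    # Generalised circles (circles or lines) through exactly three points.
    n = len(points)
    return sum(
        1 for i, j, k in combinations(range(n), 3)
        if not any(on_common_gcircle(points[i], points[j], points[k], points[t])
                   for t in range(n) if t not in (i, j, k)))
```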
Construction 4.6: Inverted Double Polygons
We can use inversion to make new constructions out of old ones. For instance, inverting a double polygon (Construction 4.3 or 4.4) in one of its points produces an odd number of points lying on the disjoint union of a line and a circle, spanning 3n 2 /8 − O(n) ordinary generalised circles.
If we remove another point from this inverted construction, we obtain a set of n points where n is even, with 3n 2 /8 − O(n) ordinary circles.
Other Inverted Examples
If we invert Construction 4.1 in a point on the ellipse that is not in the set S, then by Proposition 2.6, we obtain points on an acnodal circular cubic (without its acnode) as in Construction 4.2, with the same number of ordinary and 4-point generalised circles.
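The invariance of these counts under inversion in a point outside the set is easy to confirm numerically (a standalone sketch, using Construction 4.1 with n = 9 and a centre of inversion on the ellipse but outside the set):

```python
import math
from itertools import combinations

N, S = 9, 0.5
ellipse = [complex(math.cos(2 * math.pi * k / N), S * math.sin(2 * math.pi * k / N))
           for k in range(N)]
# A point of the ellipse that is not in the set (eccentric angle pi/N).
p = complex(math.cos(math.pi / N), S * math.sin(math.pi / N))

def invert(z, centre):
    # Unit-radius inversion: I_p(z) = p + (z - p)/|z - p|^2.
    return centre + 1 / (z - centre).conjugate()

def on_common_gcircle(a, b, c, d, tol=1e-7):
    # Four points lie on a circle or a line iff their cross-ratio is real.
    cr = (a - c) * (b - d) / ((a - d) * (b - c))
    return abs(cr.imag) <= tol * (1 + abs(cr))

def ordinary_gcircles(points):
    n = len(points)
    return sum(
        1 for i, j, k in combinations(range(n), 3)
        if not any(on_common_gcircle(points[i], points[j], points[k], points[t])
                   for t in range(n) if t not in (i, j, k)))

inverted = [invert(z, p) for z in ellipse]
```

Since the centre is not in the set, inversion is a bijection of the point set that maps generalised circles to generalised circles, so it preserves the number of points on each of them.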
If we invert a circular cubic in a point not on the curve, then we obtain a bicircular quartic by Lemma 2.5. There will again be n 2 /2 − O(n) ordinary circles (or ordinary generalised circles) and n 3 /24 − O(n 2 ) 4-point circles among the inverted points.
Proof of the Weak Structure Theorem
The proofs of our structure theorems for sets with few ordinary circles crucially rely on the following structure theorem for sets with few ordinary lines due to Green and Tao [13]. Recall that an ordinary line is a line containing exactly two points of the given point set.

Theorem 5.1 (Green and Tao [13]) Let K > 0 and let n be sufficiently large depending on K . If a set of n points in R 2 spans at most K n ordinary lines, then all but at most O(K ) of the points lie on a line, on the union of a line and a conic, or on a cubic that is acnodal or smooth.

We commence the proof of Theorem 1.4. Let P be a set of n points spanning at most K n 2 ordinary generalised circles. We wish to show that P lies mostly on a bicircular quartic (we will repeatedly use 'mostly' to mean 'for all but O(K ) points').
Note that for at least 2n/3 points p of P, there are at most 9K n ordinary circles through p, hence the set I p (P\{ p}) spans at most 9K n ordinary lines. Let P ′ be the set of such points. For n sufficiently large depending on K , applying Theorem 5.1 to I p (P\{ p}) for any p ∈ P ′ gives that I p (P\{ p}) lies mostly on a line, a line and a conic, an acnodal cubic, or a smooth cubic.
If there exists p ∈ P ′ such that I p (P\{ p}) lies mostly on a line, then inverting again in p, we see that P must lie mostly on a line or a circle.
If there exists p ∈ P ′ such that I p (P\{ p}) lies mostly on a line ℓ and a disjoint conic σ , we have two cases, depending on whether p lies on ℓ or not.
If p ∈ ℓ, we invert again in p to find that P lies mostly on the union of ℓ and I p (σ ). By Lemma 2.5, I p (σ ) is either a circle (if σ is a circle) or an irreducible bicircular quartic (if σ is a non-circular conic). Furthermore, p is the only point that could possibly lie on both ℓ and I p (σ ). Since roughly n/2 points of P lie on ℓ, there must be another point q ∈ ℓ ∩ P ′ that does not lie on I p (σ ). In I q (P\{q}), the line ℓ remains a line, and by definition of P ′ the set I q (P\{q}) spans few ordinary lines, so Theorem 5.1 tells us I q (I p (σ )) is a conic. It follows from Lemma 2.5 that I p (σ ) cannot be a quartic, since we inverted in the point q outside I p (σ ) and did not obtain a quartic. That means I p (σ ) has to be a circle, and it is disjoint from ℓ. Thus, P lies mostly on the union of a line and a disjoint circle.
If p ∉ ℓ, we invert in p to see that P lies mostly on the union of the circle I p (ℓ) and the curve I p (σ ), which is either a circle or a quartic. Again p is the only point that can lie on both curves. Inverting in another point q ∈ I p (ℓ) ∩ P ′, the curve I q (I p (ℓ)) becomes a line, so Theorem 5.1 tells us that I q (I p (σ )) is a conic, so that I p (σ ) must be a circle disjoint from I p (ℓ) as before. Thus, P lies mostly on the union of two disjoint circles.
The case that remains is when for all p ∈ P ′, the set I p (P\{ p}) lies mostly on an acnodal or smooth cubic γ . Fix such a p, and consider I p (γ ), which mostly contains P. If γ is not a circular cubic, then by the classification in Sect. 2 it has circular degree three, so I p (γ ) has circular degree three as well. For any q ∈ I p (γ ) ∩ P ′ other than p, the curve I q (I p (γ )) is also a cubic curve, by the definition of P ′ and Theorem 5.1. By Case (iii) of Lemma 2.5, this can only happen if q is a singularity of I p (γ ). But I p (γ ) is an irreducible curve of degree at most six, and so has at most ten singularities by [23, Thm. 4.4], which is a contradiction. So γ must be a circular cubic that is acnodal or smooth. If γ is acnodal, then I p (γ ) is either a bicircular quartic (if p ∉ γ ), an acnodal circular cubic (if p is a regular point of γ ), or a non-circular conic (if p is the singularity of γ ). In the last case, the conic is an ellipse by Proposition 2.6. If γ is smooth, then I p (γ ) is either a bicircular quartic or a smooth circular cubic.
We have encountered the following curves that P could mostly lie on: a line, a circle, an ellipse, a disjoint union of a line and a circle, a disjoint union of two circles, a circular cubic, or a bicircular quartic. All of these are subsets of bicircular quartics, which proves the statement of Theorem 1.4.
Proof of the Strong Structure Theorem
We now prove Theorem 1.5. First of all, as explained in Sect. 4, a subgroup of an ellipse and an appropriate coset of a subgroup of a smooth circular cubic both have at most n 2 /2 ordinary generalised circles, and a double polygon has at most n 2 /4 ordinary generalised circles. It follows from Lemma 5.2 below that if we add and/or remove O(K ) points, then there will be at most O(K n 2 ) ordinary generalised circles.
Lemma 5.2 Let S be a set of n points in R 2 with s ordinary generalised circles. Let T be a set that differs from S in at most K points: |S △ T | ≤ K . Then T has at most s + O(K n 2 + K 2 n + K 3 ) ordinary generalised circles.
Proof First note that if we add a point to any set of n points, we create at most n(n − 1)/2 < n 2 ordinary generalised circles, since each new ordinary generalised circle passes through the new point and exactly two old points. Secondly, since two circles intersect in at most two points, the number of 4-point circles through a fixed point in a set of n points is at most (n − 1)(n − 2)/6, so by removing a point we create at most (n − 1)(n − 2)/6 < n 2 ordinary generalised circles. It follows that by adding and removing O(K ) points, we create at most O(K ) · (n + K ) 2 = O(K n 2 + K 2 n + K 3 ) ordinary generalised circles.
Next, let P be a set of n points with at most K n 2 ordinary generalised circles. From the proof of Theorem 1.4 above, we see that P differs in at most O(K ) points from a line, a circle, an ellipse, a disjoint union of a line and a circle, a disjoint union of two circles, a circular cubic, or a bicircular quartic. Moreover, in the proof we saw that the circular cubic must be acnodal or smooth, and that the bicircular quartic has the property that if we invert in a point on the curve, the resulting circular cubic is acnodal or smooth.
Using inversions, we can reduce the number of types of curves that we need to analyse further.
• If P lies mostly on a line, then we are in Case (i) of Theorem 1.5, so we are done.
• If P lies mostly on a circle, then inverting in a point on the circle puts us in Case (i) again. • If P lies mostly on an ellipse, then inverting in a point of the ellipse places P mostly on an acnodal circular cubic. • If P lies mostly on a bicircular quartic, then inverting in any regular point on the curve gives us a circular cubic. As mentioned above, this cubic is acnodal or smooth. • If P lies mostly on a line and a disjoint circle, then an inversion in a point not on the line or circle places P mostly on two disjoint circles. • If P lies mostly on the disjoint union of two circles, we can apply an inversion that maps the two disjoint circles to two concentric circles [6, Thm. 1.7].
So, up to inversions, we need only consider the cases when P lies mostly on an acnodal or smooth circular cubic, or on two concentric circles. We do this in Lemmas 5.5 and 5.6 below, which will complete the proof of Theorem 1.5.
To determine the structure of P, we use a variant of a lemma from additive combinatorics that was used by Green and Tao [13]. It captures the principle that if a finite subset of a group is almost closed under addition, then it is close to a subgroup. The following statement is Proposition A.5 in [13].
Proposition 5.3 Let K > 0 and let n be sufficiently large depending on K . Let A, B, C be three subsets of some abelian group (G, ⊕), all of cardinality within K of n. Suppose there are at most K n pairs (a, b) ∈ A × B for which a ⊕ b ∉ C. Then there is a subgroup H ≤ G and cosets H ⊕ x, H ⊕ y such that |A △ (H ⊕ x)|, |B △ (H ⊕ y)|, |C △ (H ⊕ x ⊕ y)| = O(K ).
The variant that we need is a simple corollary of Proposition 5.3.

Corollary 5.4 Let K > 0 and let n be sufficiently large depending on K . Let A, B, C, D be subsets of some abelian group (G, ⊕), with A, B, C all of cardinality within K of n. Suppose there are at most K n 2 triples (a, b, c) ∈ A × B × C for which a ⊕ b ⊕ c ∉ D. Then there are a subgroup H ≤ G and cosets H ⊕ x, H ⊕ y, H ⊕ z such that |A △ (H ⊕ x)|, |B △ (H ⊕ y)|, |C △ (H ⊕ z)|, |D △ (H ⊕ x ⊕ y ⊕ z)| = O(K ).

Proof By the pigeonhole principle, there exists an a 0 ∈ A such that there are at most O(K n) pairs (b, c) ∈ B × C for which a 0 ⊕ b ⊕ c ∉ D, that is, b ⊕ c ∉ D ⊖ a 0 . Applying Proposition 5.3, we have a subgroup H ≤ G and cosets H ⊕ y, H ⊕ z such that B, C, D ⊖ a 0 differ in at most O(K ) elements from H ⊕ y, H ⊕ z, H ⊕ y ⊕ z, respectively. Since |B ∩ (H ⊕ y)| ≥ n − O(K ), we repeat the argument above to obtain b 0 ∈ B ∩ (H ⊕ y) such that there are at most O(K n) pairs (a, c) ∈ A × C with a ⊕ b 0 ⊕ c ∉ D, and Proposition 5.3 gives the subgroup H and cosets H ⊕ x, H ⊕ z as required.

Lemma 5.5 (Circular cubic) Let K > 0 and let n be sufficiently large depending on K . Suppose P is a set of n points in R 2 spanning at most K n 2 ordinary generalised circles, and all but at most K points of P lie on an acnodal or smooth circular cubic γ . Then there is a coset H ⊕ x of a subgroup H ≤ γ * , with 4x ∈ H ⊕ ω, such that |P △ (H ⊕ x)| = O(K ).

Proof Let P ′ = P ∩ γ * . Then |P △ P ′| = O(K ), and by Lemma 5.2, P ′ spans at most O(K n 2 ) ordinary circles. If a, b, c ∈ γ * are distinct, then by Proposition 3.2, the generalised circle through a, b, c meets γ again in the unique point d = ω ⊖ (a ⊕ b ⊕ c). This implies that d ∈ P ′ for all but at most O(K n 2 ) triples a, b, c ∈ P ′, or equivalently a ⊕ b ⊕ c ∈ ω ⊖ P ′ for all but at most O(K n 2 ) triples. Applying Corollary 5.4 with A = B = C = P ′ and D = ω ⊖ P ′, we obtain H ≤ γ * and a coset H ⊕ x such that |P ′ △ (H ⊕ x)| = O(K ) and 4x ∈ H ⊕ ω.

Lemma 5.6 (Concentric circles) Let K > 0 and let n be sufficiently large depending on K . Suppose P is a set of n points in R 2 spanning at most K n 2 ordinary generalised circles. Suppose all but at most K of the points of P lie on two concentric circles, and that P has n/2 ± O(K ) points on each. Then, up to similarity, P differs in at most O(K ) points from an 'aligned' or 'offset' double polygon.
Proof By scaling and rotating, we can assume that P lies mostly on the two concentric circles σ₁ = {e^{2πit} : t ∈ [0, 1)} and σ₂ = {re^{−2πit} : t ∈ [0, 1)} with r > 1, which we gave a group structure in Sect. 3.2.
Let P₁ = P ∩ σ₁ and P₂ = P ∩ σ₂. Then |P △ (P₁ ∪ P₂)| = O(K), and by Lemma 5.2, P₁ ∪ P₂ spans at most O(Kn²) ordinary circles. If a, b ∈ σ₁ and c ∈ σ₂ with a ≠ b, then by Lemma 3.4, the generalised circle through a, b, c meets σ₁ ∪ σ₂ again in the unique point d = ⊖(a ⊕ b ⊕ c). This implies d ∈ P₂ for all but at most O(Kn²) triples (a, b, c) with a, b ∈ P₁ and c ∈ P₂. Applying Corollary 5.4 with A = B = P₁, C = P₂ and D = ⊖P₂, we get cosets H ⊕ x and H ⊕ y of a subgroup H of σ₁ ∪ σ₂ such that |P₁ △ (H ⊕ x)|, |P₂ △ (H ⊕ y)| = O(K) and 2x ⊕ 2y ∈ H, where x ∈ σ₁ and y ∈ σ₂. It follows that H ⊆ σ₁, hence H is a cyclic group of order m = n/2 ± O(K), and H ⊕ x and H ⊕ y are the vertex sets of regular m-gons inscribed in σ₁ and σ₂, respectively, either 'aligned' or 'offset' depending on whether x ⊕ y ∈ H or not. □
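The four-point group law used in this proof is easy to sanity-check numerically. The sketch below is illustrative only and not part of the argument; the function names are ours, and it assumes the parametrisations t ↦ e^{2πit} of σ₁ and s ↦ re^{−2πis} of σ₂ from Sect. 3.2, under which four points (two on each circle) are concyclic exactly when their parameters sum to 0 (mod 1). It computes the second intersection d of the circle through a, b ∈ σ₁ and c ∈ σ₂ with σ₂, and checks that its parameter is −(t₁ + t₂ + s₁) mod 1:

```python
import math

def on_sigma1(t):
    # Point e^{2*pi*i*t} on the unit circle sigma_1.
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def on_sigma2(s, r):
    # Point r*e^{-2*pi*i*s} on the concentric circle sigma_2 of radius r.
    return (r * math.cos(2 * math.pi * s), -r * math.sin(2 * math.pi * s))

def circumcenter(p, q, w):
    # Centre of the circle through three non-collinear points.
    ax, ay = p; bx, by = q; cx, cy = w
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def fourth_point_parameter(t1, t2, s1, r=2.0):
    # The circle through a, b on sigma_1 and c on sigma_2 meets sigma_2
    # again at the mirror image of c across the line joining the two
    # centres (the origin and the circumcentre of a, b, c).
    a, b, c = on_sigma1(t1), on_sigma1(t2), on_sigma2(s1, r)
    ux, uy = circumcenter(a, b, c)
    n2 = ux * ux + uy * uy
    dot = (c[0] * ux + c[1] * uy) / n2
    d = (2 * dot * ux - c[0], 2 * dot * uy - c[1])
    return (-math.atan2(d[1], d[0]) / (2 * math.pi)) % 1.0

# The parameter of d is -(t1 + t2 + s1) mod 1: the four parameters of a
# concyclic quadruple sum to 0 mod 1.
assert abs(fourth_point_parameter(0.1, 0.3, 0.2) - 0.4) < 1e-9
```

The check is independent of the radius r, as the derivation via the two parallel chords of σ₁ and σ₂ suggests it should be.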
Together these lemmas prove Theorem 1.5. It just remains to remark that if P differs in O(K ) points from a coset on an acnodal circular cubic, then we apply inversion in its singularity. By Proposition 2.6, we obtain that P differs in O(K ) points from a coset H ⊕ x of a finite subgroup H of an ellipse, where 4x = o. Thus, x is a point of the ellipse with eccentric angle a multiple of π/2. After a rotation, we can assume that x = o, which is Case (ii) of Theorem 1.5.
Extremal Configurations
In this section we prove Theorems 1.1, 1.2, and 1.3. We first consider generalised circles.
Ordinary Generalised Circles
Suppose P is an n-point set in R² spanning fewer than n²/2 ordinary generalised circles, and that P is not contained in a generalised circle. Applying Theorem 1.5, we can conclude that, up to inversions, P differs in O(1) points from one of the following examples: points on a line, a coset of a subgroup of an acnodal or smooth circular cubic, or a double polygon.
The first type of set is very easy to handle. Note that the lower bound is on the number of ordinary circles, not counting 3-point lines.

Lemma 6.1 Let P be a set of n points such that exactly n − K of them lie on a line ℓ, where 1 ≤ K ≤ (n − 4)/2. Then P spans at least K(n − 2K)(n − K)/2 ≥ (n − 1)(n − 2)/2 ordinary circles.

Proof Let ℓ be a line such that |P ∩ ℓ| = n − K. For any p ∈ P ∩ ℓ and q ∈ P \ ℓ there are at most K − 1 non-ordinary circles through p, q, another point on P ∩ ℓ, and another point in P \ ℓ, so at least n − 2K of the n − K − 1 circles through p, q and a third point of P ∩ ℓ are ordinary. Summing over the K choices of q, there are at least K(n − 2K) ordinary circles through p. This holds for any of the n − K points p ∈ P ∩ ℓ, and since each such circle contains two points of P ∩ ℓ, we obtain at least K(n − 2K)(n − K)/2 ordinary circles. It is easy to see that when 1 ≤ K ≤ (n − 4)/2, K(n − 2K)(n − K)/2 is minimised when K = 1. □
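The final minimisation step is elementary and can be confirmed by brute force. The following throwaway check (illustrative, not part of the proof; the function name is ours) verifies that K(n − 2K)(n − K)/2 attains its minimum at K = 1 over the range 1 ≤ K ≤ (n − 4)/2, where it equals (n − 1)(n − 2)/2:

```python
def ordinary_circle_bound(K, n):
    # The lower bound K(n - 2K)(n - K)/2 from the proof above.
    return K * (n - 2 * K) * (n - K) / 2

# Over the admissible range 1 <= K <= (n - 4)/2 the bound is smallest at
# K = 1, where it equals (n - 1)(n - 2)/2.
for n in range(8, 300):
    best = min(range(1, (n - 4) // 2 + 1),
               key=lambda K: ordinary_circle_bound(K, n))
    assert best == 1
    assert ordinary_circle_bound(1, n) == (n - 1) * (n - 2) / 2
```

The cubic K(n − 2K)(n − K)/2 increases from K = 1 up to a local maximum near K ≈ 0.21n and then decreases, so on the admissible range the minimum is attained at one of the endpoints, and the value at K = (n − 4)/2 is (n² − 16)/2, which exceeds (n − 1)(n − 2)/2 for n > 6.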
Cosets on cubics are also relatively easy to handle. We again obtain a lower bound on the number of ordinary circles, not including 3-point lines.

Lemma 6.2 Let H ⊕ x be a coset of a finite subgroup of an acnodal or smooth circular cubic γ as in Construction 4.2, and suppose a set P of n points differs in K points from H ⊕ x. Then P spans at least n²/2 − O(Kn) ordinary circles.

Proof Suppose that P differs in K points from H ⊕ x. We know from Construction 4.2 that H ⊕ x spans n²/2 − O(n) ordinary circles, all of which are tangent to γ. We show that adding or removing K points destroys no more than O(Kn) of these ordinary circles, so that the resulting set P still spans at least n²/2 − O(Kn) ordinary circles.
Suppose we add a point q ∉ H ⊕ x. For p ∈ H ⊕ x, at most one circle tangent to γ at p can pass through q. Thus, adding q destroys at most n ordinary circles. Now suppose we remove a point p ∈ H ⊕ x. Since ordinary circles of H ⊕ x correspond to solutions of 2p ⊕ q ⊕ r = ω or p ⊕ 2q ⊕ r = ω, there are at most O(n) solutions for a fixed p. Thus removing p destroys at most O(n) ordinary circles.
Repeating K times, we see that adding or removing K points to or from H ⊕ x destroys at most O(Kn) ordinary generalised circles out of the n²/2 − O(n) spanned by H ⊕ x. This proves that P spans at least n²/2 − O(Kn) ordinary circles. □
From the two lemmas above we know that there is an absolute constant C such that a set of n points, not all collinear or concyclic, spanning at most n²/2 − Cn ordinary generalised circles, differs in O(1) points from Case (iv) in Theorem 1.5. This case, where P is close to the vertex set of a double polygon, requires a more careful analysis of the effect of adding or removing points.
We use the following special case of a result due to Raz et al. [20].
Proposition 6.3 If P ⊂ R² is a set of n points contained in two circles, then the number of lines with at least three points of P is at most O(n^{11/6}).
Proof Denote the two circles by σ₁ and σ₂. We use [20, Thm. 6.1], which states that for (not necessarily distinct) algebraic curves C₁, C₂, C₃ of constant degree and finite sets S_i ⊂ C_i of size at most n, the number of collinear triples (p₁, p₂, p₃) ∈ S₁ × S₂ × S₃ is O(n^{11/6}), unless C₁ ∪ C₂ ∪ C₃ is a line or a cubic. Let C₁ = σ₁ and C₂ = C₃ = σ₂. Set S_i = P ∩ C_i for i = 1, 2, 3. Every line with at least one point of S₁ and two points of S₂ = S₃ corresponds to a collinear triple in S₁ × S₂ × S₃. Since the union of two circles is not a line or a cubic, we can apply the theorem to get the bound O(n^{11/6}) for the number of collinear triples in P with one point on σ₁ and two points on σ₂. Similarly, the number of collinear triples in P with one point on σ₂ and two points on σ₁ is also O(n^{11/6}). Since a line intersects σ₁ ∪ σ₂ in at most four points, we also obtain the bound O(n^{11/6}) for the number of lines with at least three points. □

Lemma 6.4 Let S be the vertex set of an 'aligned' or 'offset' double polygon, let A ⊆ S and B ⊂ R² \ S with |A| = a and |B| = b, and suppose that P = (S \ A) ∪ B has n points. Then P spans at least (2 + a + 4b)n²/8 − O(n^{11/6}) ordinary generalised circles.

Proof By Constructions 4.3 and 4.4, S spans n²/4 − O(n) ordinary generalised circles. Removing a point p ∈ A turns every 4-point generalised circle of S through p into an ordinary generalised circle of S \ {p}, and there are n²/8 − O(n) of these, while only the O(n) ordinary generalised circles of S through p are destroyed; so each removed point contributes a further n²/8 − O(n) ordinary generalised circles.

Now consider adding q ∈ B to S. For any pair of points from S \ A, adding q ∈ B creates a new ordinary generalised circle, unless the generalised circle through the pair and q contains three or four points of S \ A. We already saw that the number of ordinary generalised circles hitting a fixed point is O(n), so it remains to bound the number of 4-point generalised circles of S that hit q. If q lies on one of the concentric circles, then no 4-point generalised circles hit q, so we can assume that q does not. Applying inversion in q reduces the problem to bounding the number of 4-point lines determined by a subset of two circles. By Proposition 6.3, this number is bounded by O(n^{11/6}), so q lies on at most O(n^{11/6}) of the 4-point generalised circles spanned by S. Adding q to S thus creates at least n²/2 − O(n^{11/6}) ordinary generalised circles. Note that each p ∈ A that was removed destroys at most n of these circles.
Adding q to S \ A also destroys at most O(n) ordinary circles, since for each p ∈ S there is only one circle tangent at p and going through q, and for each p ∈ A, at most m ordinary circles spanned by S \ A go through p. Finally, since there are at most 2m circles through two points of B that also go through two points of S \ A, P = (S \ A) ∪ B spans at least (1/4 + a/8 + b/2)n² − O(n^{11/6}) ordinary generalised circles. □

Theorem 1.2 then follows easily from the lemmas above.
Proof of Theorem 1.2 Suppose that P is a set of n points in R² with fewer than n²/2 − Cn ordinary generalised circles, where C is sufficiently large. Without loss of generality, n is also sufficiently large. By Lemmas 6.1 and 6.2, we need only consider the case where P differs by O(1) points from a double polygon. In the notation of Lemma 6.4, we have P = (S \ A) ∪ B and (2 + a + 4b)/8 < 1/2, which implies that a ≤ 1 and b = 0. So P is either equal to S, or is obtained from S by removing one point, which are exactly the cases in Constructions 4.3, 4.4, and 4.5. In particular, the minimum number of ordinary generalised circles occurs in Construction 4.3 when n ≡ 0 (mod 4), in Construction 4.5 when n ≡ 1, 3 (mod 4), and in Constructions 4.3 and 4.4 when n ≡ 2 (mod 4). □
Ordinary Circles
We now consider what happens if we do not count generalised circles that are lines, and prove Theorem 1.1.
Proof of Theorem 1.1 Let P be a set of n points not all on a line or a circle, with at most n²/2 − Cn ordinary circles, for a sufficiently large C. By a simple double counting argument, there are at most n²/6 3-point lines, so there are at most 2n²/3 − O(n) ordinary generalised circles. By Theorem 1.5, up to inversions and up to O(1) points, P lies on a line, an ellipse, a smooth circular cubic, or two concentric circles. By Lemmas 6.1 and 6.2, the first three cases give us at least n²/2 − O(n) ordinary circles, contrary to assumption. Therefore, we only need to consider the case where, when P is transformed by an inversion to P′, we have P′ = (S \ A) ∪ B, where S is a double polygon ('aligned' or 'offset'), and |A| = a, |B| = b.
By Lemma 6.4, P′ has at least (2 + a + 4b)n²/8 − O(n^{11/6}) ordinary generalised circles, which gives us the inequality (2 + a + 4b)/8 < 2/3, which in turn gives us a ≤ 3 and b = 0. Therefore, P′ lies on two concentric circles, and P lies on the disjoint union of two circles or the disjoint union of a line and a circle.
Suppose that a = 3 (and b = 0). Then P′ has 5n²/8 − O(n) ordinary generalised circles. Those passing through the centre of the inversion that transforms P to P′ are inverted back to straight lines passing through three points of P. As in the proof of Lemma 6.4, there are n²/8 − O(n) ordinary generalised circles that pass through any point of A. Also, we can use Lemma 6.5 below to show that there are at most O(n) ordinary generalised circles spanned by S \ A that intersect in the same point not in S. Indeed, by Lemma 6.5, there are at most n/2 ordinary generalised circles of S that intersect in the same point p ∉ S. Furthermore, for each point q ∈ A there are O(n) generalised circles through p, q, and two more points of S. It follows that there are O(n) ordinary generalised circles spanned by S \ A through p.
Thus, if the centre of inversion is in A, P has n²/2 − O(n) ordinary circles, which is a contradiction if C is chosen large enough. On the other hand, if the centre of inversion is not in A, then P has 5n²/8 − O(n) ordinary circles, also a contradiction. Therefore, we have a ≤ 2, which means that P is a set of n points as in Constructions 4.3, 4.4, 4.5, or 4.6.
Next, suppose that n is even. If a = 2, then there are n²/2 − O(n) ordinary generalised circles, and through both points of A there are n²/8 − O(n) ordinary generalised circles. If we invert in one of these points in A, we obtain a set with 3n²/8 − O(n) ordinary circles (as in Construction 4.6), which is not extremal. Otherwise, a = 0, P′ is as in Constructions 4.3 or 4.4, and there are at least n²/4 − n ordinary generalised circles if n ≡ 0 (mod 4) and n²/4 − n/2 if n ≡ 2 (mod 4). Let p be the centre of the inversion that transforms P to P′. Then all the 3-point lines of P are inverted to ordinary circles in the double polygon P′, all passing through p. By Lemma 6.5 below, there are at most n/2 ordinary circles that intersect in the same point not in P′. Thus, in P there are at most n/2 3-point lines, and the number of ordinary circles (not including lines) is at least n²/4 − 3n/2 if n ≡ 0 (mod 4) and n²/4 − n if n ≡ 2 (mod 4), which match Construction 4.3 (and Construction 4.4 if n ≡ 2 (mod 4)), if the radii are chosen so that each vertex of the inner polygon has an ordinary generalised circle that is a straight line tangent to it.
Finally, suppose that n is odd. Then a = 1 and P′ is as in Construction 4.5, with 3n²/8 − O(n) ordinary generalised circles. It follows that P must be as in Construction 4.6, with n²/4 − 3n/4 + 1/2 ordinary circles if n ≡ 1 (mod 4) and n²/4 − 5n/4 + 3/2 ordinary circles if n ≡ 3 (mod 4). This finishes the proof. □

Lemma 6.5 Let S be the vertex set of an 'aligned' or 'offset' double polygon, with m points on each of two concentric circles. Then any point q ∉ S lies on at most m of the ordinary generalised circles spanned by S.

Proof Denote the inner circle by σ₁ and the outer circle by σ₂, both with centre o. We proceed by case analysis on the position of q with respect to σ₁ and σ₂. Note that for each point p ∈ S, at most one of the ordinary generalised circles tangent at p can go through q.
If q lies on either σ 1 or σ 2 , then q does not lie on any ordinary generalised circle spanned by S.
If q lies inside σ 1 , then q lies on at most m ordinary generalised circles spanned by S, since ordinary generalised circles tangent to σ 1 cannot pass through q. Similarly, if q lies outside σ 2 , it lies on at most m ordinary generalised circles, since ordinary generalised circles tangent to σ 2 lie inside σ 2 .
The remaining case to consider is when q lies in the annulus bounded by σ₁ and σ₂. Consider the subset S′ ⊆ S of points p such that there exists an ordinary generalised circle tangent at p going through q. Consider the four circles passing through q and tangent to both σ₁ and σ₂. They touch σ₁ at a₁, b₁, c₁, d₁ and σ₂ at a₂, b₂, c₂, d₂ as in Fig. 6. Any circle through q tangent to σ₁ and intersecting σ₂ in two points must touch σ₁ on one of the open arcs a₁b₁ or c₁d₁. Similarly, any circle through q tangent to σ₂ and intersecting σ₁ in two points must touch σ₂ on one of the open arcs a₂c₂ or b₂d₂. It follows that S′ must be contained in the union of the relative interiors of these four arcs. Since S consists of m equally spaced points on each of σ₁ and σ₂,

|S′| < 2m(∠a₁ob₁ + ∠c₁od₁ + ∠b₂od₂ + ∠a₂oc₂)/(4π) = m(θ + ϕ)/π,

where θ and ϕ are as indicated in Fig. 6. In order to show that |S′| ≤ m, it suffices to show that the angle sum θ + ϕ is strictly less than π. This is clear from Fig. 6 (note that a₁, o, a₂ are collinear with a₁ and a₂ on opposite sides of o). □
Four-Point Circles
Suppose first that all but O(1) points of P lie on an acnodal or smooth circular cubic γ, so that by Lemma 5.5 we may write P = ((H ⊕ x) \ A) ∪ B for a coset H ⊕ x of a finite subgroup H ≤ γ* with |H| = m, where |A| = a and |B| = b are O(1). Removing a point of H ⊕ x destroys n²/6 − O(n) of its 4-point generalised circles, since the 4-point generalised circles through a fixed point correspond to solutions of p ⊕ q ⊕ r ⊕ s = ω with p fixed. For each p ∈ B, the number of ordinary generalised circles spanned by H ⊕ x passing through p is at most O(m). This is because each such generalised circle is tangent to the cubic at one of the points of H ⊕ x, and there is only one generalised circle through p and tangent at a given point of H ⊕ x. Also, for each pair of distinct p, q ∈ B, there are at most O(m) generalised circles through p and q and two points of H ⊕ x; and for any three p, q, r ∈ B there are at most O(1) generalised circles through p, q, r and one point of H ⊕ x. Therefore, again by inclusion-exclusion, by adding B we gain at most O(m) 4-point generalised circles.
It follows that the number of 4-point generalised circles determined by P is at most

n³/24 − (a + 3b)n²/24 + O(n).

Since we assumed that P determines at least n³/24 − n²/24 + O(n) of them, we obtain a + 3b < 1. Therefore, a = b = 0 and P = H ⊕ x. The maximum number of 4-point circles in a coset has been determined in Constructions 4.1 and 4.2.
The final case, when all but O(1) points of P lie on an ellipse, can be reduced to the previous case. Indeed, by Lemma 2.6, if we invert the ellipse in a point on the ellipse, we obtain an acnodal circular cubic, and then the above analysis holds verbatim for the group of regular points on this cubic.
Outcomes of induction versus spontaneous onset of labour at 40 and 41 GW: findings from a prospective database, Sri Lanka
Objectives The World Health Organization recommends induction of labour (IOL) for low risk pregnancy from 41 + 0 gestational weeks (GW). Nevertheless, in Sri Lanka IOL at 40 GW is a common practice. This study compares maternal/newborn outcomes after IOL at 40 GW (IOL40) or 41 GW (IOL41) versus spontaneous onset of labour (SOL). Methods Data were extracted from the routine prospective individual patient database of the Soysa Teaching Hospital for Women, Colombo. IOL and SOL groups were compared using logistic regression. Results Of 13,670 deliveries, 2359 (17.4%) were singleton and low risk at 40 or 41 GW. Of these, 456 (19.3%) women underwent IOL40, 318 (13.5%) IOL41, and 1585 (67.2%) SOL. Both IOL40 and IOL41 were associated with an increased risk of any maternal/newborn negative outcomes (OR = 2.21, 95%CI = 1.75–2.77, p < 0.001 and OR = 1.91, 95%CI = 1.47–2.48, p < 0.001 respectively), maternal complications (OR = 2.18, 95%CI = 1.71–2.77, p < 0.001 and OR = 2.34, 95%CI = 1.78–3.07, p < 0.001 respectively) and caesarean section (OR = 2.75, 95%CI = 2.07–3.65, p < 0.001 and OR = 3.01, 95%CI = 2.21–4.12, p < 0.001 respectively). Results did not change in secondary and sensitivity analyses. Conclusions Both IOL groups were associated with higher risk of negative outcomes compared to SOL. Findings, potentially explained by selection bias, local IOL protocols and CS practices, are valuable for Sri Lanka, particularly given contradictory findings from other settings. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-022-04800-1.
Introduction
Senanayake et al. BMC Pregnancy and Childbirth (2022) 22:518, page 2 of 10

Over the past decades, induction of labour (IOL) rates have continued to rise, with a reported average incidence of one out of four births at term (from 37 + 0 gestational weeks [GW]) in high-income countries, and very similar rates in low and middle-income countries (LMIC) [1]. According to the World Health Organization (WHO), IOL should be performed only when there is a clear medical indication and the expected benefits outweigh its potential harms [2]. As perinatal risks increase with gestational age, the current recommendation from WHO, the National Institute for Health and Care Excellence (NICE), and most scientific societies is to perform IOL in women who are known with certainty to have reached 41 GW (i.e., from 41 + 0 GW) [1, 3-6]. However, especially in the last few years, the debate on optimal timing for IOL and, specifically, whether IOL around term improves birth outcomes, has become very lively. The most recent Cochrane review (2018), including 30 randomized clinical trials (RCTs), seven conducted in southeast Asia, highlighted that IOL from 37 GW compared to expectant management is associated with fewer perinatal deaths, neonatal intensive care unit admissions, babies with low Apgar score and caesarean sections (CS), but also with more operative vaginal deliveries (OVD) [7]. Authors concluded that further investigation is needed into optimal timing of IOL, together with exploration of women's risk profiles and preferences [7].
More recently, other evidence has emerged. In 2019, a meta-analysis of cohort studies including 15 million pregnancies in high-income countries reported that stillbirth increases slightly but significantly from 37 GW onward with a 64% increase in the risk of stillbirth at 41 GW compared to 40 GW [8], thus suggesting the opportunity of elective IOL even before the traditional cut-off of 41 GW.
Other relevant RCTs were published in parallel. A single-centre RCT in the UK among nulliparous women over 35 years old without complications showed no significant difference in maternal and newborn outcomes between IOL at 39 GW and expectant management [9]. More recently, the ARRIVE trial, a multicentre RCT conducted by Grobman et al. among 6106 low-risk nulliparous women in the US compared IOL at 39 GW to expectant management and found lower incidence of CS with IOL (RR 0.84; 95%CI 0.76-0.93) and no significant differences in perinatal deaths or severe neonatal complications (RR 0.80; 95%CI 0.64-1.00) [10]. A meta-analysis of cohort studies [11] confirmed the results of this trial [10].
Two other RCTs in uncomplicated singleton pregnancies-INDEX, a Dutch trial enrolling 1801 women [12], and SWEPIS, a Swedish multicentre trial in 14 hospitals including 2760 women [13]-found that IOL at 41 GW was associated with fewer adverse perinatal outcomes than expectant management until 42 GW [12,13]. Notably, the SWEPIS study was stopped early because of higher perinatal mortality with SOL [13].
On the other hand, a national retrospective register-based cohort study evaluating the effects of changes in routine elective IOL policies in Denmark (42 GW versus 41 + 3 and 41 + 5 GW) found no differences in neonatal outcomes including stillbirth, despite the number of women with IOL increasing significantly [14]. Additionally, a systematic review reported that IOL at 41 versus 42 GW was associated with an increased risk of CS (RR 1.11; 95%CI 1.09-1.14) and adverse maternal outcomes [15].
In conclusion, evidence is still contradictory and the debate is quite polarized. No clear context-specific evidence exists on women's preferences on IOL. The ARRIVE trial reported that US women in the IOL group had a positive perception of increased control over birth [10,16], while other qualitative systematic reviews concluded that the majority of women feared medical interventions, preferring a physiological birth promoting their physical and psychosocial capacities [16,17].
In addition, literature on outcomes of IOL around term versus expectant management in LMIC is very scarce. According to the WHO Global Survey on Maternal and Perinatal Health, IOL was performed in Asia in 12.1% of deliveries and associated with negative neonatal outcomes [18]. According to existing estimates, Sri Lanka has the highest IOL rate in Asia (about 35.5% of total deliveries) [1,18] with 77.2% of all IOL being elective [18].
Elective IOL at 40 GW is often clinically justified by local professionals on the basis of supposed earlier loss of foeto-placental function in South Asian populations compared with Caucasian women or Asian counterparts, and on the fear of increased risk of foetal morbidity [19][20][21]. Nevertheless, no study from Sri Lanka has so far explored outcomes of women or newborns with IOL at 40 GW versus 41 GW.
The main objective of this study was to compare the absence of a maternal or neonatal complications between low-risk women induced at 40 GW and those in spontaneous onset of labour (SOL) at 40 or 41 GW. Secondary objectives were to compare the absence of maternal or neonatal complications between women induced at 41 GW and those in SOL at 40 or 41 GW; and to compare the mode of delivery between induced women and those in SOL. Data for this study were collected over four years in a prospective individual patient database established in 2015 at the De Soysa Teaching Hospital for Women, Colombo, the largest maternity hospital in Sri Lanka.
Study design
This is an observational study reported according to the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement (Additional Table 1) [22].
Population and setting
Data collection, data quality assurance procedures and standard operating procedures used for the individual patient database are reported elsewhere [23]. Briefly, 150 variables (i.e., maternal sociodemographic characteristics, risk factors, process indicators, maternal and neonatal outcomes) were collected for each birth on two wards of the University Obstetric Unit at De Soysa Teaching Hospital for Women, using a standardised two-page form, and entered in real time in an electronic database. De Soysa is the largest referral hospital for maternity care in Sri Lanka and all deliveries occurring in these two wards from May 2015 to May 2019 were entered in the database and considered for inclusion. Overall data quality was routinely monitored with external independent random review of 5% of forms and 5% of entered births to maintain an error rate in data collection below 0.02% [24]. Data were also externally monitored for completeness and internal consistency at roughly 4-month intervals [23]. We included "low risk women" with singleton pregnancies and a foetus in cephalic presentation whose delivery occurred between 40 + 0 and 41 + 6 GW. 
We excluded all cases with any maternal or foetal characteristics which may have affected outcomes, such as: maternal obesity (Asian criteria-based body mass index [BMI] more than 27.5 [24]), previous CS, macrosomia at ultrasonography (defined as estimated birthweight exceeding the 90th centile for gestational age), hypertension disorders during pregnancy (i.e., pregestational or gestational hypertension, preeclampsia, eclampsia, HELLP syndrome), chorioamnionitis, major foetal malformations, intrauterine growth restriction at ultrasonography (IUGR), small for gestational age (SGA), pre-gestational diabetes, gestational diabetes with the need of drug therapy, maternal cardiac disease, maternal hypothyroidism, polyhydramnios, oligohydramnios, antepartum haemorrhage (APH), major placenta praevia, placental accretism, severe anaemia (haemoglobin < 7.0 g/dl) and other foetal and maternal pathological conditions, i.e., systemic lupus erythematosus, pre-pregnancy deep venous thrombosis, epilepsy, suspected cephalopelvic disproportion, recurrent infection, pancreatitis or glomerulonephritis in pregnancy, chickenpox disease, chronic disease, and signs of potentially impaired foetal wellbeing (non-reassuring or pathological cardiotocography, reduced foetal movement, meconium-stained amniotic fluid). We also excluded macerated stillbirth before 41 + 0 GW, as those births are routinely induced. All women with a reported indication for IOL suggesting the presence of maternal or foetal characteristics described above, such as diabetes, macrosomia at ultrasound, or IUGR/SGA, were excluded from the analysis.
Comparison groups and outcomes
We compared women with IOL at 40 GW (40 + 0 to 40 + 6 GW), women with IOL at 41 GW (41 + 0 to 41 + 6 GW), and women with SOL in between 40 + 0 to 41 + 6 GW. Artificial separation of membranes alone was not considered as induction. Low risk women with prelabour rupture of membranes were included in the SOL group.
The main outcome is the absence of "negative outcomes", defined in line with previous literature [2,3,7] as any birth that included an intervention (i.e., CS, OVD) and/or a maternal or neonatal complication (i.e., not completely physiological).
As listed in Additional Table 2, maternal complications included in the definition of negative outcomes were: abruptio placentae, amniotic fluid embolism, cord prolapse, hysterectomy, intensive care unit admission, maternal death, near miss (defined as severe disease such as pre-eclampsia, eclampsia, sepsis, uterine rupture; critical interventions such as Intensive Care Unit admission, interventional radiology, laparotomy, blood transfusion; or organ dysfunction), operative theatre admission after delivery, perineal tears of 3rd-4th degree, postpartum haemorrhage (defined as a blood loss above 500 ml), sepsis or severe infection, uterine rupture, and other maternal complications not further specified in the database. Included neonatal complications were: Apgar score less than 5 at 10 min, asphyxia (i.e., no spontaneous start of breathing, ventilation for at least 30 s and/or thoracic compressions or any drug administration), jaundice with exchange transfusion, major birth trauma (i.e., brachial plexus injury/arm palsy, fractures at any site, sub-aponeurotic haemorrhage), meconium aspiration syndrome, need of feeding support, Neonatal Intensive Care Unit or Special Care Baby Unit admission, neonatal length of stay more than 10 days, perinatal deaths including stillbirth (both macerated and fresh stillbirth based on clinical evaluation), phototherapy for more than 24 h (included as a proxy of other neonatal complications such as large for gestational age), respiratory distress syndrome (defined as respiratory distress lasting more than 24 h), major neurological complications (e.g., seizures, ventricular haemorrhage), sepsis or infection, ventilation in delivery room, and other neonatal complications not further specified in the database.
Data analysis
Categorical variables were expressed as absolute numbers and compared among groups with the χ² or Fisher exact test, as appropriate.
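For a 2 × 2 comparison, the χ² statistic and its one-degree-of-freedom p-value can be computed directly. The sketch below is illustrative only: the counts are approximate reconstructions from percentages reported later in the Results (higher education in roughly 91 of 456 IOL40 mothers vs 211 of 1585 SOL mothers), not the exact study data, and the function name is ours:

```python
import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-square statistic (1 df) for the table [[a, b], [c, d]],
    # with the upper-tail p-value obtained via the complementary error
    # function: P(chi2_1 > x) = erfc(sqrt(x / 2)).
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Approximate counts reconstructed from the reported 20.0% vs 13.3%.
stat, p = chi2_2x2(91, 456 - 91, 211, 1585 - 211)
assert p < 0.01  # consistent in magnitude with the reported p = 0.001
```

With these reconstructed counts the statistic comes out near 12.4, giving a p-value of roughly 0.0004, on the same order as the p = 0.001 reported in the paper (the small discrepancy reflects rounding of the percentages).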
We evaluated the association between each group and negative outcome(s), CS, and OVD using multiple logistic regression models adjusting for baseline characteristics (i.e., maternal age, education, parity [nulliparous vs multiparous], BMI, neonatal weight). Results of logistic regression are also presented for CS and OVD, since they were evaluated as clinical outcomes related to failed induction in Sri Lanka [25]. A one-sided Cochran-Armitage test for trend was performed to assess the influence of changes of clinical protocols and staff training practices [26, 27] over different semesters of the study on CS and OVD. As secondary analyses we compared (i) IOL at 40 GW to a group composed of IOL at 41 GW and SOL, in line with analyses by Rydahl and colleagues [15], and (ii) IOL at 40 GW to IOL at 41 GW. The former analysis allowed the comparison between the IOL group at 40 GW and spontaneous labour at the same gestational age, and simultaneously took into account the risks of the ongoing pregnancy including all births at 41 GW, reducing possible bias, while the latter is a comparison of interest in the Sri Lankan setting due to the belief of an earlier loss of foeto-placental function in South Asian populations [19-21].
In addition, since for database construction we were not able to identify if reported hypertensive disorders (pregestational hypertension, preeclampsia, eclampsia, HELLP syndrome), chorioamnionitis, oligohydramnios, APH, and signs of potentially impaired foetal wellbeing (i.e. non-reassuring or pathological cardiotocography, reduced foetal movement, meconium stained amniotic fluid) from 41 + 0 GW were risk factors or complications related to the prolongation of the pregnancy, we performed a sensitivity analysis including women who developed these conditions and considering them as negative birth outcomes.
Data were analysed using STATA version 14.0 (Stata Corporation, College Station TX) and SAS/STAT ® software version 9. All statistical tests were two-sided and a p-value less than 0.05 was considered statistically significant.
Women's characteristics
A total of 13,670 women delivered in the hospital during the study period. Of these, 2359 (17.4%) matched our inclusion criteria of low-risk singleton pregnancy from 40 + 0 to 41 + 6 GW with the foetus in cephalic presentation (Fig. 1). Among the included women, SOL was observed in 1585 women (67.2%), while among 774 cases of IOL, 456 (58.9%) were induced from 40 + 0 to 40 + 6 GW, and 318 (41.1%) from 41 + 0 to 41 + 6 GW. Prostaglandin alone was the most frequent method of induction, used for more than 40% of induced women (48.8% in IOL at 40 GW and 43.6% in IOL at 41 GW), followed by artificial rupture of membranes, Foley catheter, oxytocin, or a combination of techniques, with no differences between groups except for the combination of prostaglandin, oxytocin and artificial rupture of membranes (6.8% in IOL at 40 GW vs 17.5% in IOL at 41 GW, p < 0.001) (Additional Table 3).
Some imbalances among groups were observed (Table 1). Women undergoing IOL at 40 GW had a significantly higher level of education compared to the SOL group (20.0% vs 13.3%, p = 0.001). Significantly more women were unmarried and overweight in the IOL at 41 GW group compared to SOL (unmarried women: 2.2% vs 0.9%, p = 0.040; overweight women: 29.9% vs 23.0%, p = 0.009). IOL group at 41 GW had an increased frequency of newborns with a birth weight between 3500 and 4000 g (19.2% vs 12.5% in IOL at 40 GW vs 14.8% in SOL, p = 0.035) and above 4000 g (2.5% vs 2.4% in IOL at 40 GW vs 0.8% in SOL, p = 0.006). Women with SOL were most often multiparous (52.4% vs 43.0% in IOL at 40 GW vs 37.7% in IOL at 41 GW, p < 0.001) and more frequently assisted at delivery by nurses (56.7% vs 43.9% vs 36.5%, p < 0.001), while mid-level medical staff (either senior house officers or registrars) was more often involved in IOL groups (30.7% vs 30.2% vs 14.1%, p < 0.001).
Primary outcomes
The overall incidence of births with one or more negative outcomes (including CS and OVD) is reported in Fig. 2. The rate was significantly lower in the SOL group (27.1%, p < 0.001) compared to IOL. The CS rate was significantly higher among women undergoing IOL either at 40 GW (25.4%) or at 41 GW (28.6%) when compared with SOL (10.3%, p < 0.001). However, no significant differences were found for OVD rate. The proportion of births with any other complication (see Additional Table 2 for the complete list of other complications) was not significantly different among groups (p = 0.222). Detailed data is reported in Additional Table 4.
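As an illustration of the effect size behind these rates, an unadjusted odds ratio with a Woolf (log-odds) 95% confidence interval can be computed from the counts implied by the percentages above (roughly 116 of 456 CS in IOL at 40 GW vs 163 of 1585 in SOL). This is an approximate reconstruction for illustration, not the study's adjusted model, and the function name is ours:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Unadjusted odds ratio for the 2x2 table [[a, b], [c, d]] with a
    # Woolf-type 95% CI computed on the log-odds scale.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Approximate CS counts: 116/456 (25.4%) in IOL40 vs 163/1585 (10.3%) in SOL.
or_, lo, hi = odds_ratio_ci(116, 456 - 116, 163, 1585 - 163)
assert lo > 1.0  # the excess CS risk with IOL40 is significant
```

The unadjusted estimate (about 3.0) is of the same order as the adjusted OR = 2.75 reported earlier; the difference reflects covariate adjustment in the regression model.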
The trend analysis (Additional Fig. 1) showed an increasing CS rate over semesters in the group with IOL at 40 GW only (trend test p = 0.021), whereas OVD rate decreased overall (trend test p = 0.016) and in IOL at 40 GW (p = 0.036).
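A trend in proportions over ordered semesters, as in the analysis above, is typically assessed with a Cochran-Armitage test. A minimal stdlib implementation, run on invented semester-by-semester CS counts (not the study data), looks like:

```python
from math import erfc, sqrt

def cochran_armitage(cases, totals, scores=None):
    """Two-sided Cochran-Armitage test for trend in proportions across ordered groups."""
    if scores is None:
        scores = list(range(len(totals)))
    n = sum(totals)
    p_hat = sum(cases) / n
    t_obs = sum(t * c for t, c in zip(scores, cases))
    t_exp = p_hat * sum(t * m for t, m in zip(scores, totals))
    var = p_hat * (1 - p_hat) * (
        sum(t * t * m for t, m in zip(scores, totals))
        - sum(t * m for t, m in zip(scores, totals)) ** 2 / n)
    z = (t_obs - t_exp) / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))  # two-sided normal p-value

# illustrative semester counts of CS out of deliveries, NOT the study data
cs_counts = [20, 25, 28, 34]
totals = [110, 108, 112, 105]
z, p = cochran_armitage(cs_counts, totals)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A positive z with a small p indicates an increasing trend, matching the direction reported for the IOL-at-40-GW group.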
Results of sensitivity analysis, which included additional women with oligohydramnios, APH and impaired foetal wellbeing complications from 41 + 0 GW (Additional Table 10), did not differ from the primary analysis with both IOL groups positively associated with higher odds of any negative birth outcome (Additional Table 11).
Main findings
In this study in Sri Lanka the practice of elective IOL at 40 GW or induction at 41 GW compared to SOL in a low-risk population was not associated with a reduction in complicated birth outcomes for the mother and/or the newborn. Both IOL groups were also associated with increased odds of CS compared to SOL.
Interpretation
Our findings are partially in line with the most recent Cochrane systematic review, confirming that there is evidence of a higher OVD rate in IOL at 40 GW vs IOL at 41 GW [7]. Discrepancies between our results for CS rates and other studies [7,9,12,14,28] could be accounted for by differences in setting, study design, and definitions of comparison groups. Our study was set in Sri Lanka and included recent data from a maternity hospital registry, evaluating the optimal timing of IOL in routine circumstances in a LMIC setting at predefined GW. Only 9 of 30 RCTs included in the Cochrane review were conducted in LMICs, while 13 (43%) studies were published from the 1960s to the 1980s [18]. Furthermore, comparison groups in the Cochrane review are not directly comparable, since the timing of IOL differed among included trials, as did the group definition and the timing and monitoring of expectant management. Moreover, while an RCT would be the most appropriate study design to address the question of optimal timing of IOL, this design has potential limitations due to difficulties in masking the intervention and the high proportion of women who declined participation (73% in the US study and 78% in the Swedish study [10,13]). The availability of a prospective database capturing the characteristics and outcomes of each delivery provides the opportunity to easily monitor indicators over time and compare practices and results in a real-world setting.
Overall, the findings of this study highlight the need for caution in generalizing the results of RCTs conducted in high-income settings to different clinical settings and populations. More studies should be conducted to further explore the ideal timing of IOL in LMICs.
Strengths and limitations
To our knowledge this is the first published study on the association between timing of IOL and maternal and newborn outcomes in low-risk pregnancies in Sri Lanka. It is also the first study from a setting with limited resources reporting on the use of a prospective individual-patient database to analyse practices and outcomes of IOL [23]. This study contributes to the current international and local debate on the appropriateness of IOL near term. These study findings are extremely relevant locally for clinicians, researchers and policy makers, as IOL at 40 GW is a common practice in Sri Lanka and has a significant economic impact on the health system and healthcare resources.

Fig. 1 Study sample selection. Notes: 1 High-risk pregnancy defined by the presence of one or more risk factors among: multiple pregnancy, non-cephalic presentations, BMI > 27.5, previous CS, hypertensive disorders (pregestational hypertension, preeclampsia, eclampsia, HELLP syndrome), chorioamnionitis, foetal malformations, IUGR/SGA, pregestational or gestational diabetes in drug therapy, maternal cardiac disease, polyhydramnios, oligohydramnios, APH, severe anaemia, systemic lupus erythematosus, pre-pregnancy deep venous thrombosis, epilepsy, pelvic dysfunction, recurrent infection, pancreatitis or glomerulonephritis in pregnancy, chickenpox disease, chronic disease, signs of potentially impaired foetal wellbeing (i.e. non-reassuring or pathological cardiotocography, reduced foetal movement, meconium-stained amniotic fluid). We also excluded macerated stillbirths from the group IOL at 40 GW, as these births are always induced. 2 Reported indications for IOL suggesting the presence of the above risk factors. Abbreviations: APH = antepartum haemorrhage; BMI = body mass index; CS = caesarean section; GW = gestational weeks; IOL = induction of labour; IUGR = intrauterine growth restriction at ultrasonography; SGA = small for gestational age; SOL = spontaneous onset of labour.
We acknowledge some limitations of this study. As an observational study, we could only assess associations between IOL and birth outcomes and not causation. Generalizability of study results may be limited by the characteristics of the local context and population in this single centre study. Larger sample sizes are required to detect significant differences in rare adverse events including stillbirth or maternal or perinatal death. Although gestational age was mostly determined by ultrasound examination, for 12% of the included women gestational age was estimated by menstrual dating.
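The sample-size caveat for rare adverse events can be made concrete with the usual normal-approximation formula for comparing two proportions, n = (z_alpha/2 + z_beta)^2 [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2. The rates below are hypothetical, chosen only to show the scale of trial needed for outcomes like perinatal death:

```python
# Normal-approximation sample size per arm for comparing two proportions
# (two-sided alpha = 0.05 and power = 0.80 with the default z values).
def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    return ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
            / (p1 - p2) ** 2)

# detecting a doubling of a rare outcome from 0.3% to 0.6%
print(round(n_per_group(0.003, 0.006)))   # several thousand women per arm
```

Even a doubling of a 0.3% event rate requires thousands of women per arm, far beyond the roughly 2400 included here.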
Socio-cultural background and women's empowerment may have affected both requests for induction and the type of care offered by physicians. Specifically, early induction (IOL at 40 GW) occurred more often in women with a high level of education. Unmarried women, still subjected to social stigma in Sri Lanka [29], were significantly more represented in the group undergoing IOL at 41 GW. Thus, numbers of CS and neonatal complications may have been influenced by socio-economic status. Other authors have described similar results, where unmarried women could have limited access to care [29] while higher social status or economic condition is related to an increasing medicalization of birth [30,31]. However, in our study, since these imbalances among groups affect results in different directions, there may be limited risk of bias.
Though results were corrected for confounders, we cannot exclude heterogeneity among groups. Nulliparous women were more frequent in the induced groups, where the highest frequency of CS was recorded. Also, the combination of prostaglandin, oxytocin and artificial rupture of membranes was more frequently used in IOL at 41 GW. Since nulliparity, the combination of induction techniques and induction itself are associated with an increased risk of CS [1,9], it is impossible to say whether the higher frequency of negative outcomes, maternal complications and CS in the IOL groups is related to the interventions themselves or reflects selection bias.
Furthermore, induced women may have differed on characteristics not captured or not reported in the data collection form (such as unreported small for gestational age).
Full counting statistics of multiple Andreev reflection
We derive the full counting statistics of charge transfer through a voltage biased superconducting junction. We find that for measurement times much longer than the inverse Josephson frequency, the counting statistics describes a correlated transfer of quanta of multiple electron charges, each quantum associated with the transfer of a single quasiparticle. An expression for the counting statistics in terms of the quasiparticle scattering amplitudes is derived.
Due to the discrete nature of electric charge, the current in mesoscopic conductors generally fluctuates. Over the last decade, there has been an increasing interest, theoretical as well as experimental [1], in the physics of current fluctuations. Most studies have focused on noise, the second moment of the fluctuations, but recently considerable interest has been shown in the full distribution of charge fluctuations, the full counting statistics (FCS) [2]. A variety of theoretical approaches to the FCS, ranging from quantum mechanical [3,4], via quasiclassical [5], to classical [6], have been developed. The third moment of current fluctuations was very recently measured [7], opening up the road to experimental investigation of the higher moments of the fluctuations as well.
Noninteracting electrons in purely normal conductors are transferred one by one [2].
In normal-superconducting junctions, the charge transfer mechanism across the normal-superconducting interface, at energies below the superconducting gap, is Andreev reflection. As a consequence, the FCS includes terms describing correlated transfer of pairs of electrons [8]. Recently, Belzig and Nazarov [9] studied the FCS in superconducting junctions with a fixed phase difference between the superconducting electrodes. They found that the classical interpretation of the FCS, as the probability to transfer a given number of electrons across the junction during the measurement, could imply negative probabilities. Coupling the junction to a detector [10], they showed that this resulted from an attempt to interpret the phenomenon of supercurrent with classical means.
In voltage biased superconducting junctions, the physical situation is quite different. Due to the applied voltage bias, the superconducting phase difference oscillates with the Josephson frequency 2eV /h, giving rise to both dc and ac-components of the current. For measurement times much longer than the inverse Josephson frequency, only the dc-current, which is dissipative, contributes to the net charge-transport. Microscopically, the charge is transported between the two superconductors via coherent multiple Andreev reflections (MAR) [11]. The current has been studied in various junctions, both theoretically [12] and experimentally [13]. Recently, also the noise was studied [14,15,16].
In this paper we present the FCS of charge transfer through a voltage biased junction, in terms of the amplitudes for quasiparticle scattering. Each quasiparticle scattering process results in an integer number of electron charges being transferred across the junction. As a consequence the FCS can be interpreted in classical terms. At temperatures much lower than the superconducting energy gap ∆, many-quasiparticle scattering processes are exponentially suppressed, resulting in a simple probability distribution, containing only the probabilities for single quasiparticle scattering. This distribution reproduces known results for dc-current and zero-frequency noise. We discuss in detail the third cumulant for single channel junctions and diffusive junctions shorter than the superconducting coherence length.
We consider a superconducting junction consisting of two superconducting reservoirs connected via a normal, mesoscopic conductor (see Fig. 1). For simplicity of notation, we consider a junction with a single transport mode; the multi-mode generalization is discussed below. A voltage V is applied between the two reservoirs.
The single-particle wavefunctions in the junction, solutions to the time-dependent Bogoliubov-de Gennes equation, are scattering states labelled by the incoming quasiparticle type, injection energy and reservoir. The scattering states are superpositions of amplitudes for quasiparticles at energies ±neV from the injection energy, counted from the local chemical potential in each contact. The amplitude for an incoming quasiparticle of type α at energy E_m to exit the junction as a quasiparticle of type β at energy E_n = E_m + (n − m)eV is denoted s^{βα}_{nm} (see Fig. 1). This amplitude is a function of the scattering matrix of the normal conductor and the Andreev reflection amplitudes [17].

Fig. 1 caption: Note that all quasiparticles injected from the left at energies E_0 + 2neV and from the right at E_0 + (2n + 1)eV scatter on the same "ladder" in energy space.
To access the FCS, we make the first important observation that quasiparticle scattering in a voltage biased superconducting junction is formally identical to scattering in a normal voltage biased junction with an applied harmonic ac-field. The FCS in such a system was investigated in detail by Ivanov and Levitov [4]. Following Ref. [4] we note that for measurement times much longer than the quasiparticle scattering time, the inelastic single-mode scattering problem can be mapped onto an elastic scattering problem with many modes. The scattering between all energies and quasiparticle types of a "ladder" (see Fig. 1) is correlated, while different ladders contribute incoherently. We denote the ladder by its energy E_0 in the interval −∆ − 2eV ≤ E_0 < −∆, counting its "leg" in the left lead (e.g. E_0 in Fig. 1). Thus we may concentrate on the scattering matrix S, with elements s^{αβ}_{nm}, of a single ladder, and then integrate over the ladder energy E_0. In general S has infinite dimensions, but in our case the vanishing probability of Andreev reflection far outside the gap naturally cuts the number of relevant modes to the order of ∆/eV. Quasiparticle current (but not charge current) is conserved in the scattering processes at the normal-superconductor interfaces. As a consequence S is unitary [19].
We are then, in line with [3,18], able to directly write down the characteristic function in terms of all the different many-particle scattering probabilities, where Λ is the set of counting fields λ_{nα}, one for each mode nα. The outer sum runs over all possible sets of incoming modes i = {m_1α_1, m_2α_2, . . .} and outgoing modes o = {n_1β_1, n_2β_2, . . .}. The many-particle scattering probabilities are given in terms of the Fermi distribution function f(E) = (1 + exp[E/kT])^(−1) and |s^o_i|, the determinant of the matrix formed by taking the columns i and rows o of S.
Eqs. (1) and (2) give us the FCS for quasiparticle transfer in a voltage biased superconducting junction. The object of main interest is, however, the FCS of the charge transfer. We then make the second important observation that for measuring times much longer than the inverse Josephson frequency, the net charge transfer is directly related to the quasiparticle transfer. A quasiparticle of type α incident at energy E_m, which is scattered into an outgoing quasiparticle of type β at energy E_n, transports exactly m − n electrical charges across the junction (for m < n, the transported charge is thus negative for the bias in Fig. 1). This is independent of the quasiparticle types α and β, and also of the charge of the quasiparticles. This can be shown by studying any quasiparticle scattering path (see Fig. 1), keeping in mind that the process of Andreev reflection transfers exactly two electrons across the normal-superconductor interface, while a normal transmission transfers exactly one electron.
This can also be seen from energy conservation: an electron traversing the normal part of the junction from left to right will absorb the energy quantum eV from the electric field, while an electron moving in the opposite direction will emit the quantum eV. The effective number of quanta eV absorbed in a quasiparticle scattering process thus equals the number of electrons transferred from left to right. We emphasize that this approach correctly counts all the electrons transferred, including the electrons entering the superconductor as Cooper pairs at energies within the gap.
We can thus count the transferred electrons by counting the transferred quasiparticles weighted by the number of electrons transferred in each quasiparticle scattering event. By choosing the counting fields λ_{nα} = nλ in Eq. (1) we can directly write down the characteristic function for charge transfer. Following Ref. [3], the characteristic function can be written in determinant form, where the elements of the matrix S_λ are given by (S_λ)^{αβ}_{mn} = s^{αβ}_{mn} e^{iλ(n−m)/2} and the diagonal matrix f̂ has elements f̂^{αα}_{nn} = f(E_n). This result provides a general solution to the temperature and voltage dependence of the full counting statistics of voltage biased superconducting junctions, for measurement times much longer than the inverse Josephson frequency and the quasiparticle scattering time, as defined by the energy dependence of the scattering matrix ∼ ħ∂_E ln|s^{αβ}_{mn}|. Eqs. (3) and (4) show that the charge is transferred in correlated quanta of multiple electron charges.
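The determinant structure can be illustrated numerically. The sketch below assumes the standard Levitov-Lesovik form χ(λ) = det[1 + f̂(S† e^{iλn̂} S e^{−iλn̂} − 1)], with n̂ the rung-index (charge-counting) operator, since the exact expression was lost in extraction; it uses a random unitary in place of the physical ladder scattering matrix, so only the structural properties (normalization at λ = 0, a real first cumulant) are meaningful:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                       # modes kept on one ladder (~ Delta/eV)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
S, _ = np.linalg.qr(A)                      # random unitary stands in for S
n = np.arange(N)                            # rung index of each mode
f = np.diag((n < N // 2).astype(float))     # zero-T Fermi filling: lower rungs occupied

def chi(lam):
    d = np.exp(1j * lam * n)
    m_mat = (S.conj().T * d[None, :]) @ S * np.conj(d)[None, :]   # S† e^{iλn} S e^{-iλn}
    return np.linalg.det(np.eye(N) + f @ (m_mat - np.eye(N)))

# first cumulant (mean transferred charge) from a numerical derivative of ln chi
h = 1e-5
mean_q = (-1j * (np.log(chi(h)) - np.log(chi(-h))) / (2 * h)).real
# analytic check: tr[f (S† n S - n)]
n_op = np.diag(n.astype(float))
analytic = np.trace(f @ (S.conj().T @ n_op @ S - n_op)).real
```

Unitarity of S guarantees χ(0) = 1, and the first derivative reproduces the expected mean rung change, mirroring the normalization and cumulant structure described in the text.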
At low temperatures kT ≪ ∆ all quasiparticle states below the gap are filled, while all states above the gap are empty. This simplifies Eq. (2) significantly since it fixes the set of incoming modes i to all modes below the gap, but still all different sets of outgoing modes should be considered.
The MAR ladder forms a single mode for transport in energy space [17]. The scattering amplitudes s^{βα}_{nm} can be decomposed into amplitudes for entering the normal region, t^{m±}_{mα}, propagation along the MAR ladder, t^{↑/↓}_{mn}, and leaving the normal region, t^{nα}_{n±} (and analogously for E_n < E_m). Using this decomposition we find that choosing two or more outgoing modes above the gap in the set o gives P_{i|o} = 0, since the matrix s^o_i then contains two or more parallel rows. In other words, the many-particle scattering process of two or more quasiparticles passing the gap through the single mode in energy space is prohibited by the Pauli exclusion principle. For the outgoing sets with all outgoing modes below the gap, |s^o_i| is evaluated by making s^o_i unitary through inserting a row and a column with the amplitudes t^{m+}_{mα} t^{↑}_{mg}, where E_g is the first energy above the gap in the ladder. Finally, the sets with one outgoing mode above the gap, using similar manipulations, give the single-particle scattering probabilities. The characteristic function can then be expressed in single-particle scattering probabilities alone; this simple expression is the main result of this paper. The cumulants of the charge transfer distribution function are obtained by taking derivatives of ln χ_{E_0}(λ) and summing over all ladders (integrating over the energy E_0), where τ is the measurement time.
The integrands for the various moments are conveniently expressed in the physically relevant n-electron scattering probabilities [17], with P_0(E_0), the probability of no quasiparticle passing the gap, giving the correct normalization Σ_{n=0}^{∞} P_n(E_0) = 1. In Fig. 2 the probabilities of the different n-electron scattering processes are shown as a function of voltage. The integrands for the first three cumulants in Eq. (7) are the spectral current density I(E) = Σ_{n=0}^{∞} n P_n(E), the noise spectral density S_I(E) = Σ_{n=0}^{∞} n² P_n(E) − I(E)², and the spectral density of the third cumulant C_3(E) = Σ_{n=0}^{∞} n³ P_n(E) − I(E)[3S_I(E) + I(E)²]. The expression for the current is identical to the one presented in Ref. [17], and the expression for the zero-frequency current noise reproduces the known results [14] (see the upper panel in Fig. 3).
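The three spectral densities above are simple functions of the moments of P_n. A small helper, with a Poisson sanity check matching the tunnel-limit statement in the text (a Poisson distribution has all cumulants equal), can verify the algebra:

```python
import numpy as np
from math import exp, factorial

def cumulants(P):
    """First three cumulants of the transferred-charge distribution from the
    n-electron scattering probabilities P_n, using the formulas quoted above."""
    n = np.arange(len(P))
    I = np.sum(n * P)                         # mean ~ spectral current density
    S = np.sum(n ** 2 * P) - I ** 2           # variance ~ noise spectral density
    C3 = np.sum(n ** 3 * P) - I * (3 * S + I ** 2)
    return I, S, C3

# sanity check: for Poisson statistics all cumulants equal the mean mu
mu = 1.3
P_poisson = np.array([exp(-mu) * mu ** k / factorial(k) for k in range(60)])
I, S, C3 = cumulants(P_poisson)
```

The truncation at 60 terms makes the tail contribution negligible, so I, S and C3 all come out equal to mu to machine precision.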
The noise and the third cumulant, calculated numerically, are plotted in Fig. 3 for different transparencies of the normal contact. We see that the subgap structure is generally more pronounced in the third cumulant, compared to the current and noise. Furthermore C 3 changes sign both as a function of voltage and transparency.
In the tunnel limit D ≪ 1, where all scattering probabilities in Eq. (6) are small, the corresponding probability distribution is Poissonian, i.e. it describes an uncorrelated transfer of quanta of multiple charge. Within the n-th subgap region, i.e. n − 1 < 2∆/eV < n, n-electron transfer dominates, giving S = 2enI and C_3 = (2en)²I, see the upper panel in Fig. 4. As seen in the right panel of Fig. 2, the picture is more complicated on the borders between the regions. Due to resonances, up to three adjacent n-particle processes have similar strength [17], and the FCS is no longer Poissonian. So far we have considered only single-mode junctions. In the limit of a short junction, when the scattering matrix of the normal conductor is independent of energy on the scale of ∆, it is possible, just as for the current and noise [15], to write the generating function in terms of the transmission eigenvalues D_n of the normal conductor only. For all mesoscopic conductors where the transmission eigenvalue distribution is known, the FCS can thus be obtained by averaging the single-mode result in Eq. (6). As an example, we present in the lower panel of Fig. 4 the first three moments for a short, diffusive
Potassium sodium (2R,3R)-tartrate tetrahydrate: the paraelectric phase of Rochelle salt at 105 K
Rochelle salt, K+·Na+·C4H4O6 2−·4H2O, is known for its remarkable ferroelectric state between 255 and 297 K. The current investigation, based on data collected at 105 K, provides very accurate structural information for the low-temperature paraelectric form. Unlike the ferroelectric form, there is only one tartrate molecule in the asymmetric unit, and the structure displays no disorder or large anisotropic atomic displacements.
Comment
The radiation-induced free radical chemistry of dicarboxylic acids and their salts has received attention for several decades.
Rochelle salt is of particular interest as it exhibits a ferroelectric phase between 255 and 297 K, where the structure is monoclinic, space group P2₁; outside this temperature range the compound is paraelectric and presents orthorhombic phases in space group P2₁2₁2. The nature of the radicals formed in Rochelle salt is currently being investigated in order to understand the mechanisms producing changes in the ferroelectric properties of this compound upon irradiation (Suzuki, 1974; van Treeck & Windsch, 1977). For the analysis of the electron magnetic resonance data, precise knowledge of the low-temperature orthorhombic form is necessary. Structural data for the high-temperature orthorhombic form were first provided by Beevers & Hughes (1941). Iwata et al. (1989) carried out a neutron diffraction study for both orthorhombic forms; more accurate X-ray diffraction studies were later presented by Solans et al. (1997), who concluded that differences between the two P2₁2₁2 states are "small but significant". None of these structures are, however, available in the Cambridge Structural Database (Version 5.29 of November 2007; Allen, 2002). A high-precision redetermination of Rochelle salt at low temperature has therefore been carried out.
Hydrogen bonds are listed in Table 1; the most unusual feature is the almost symmetric four-center interaction involving H31W.
When K + is replaced by NH 4 + [as, for instance, in II] the four shortest K2···O contacts are converted into hydrogen bonds, while only the two K1···O4 interactions are transformed into short hydrogen bonds, the K1···O1W and K1···O2W contacts being replaced by a three-center hydrogen bond.
Experimental
Rochelle salt was obtained from Sigma-Aldrich and tetrahydrate crystals were grown from saturated aqueous solutions. A large block-shaped specimen was ground into a sphere in a mill and used for data collection.
Refinement
Full isotropic refinement was carried out for all H atoms. Data were collected by measuring six sets of exposures with the detector set at 2θ = 29° and 65°, crystal-to-detector distance 5.00 cm. Refinement was carried out on F² against ALL reflections.

Fig. 1: The molecular structure of (I). Displacement ellipsoids are shown at the 50% probability level. Metal coordination is indicated by dashed lines.
Vehicle Driving State Estimation Using an Improved Adaptive Unscented Kalman Filter
This paper proposes an improved adaptive unscented Kalman filter (iAUKF)-based vehicle driving state estimation method. A three-degree-of-freedom vehicle dynamics model is first established; then the varying principles of estimation errors for vehicle driving states using constant process and measurement noises in the standard unscented Kalman filter (UKF) are compared and analyzed. Next, a new normalized-innovation-square-based adaptive noise covariance adjustment strategy is designed and incorporated into the UKF to derive the expected vehicle driving state estimation method. Finally, a comparative simulation investigation using CarSim and MATLAB/Simulink is conducted to validate the effectiveness of the proposed method, and the results show that the proposed iAUKF-based estimation method has higher accuracy and stronger robustness than the standard UKF algorithm.
Introduction
Currently, automotive active safety control systems (ASCSs), including rollover prevention systems (RPS) [1], adaptive cruise control (ACC) [2], and lane-departure avoidance systems (LDA) [3], have been extensively investigated and developed in response to the continuously growing demand for vehicle driving safety and dynamics performance [4]. It is well known that accurate vehicle states such as the sideslip angle, yaw rate, and longitudinal speed can significantly affect an ASCS's performance. How to accurately obtain these vehicle states has thus become a core technical premise in the development of automobile ASCSs [5].
However, considering the limits and costs of sensors, only a few vehicle driving states can be directly measured. Consequently, it is necessary to propose an appropriate estimation method to achieve accurate and effective estimation of vehicle driving states using fewer on-board sensors [6].
At present, there exist two types of vehicle state estimation methods: kinematics and dynamics methods [7]. For the former, the vehicle states are usually predicted by integrating the related vehicle states measured by sensors [8].
Common kinematics methods include the closed-loop nonlinear observer [9] and neural network methods [10]. However, these methods depend heavily on sensor accuracy, and the cumulative errors caused by continuous integration also limit their applications [11][12]. For the latter, a large number of driving state estimation methods such as Kalman filter (KF) family algorithms, particle filters, and robust observers have been proposed. Among these, the KF family approaches are widely employed to estimate vehicle states owing to their convenience in providing optimal solutions and suppressing the effects of measurement and sensor noise.
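The cumulative-error problem of pure integration can be seen with a toy accelerometer trace: a small constant bias integrates into a velocity error that grows linearly in time. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 60.0
steps = int(T / dt)
true_acc = np.zeros(steps)              # vehicle actually cruising at constant speed
bias, sigma = 0.05, 0.2                 # sensor bias [m/s^2] and noise std [m/s^2]
measured = true_acc + bias + sigma * rng.normal(size=steps)

v_err = np.cumsum(measured) * dt        # velocity estimation error over time
early_err, final_err = abs(v_err[99]), abs(v_err[-1])
print(f"error after 1 s: {early_err:.3f} m/s, after 60 s: {final_err:.3f} m/s")
```

The bias term alone contributes bias * T = 3 m/s of drift after a minute, which is why kinematics-only estimators need frequent corrections from other measurements.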
For instance, the extended Kalman filter (EKF) has been used to estimate the vehicle slip angle and lateral tire force, with a variable model covariance method introduced to design a stable estimator and improve the local observability of the observer [13]. In the work of [14], an effective state estimation method was proposed for a four-wheel-drive vehicle based on the minimum model error criterion and the EKF algorithm. Besides, an augmented EKF algorithm was utilized to simultaneously estimate the common vehicle states and the lateral stiffness of the tires, so that the impact of the time-varying parameters of the vehicle system on the estimation accuracy could be eliminated [15].
Although the EKF algorithm maintains an elegant and efficient recursive update form by computing Jacobian matrices, the local linearization may unavoidably lead to large cumulative estimation errors and divergence [16][17][18].
Fortunately, the unscented Kalman filter (UKF) can avoid this problem by approximating the nonlinear probability distribution using a sampling method instead of evaluating the Jacobian [19]. Recently, the UKF has been broadly used to estimate vehicle driving states. In [20], both the EKF and UKF algorithms were applied to estimate the sideslip angle and the tire lateral force using real car test data, and it was confirmed that the performance of the UKF algorithm was far superior to that of the EKF algorithm. Moreover, a new type of online estimation method based on a joint UKF was presented in [21] to estimate the vehicle sideslip angle, body mass, and moment of inertia, and the effectiveness of this estimation method was verified using results from real car measurements.
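The sampling idea behind the UKF can be illustrated in one dimension: for y = x² the unscented transform recovers the exact mean E[y] = mu² + var, while first-order linearization (as in the EKF) misses the variance term entirely. A minimal sketch, using the common kappa = 2 scaling (an assumption, not any particular paper's tuning):

```python
import numpy as np

def unscented_mean(f, mu, var, kappa=2.0):
    """Mean of f(x) for scalar x ~ N(mu, var) via the unscented transform."""
    n_dim = 1
    spread = np.sqrt((n_dim + kappa) * var)
    points = np.array([mu, mu + spread, mu - spread])       # sigma points
    weights = np.array([kappa, 0.5, 0.5]) / (n_dim + kappa) # standard weights
    return float(np.sum(weights * f(points)))

f = lambda x: x ** 2
ut_mean = unscented_mean(f, 1.0, 0.25)   # exact E[x^2] = 1.0 + 0.25 = 1.25
lin_mean = f(1.0)                        # linearization about the mean gives 1.0
```

For this quadratic the unscented transform is exact, while the linearized value underestimates the mean by exactly the variance, which is the mechanism behind the UKF's accuracy advantage noted above.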
Similarly, an adaptive state estimator was proposed for all-wheel-drive electric vehicles based on the UKF algorithm [22]; this estimator could provide accurate estimates of the longitudinal and lateral speed, tire slip angle, and tire friction coefficients. Different from the above literature, a multi-sensor optimal data fusion method was proposed in [23] to estimate vehicle states, in which an integrated Inertial Navigation System (INS)/Global Navigation Satellite System (GNSS)/Celestial Navigation System (CNS) was combined with the common UKF algorithm. The global optimal state estimation of vehicle states was then achieved according to the linear minimum variance principle.
Compared with the EKF, the UKF can greatly improve the estimation accuracy of the state variables of a nonlinear system. However, since the noise covariances of the EKF and UKF are often set to constant values that may not accord with the practical situation, the estimation accuracy of vehicle states cannot be guaranteed. Additionally, it is difficult to obtain accurate noise covariances under changeable working conditions, which reduces the robustness of the common UKF algorithm. Therefore, deriving an improved UKF algorithm for vehicle driving state estimation is an interesting and challenging issue.
For example, in the work of [24], an adaptive EKF approach was proposed to achieve state-of-charge estimation for lithium-ion battery packs used in electric vehicles, in which the normalized innovation square (NIS) was used to validate the effectiveness of the designed estimation method. To compensate for the uncertainties caused by model errors, an adaptive sideslip angle observer was developed for vehicle body estimation [25] by combining adaptive technology with the UKF and a sensor-based integration approach. The simulation and real-car experimental results demonstrated that this method exhibited better estimation performance than the common UKF. Furthermore, in [26], a composite estimation procedure for the sideslip angle was presented to reduce the interference of noise using an adaptive neuro-fuzzy inference system and the UKF. A novel adaptive UKF-based state estimation method was also presented in [27], wherein the measured signals were categorized into different road levels such that the noise covariance for different roads could be adaptively adjusted. Moreover, in the work of [28], an adaptive square-root UKF approach was proposed for the state estimation/detection of nonlinear systems in which the process and measurement noises were unknown. Summarizing the above related research work, this paper proposes an improved adaptive unscented Kalman filter (iAUKF)-based estimation method for vehicle driving states with adjustable noise covariance. The main contributions of this work are summarized as follows: (1) By comparing and analyzing the influences of process and measurement noises on the estimation accuracy of vehicle states using the UKF algorithm, the varying principles of estimation errors for vehicle driving states are obtained.
(2) A NIS-based adaptive noise covariance adjustment strategy is designed and combined with the UKF algorithm to adaptively adjust process and measurement noise covariances, thus the proposed estimation method can improve the accuracy and adaptability of vehicle driving state estimation.
The rest of this paper is organized as follows. In Section 2, a three-degree-of-freedom (3-DOF) vehicle dynamics model is constructed, and the problem statement of vehicle driving state estimation is described. In Section 3, an adaptive adjustment strategy for the noise covariance is designed and, based on this, a novel iAUKF-based vehicle driving state estimation method is proposed. In Section 4, simulation investigations based on CarSim and MATLAB/Simulink are presented to illustrate the effectiveness of the proposed method under different working conditions. Finally, concluding remarks are summarized in Section 5.
Vehicle dynamics modeling and problem formulation

Vehicle dynamics modeling
In this section, a 3-DOF "bicycle" or "single-track" model is used to describe the motion of the vehicle in the yaw, lateral, and longitudinal directions, as shown in Fig. 1. This 3-DOF dynamics model reflects the vehicle's dynamic behavior in real driving conditions and has been extensively utilized in the previous literature [29][30][31]. The tire is assumed here to exhibit linear elastic behavior and to comply with the small-angle approximation.
The dynamics equations of this vehicle model, under the linear-tire and small-angle assumptions, take the standard single-track form

$$\dot{\beta}=\frac{F_{yf}+F_{yr}}{m v_x}-r,\qquad \dot{r}=\frac{a F_{yf}-b F_{yr}}{I_z},\qquad a_y=v_x\left(\dot{\beta}+r\right) \qquad (1)$$

with the linear tire forces

$$F_{yf}=K_f\left(\delta_f-\beta-\frac{a r}{v_x}\right),\qquad F_{yr}=K_r\left(-\beta+\frac{b r}{v_x}\right)$$

where a and b are the distances from the vehicle's center of gravity (CG) to the front axle and the rear axle, respectively; Kf and Kr are the equivalent cornering stiffness coefficients of the front and rear axles, respectively; m is the vehicle body mass; Iz is the moment of inertia for the yaw motion; vx is the longitudinal velocity; r is the yaw rate; β is the sideslip angle at the CG; δf is the steering angle of the front wheels; and ay is the lateral acceleration.
Problem formulation of vehicle state estimation
Generally, Kf and Kr are constant in the vehicle's lateral dynamics modeling.
However, under practical operating conditions, Kf and Kr vary continuously with changes in the vehicle's internal parameters. Therefore, Kf and Kr are herein treated as variable parameters and recursively adjusted by the estimator.
Considering the relationship between β, r, vx, ay, as well as Kf and Kr, the nonlinear vehicle dynamics model can be constructed from the state transition equation and the measurement equation [32], which are given as follows:

$$x_k=f\left(x_{k-1},u_{k-1}\right)+w_{k-1},\qquad z_k=h\left(x_k,u_k\right)+v_k \qquad (5)$$

where w and v denote the process and measurement noises. Employing the first-order Euler approximation with sampling time Δt to discretize Eq. (5) [15], f(·) and h(·) can be represented as

$$x_k=x_{k-1}+\Delta t\,f\left(x_{k-1},u_{k-1}\right)+w_{k-1} \qquad (7)$$
$$z_k=h\left(x_k,u_k\right)+v_k \qquad (8)$$

The state-space form of Eq. (7) and Eq. (8) can be written as

$$x_k=A_k x_{k-1}+B_k u_{k-1}+w_{k-1},\qquad z_k=C_k x_k+D_k u_k+v_k \qquad (9)$$

where $A_k$, $B_k$, $C_k$, and $D_k$ are provided in the Appendix.
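To make the discretization concrete, the following sketch implements a first-order Euler step of a linear single-track model. The sign convention (Kf, Kr taken positive) and all numerical parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of the discretized 3-DOF single-track model.
# All parameter values below are illustrative, not the paper's.

def f_continuous(x, u, p):
    """Continuous-time dynamics x_dot = f(x, u); x = [beta, r, vx], u = [delta_f, ax]."""
    beta, r, vx = x
    delta_f, ax = u
    # Front/rear axle lateral forces under the linear-tire assumption.
    Fyf = p["Kf"] * (delta_f - beta - p["a"] * r / vx)
    Fyr = p["Kr"] * (-beta + p["b"] * r / vx)
    beta_dot = (Fyf + Fyr) / (p["m"] * vx) - r
    r_dot = (p["a"] * Fyf - p["b"] * Fyr) / p["Iz"]
    vx_dot = ax + vx * r * beta          # longitudinal DOF with small-angle coupling
    return np.array([beta_dot, r_dot, vx_dot])

def f_discrete(x, u, p, dt):
    """First-order (forward) Euler step: x_k = x_{k-1} + dt * f(x_{k-1}, u_{k-1})."""
    return x + dt * f_continuous(x, u, p)

def h_measure(x, u, p):
    """Measurement z = [r, ay], with ay = (Fyf + Fyr) / m."""
    beta, r, vx = x
    Fyf = p["Kf"] * (u[0] - beta - p["a"] * r / vx)
    Fyr = p["Kr"] * (-beta + p["b"] * r / vx)
    return np.array([r, (Fyf + Fyr) / p["m"]])

params = dict(Kf=8.0e4, Kr=8.0e4, m=1500.0, Iz=2500.0, a=1.2, b=1.4)
x0 = np.array([0.0, 0.0, 20.0])          # straight driving at 20 m/s
u0 = np.array([0.0, 0.0])                # no steering, no longitudinal input
x1 = f_discrete(x0, u0, params, dt=0.01)
```

With zero steering and zero initial sideslip and yaw rate, straight driving is an equilibrium of this model, so the Euler step leaves the state unchanged.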
Standard unscented Kalman filter algorithm
The unscented transformation (UT) in the standard UKF is used to approximate the nonlinear probability distributions of the covariance and the mean values of the vehicle states. For the 3-DOF vehicle model, the estimation procedure of the vehicle driving states using the standard UKF is summarized as follows:

(1) Initialization

$$\hat{x}_0=E\left[x_0\right],\qquad P_0=E\left[\left(x_0-\hat{x}_0\right)\left(x_0-\hat{x}_0\right)^T\right]$$

where $x_0$ is the initial value of x.

(2) Time updating

Step-a: Creating the Sigma points. Based on the symmetric sampling strategy, the Sigma points $\xi_k$ are created as

$$\xi_k^{(0)}=\hat{x}_k,\qquad \xi_k^{(i)}=\hat{x}_k+\left(\sqrt{(n+\lambda)P_k}\right)_i,\qquad \xi_k^{(i+n)}=\hat{x}_k-\left(\sqrt{(n+\lambda)P_k}\right)_i,\qquad i=1,\dots,n$$

with $\lambda=\alpha^2(n+\kappa)-n$, where n is the length of x (set as n = 8 in this work); α is used to adjust the distance between the Sigma points and $\hat{x}$; and β incorporates prior knowledge of the distribution, its optimal value being 2 under the Gaussian distribution. The corresponding weights are

$$W_m^{(0)}=\frac{\lambda}{n+\lambda},\qquad W_c^{(0)}=\frac{\lambda}{n+\lambda}+1-\alpha^2+\beta,\qquad W_m^{(i)}=W_c^{(i)}=\frac{1}{2(n+\lambda)},\quad i=1,\dots,2n$$

Step-b: Computing the predicted values of $\xi_k$. The nonlinear transformation of $\xi_k$ is performed using Eq. (9):

$$\xi_{k|k-1}^{(i)}=f\left(\xi_{k-1}^{(i)},u_{k-1}\right)$$

Step-c: Computing the priori estimates $\hat{x}_k^-$ and $P_{x,k}^-$:

$$\hat{x}_k^-=\sum_{i=0}^{2n}W_m^{(i)}\xi_{k|k-1}^{(i)},\qquad P_{x,k}^-=\sum_{i=0}^{2n}W_c^{(i)}\left(\xi_{k|k-1}^{(i)}-\hat{x}_k^-\right)\left(\xi_{k|k-1}^{(i)}-\hat{x}_k^-\right)^T+Q_k$$

where $Q_k$ is the process noise covariance.

Step-d: Computing the priori estimate of $z_k$. Return to Step-a and recreate new Sigma points from $\hat{x}_k^-$ and $P_{x,k}^-$; afterwards, substitute these points into Eq. (9) to calculate the prior measurements $\hat{z}_k^-$. The calculation formulas are summarized as follows:

$$\gamma_k^{(i)}=h\left(\xi_k^{(i)},u_k\right),\qquad \hat{z}_k^-=\sum_{i=0}^{2n}W_m^{(i)}\gamma_k^{(i)}$$
$$P_{z,k}=\sum_{i=0}^{2n}W_c^{(i)}\left(\gamma_k^{(i)}-\hat{z}_k^-\right)\left(\gamma_k^{(i)}-\hat{z}_k^-\right)^T+R_k,\qquad P_{xz,k}=\sum_{i=0}^{2n}W_c^{(i)}\left(\xi_k^{(i)}-\hat{x}_k^-\right)\left(\gamma_k^{(i)}-\hat{z}_k^-\right)^T$$

where $R_k$ is the measurement noise covariance.

(3) Measurement updating

$$K_k=P_{xz,k}P_{z,k}^{-1},\qquad \hat{x}_k=\hat{x}_k^-+K_k\left(z_k-\hat{z}_k^-\right),\qquad P_{x,k}=P_{x,k}^--K_kP_{z,k}K_k^T$$
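The recursion above can be sketched compactly in code. This is a generic UKF sketch, not the paper's 8-state estimator; the demo system and the unscented-transform parameters (α, κ, β) are illustrative:

```python
import numpy as np

# Generic UKF sketch with the usual unscented-transform defaults.
def sigma_points(x, P, alpha=1e-3, kappa=0.0, beta=2.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])       # 2n+1 symmetric sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    # Time update: propagate sigma points through f and form the prior.
    pts, Wm, Wc = sigma_points(x, P)
    pred = np.array([f(p) for p in pts])
    x_prior = Wm @ pred
    P_prior = Q + sum(Wc[i] * np.outer(pred[i] - x_prior, pred[i] - x_prior)
                      for i in range(len(pred)))
    # Measurement update: re-draw sigma points around the prior (Step-d).
    pts, Wm, Wc = sigma_points(x_prior, P_prior)
    zpts = np.array([h(p) for p in pts])
    z_prior = Wm @ zpts
    Pzz = R + sum(Wc[i] * np.outer(zpts[i] - z_prior, zpts[i] - z_prior)
                  for i in range(len(zpts)))
    Pxz = sum(Wc[i] * np.outer(pts[i] - x_prior, zpts[i] - z_prior)
              for i in range(len(zpts)))
    K = Pxz @ np.linalg.inv(Pzz)                 # Kalman gain
    x_post = x_prior + K @ (z - z_prior)
    P_post = P_prior - K @ Pzz @ K.T
    return x_post, P_post

# Demo: track a constant scalar state from noisy direct observations.
rng = np.random.default_rng(0)
fx = lambda s: s
hx = lambda s: s
x, P = np.array([0.0]), np.eye(1)
Q, R = 1e-6 * np.eye(1), 0.1 * np.eye(1)
for _ in range(200):
    z = np.array([1.5]) + rng.normal(0.0, 0.3, 1)
    x, P = ukf_step(x, P, z, fx, hx, Q, R)
```

With a near-zero Q, the filter effectively averages the measurements, so the estimate converges to the true value of 1.5.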
Robustness analysis of standard UKF algorithm
In the standard UKF algorithm, Qk and Rk are usually set as constant values to quantify the process and measurement noises. However, the actual process and measurement noises are often varied with the different driving conditions.
Therefore, it is difficult to obtain accurate Qk and Rk; moreover, using inaccurate Qk and Rk will result in greater estimation error and may reduce the robustness of the standard UKF algorithm.
To assess the influence of process and measurement noise on the estimation accuracy of the standard UKF, two cases with different noise settings are considered. The estimation errors of β, r, vx, and ay using the standard UKF in Case 1 and Case 2 are shown in Fig. 2(a) and Fig. 2(b), respectively.
To further quantify the estimation errors of β, r, vx, and ay using the standard UKF, the root mean squared error (RMSE) [33] is used to evaluate the related estimation errors, which is defined by

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{X}_i-X_i\right)^2}$$

where n is the size of X; $X_i$ is the true value; and $\hat{X}_i$ is the estimated value, with i = 1, 2, …, n.
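The RMSE defined above is straightforward to compute; a minimal helper:

```python
import math

def rmse(true_vals, est_vals):
    """Root mean squared error between a true and an estimated sequence."""
    n = len(true_vals)
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(true_vals, est_vals)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.1547
```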
The RMSEs of β, r, vx, and ay for the standard UKF under different process and measurement noises are shown in Table 1. It can be concluded from Fig. 2 and Table 1 that the estimation errors grow markedly when the assumed noise covariances deviate from the actual ones, which confirms that the robustness of the standard UKF cannot be guaranteed under varying noise conditions.
The iAUKF-based vehicle driving state estimation
As described in Section 3.2, the variations of Qk and Rk have a significant impact on the accuracy of the estimated vehicle states with the standard UKF.
Therefore, by referring to a related study [24], an adaptive noise covariance adjustment strategy incorporating the UKF algorithm is herein proposed to conduct vehicle driving state estimation. This approach can adaptively adjust Qk and Rk in terms of the errors between the prior and actual measurements, which can reduce the estimation error and the divergence possibility.
First, in order to approximate the process noise, the discrete Qk is derived from the continuous process noise matrix Q [34], rather than simply being set as an n-dimensional diagonal matrix. Since Q is difficult to measure, it is set to be greater than the noise error of a common steering angle sensor and accelerometer, as well as the standard deviation of the front-wheel and rear-wheel lateral stiffness coefficients. Thus, we set Q = diag(0.6, 0.1, 500, 500) [15]. In addition, the initial value of Rk is set as Rk,0 = diag(0.01, 0.5).
Second, the innovation $v_k$ is defined as the error between the prior and actual measurements, i.e.,

$$v_k=z_k-\hat{z}_k^-$$

The theoretical innovation covariance is obtained by the UKF algorithm and is equal to $P_{z,k}$; hence the theoretical innovation covariance $C_k$ is denoted by

$$C_k=P_{z,k}$$

Due to the influence of modeling and measurement errors, there usually exists a certain deviation between the actual and theoretical innovation covariances. The actual innovation covariance is obtained from the definition of the error covariance:

$$\hat{C}_k=\frac{1}{M}\sum_{i=k-M+1}^{k}v_i v_i^T$$

where M is the length of the sliding window.
Here, d is the adjustment factor of M, bounded between its minimum and maximum values. Essentially, d should correctly reflect the changes of the vehicle states. When d is greater than a certain threshold, the vehicle states are considered to change rapidly; otherwise, the vehicle states change slowly, and M can then be adjusted adaptively according to the value of d. Therefore, to evaluate the variation rate of the vehicle states, d is defined as the NIS that is commonly used in Inertial Navigation System-Global Positioning System integration [24], which is expressed as

$$d_k=v_k^T C_k^{-1} v_k \qquad (29)$$

When the vehicle performs a severe maneuver, the results estimated by the UKF algorithm will inevitably exhibit large errors as the driving states change rapidly, which eventually leads to an increase of the innovation and its NIS. Therefore, Eq. (29) is used as the judgment criterion of the UKF algorithm, and the adaptive noise covariance adjustment strategy is used to adjust Qk and Rk.
Here, a noise adjustment factor is introduced to adjust Qk and Rk by comparing $C_k$ and $\hat{C}_k$. Following [36], this factor is defined in terms of the ratio between the actual and theoretical innovation covariances, together with a multiplier coefficient of Qk.
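Since the paper's exact adjustment law (Eq. (30)) is not reproduced here, the following is one plausible sketch of a NIS-gated adjustment: the innovation is collected in a sliding window, the NIS of Eq. (29) is computed, and Q and R are inflated when the NIS exceeds a chi-square gate. The gate values and the gain are assumptions for illustration:

```python
import numpy as np
from collections import deque

class NoiseAdapter:
    """Sketch of a NIS-gated noise-covariance adjustment (hypothetical form;
    not the paper's Eq. (30)). Gates and gains are assumptions."""

    def __init__(self, window=20, dim_z=2):
        self.innovations = deque(maxlen=window)     # sliding window of v_k
        # 95% chi-square gate for a dim_z-dimensional innovation (assumption).
        self.gate = {1: 3.84, 2: 5.99, 3: 7.81}[dim_z]

    @staticmethod
    def nis(v, C):
        """Normalized innovation square d = v^T C^{-1} v (Eq. (29))."""
        return float(v @ np.linalg.solve(C, v))

    def actual_cov(self):
        """Sample innovation covariance over the sliding window."""
        V = np.array(self.innovations)
        return V.T @ V / len(V)

    def adjust(self, v, C, Q, R, gain=1.1):
        """Inflate Qk and Rk when the NIS exceeds its gate (severe maneuver),
        otherwise leave them unchanged."""
        self.innovations.append(v)
        if self.nis(v, C) > self.gate:
            return Q * gain, R * gain
        return Q, R

adapter = NoiseAdapter()
Q, R = np.eye(3), np.eye(2)
Q1, R1 = adapter.adjust(np.array([3.0, 0.0]), np.eye(2), Q, R)  # NIS = 9 > 5.99
Q2, R2 = adapter.adjust(np.array([0.5, 0.0]), np.eye(2), Q, R)  # NIS = 0.25
```

The design choice mirrors the text: a large NIS signals a rapid state change, so the filter is told to trust both the model and the sensors less until the innovations settle.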
In summary, the proposed iAUKF-based vehicle driving state estimation can be established if the above adaptive noise covariance adjustment strategy is combined with UKF algorithm, and the flowchart of the iAUKF-based vehicle driving state estimator is shown in Fig. 3.
Specifically, the inputs and measurements, such as the steering-wheel angle, are obtained from CarSim or the on-board sensors, and the prior estimation of x is then carried out, completing the time update. Subsequently, the adaptive noise covariance adjustment strategy realizes the adaptive adjustment of Qk and Rk in the standard UKF, which improves the accuracy and adaptability of the vehicle driving state estimation. Finally, the posterior states and the corresponding covariance are updated.
Simulation investigation
The simulation investigation is carried out on a co-simulation platform built with CarSim and MATLAB/Simulink, whose structure is shown in Fig. 5. Note that the initial value of matrix P is set as P0 = I8×8, and the related parameters used in the simulation are listed in Table 2.
Sinusoidal maneuver test
In the first test, the proposed iAUKF is evaluated under a sinusoidal maneuver, and the estimated β, r, vx, and ay obtained by the UKF, the SHAUKF, and our iAUKF are compared against the CarSim reference. The curves of δf and ax simulated by CarSim in the sinusoidal maneuver are displayed in Fig. 6, and Fig. 7 shows the curves of β, r, vx, and ay using the UKF, SHAUKF, and the designed iAUKF algorithm in the same maneuver. All three UKF algorithms exhibit certain local errors, yet the estimation results of the SHAUKF and the proposed iAUKF are better than those of the traditional UKF, and the results of the iAUKF are closer to the reference values than those of the SHAUKF. Although the error of vx generated by the iAUKF is larger than that of the UKF and SHAUKF during 10-50 s, the overall error of vx generated by the iAUKF exhibits a decreasing tendency, so that its estimated curve fits the CarSim simulation results well.
Additionally, from the enlarged subplots A, B, and C in Fig. 7, it is clear that the corresponding vehicle states estimated by the proposed iAUKF algorithm are closer to the reference values obtained from CarSim than those obtained by the UKF and SHAUKF algorithms. Although the RMSE of vx using the iAUKF is larger than that of the SHAUKF, the RMSE reductions achieved by the proposed iAUKF are more pronounced than those of the SHAUKF for the other states.
Fishhook maneuver test
Two Fishhook maneuvers are considered, and they are separately discussed in this section.
Fishhook maneuver I
The steering-wheel angle δs for fishhook maneuver I follows a prescribed fishhook profile. The simulated curves of δf and ax using CarSim are provided in Fig. 9, and Fig. 10 shows the curves of β, r, vx, and ay estimated by the UKF, SHAUKF, and iAUKF algorithms. Compared with the UKF and SHAUKF, it is obvious from subplots A, B, and C that the responses of β, r, vx, and ay generated by the iAUKF are closer to the corresponding ones simulated by CarSim, which demonstrates that the proposed iAUKF maintains a relatively higher accuracy during fishhook maneuver I.
Additionally, Fig. 11 shows the absolute error comparisons of β, r, vx, and ay in fishhook maneuver I. It is easily seen that the estimation errors of β, r, vx, and ay using the iAUKF are flatter and smaller than those using the other two UKF algorithms. In particular, the peak values of the estimation errors for these four vehicle states using the iAUKF algorithm are much lower than those using the UKF and SHAUKF algorithms. Table 4 summarizes the RMSE values of β, r, vx, and ay using the UKF, SHAUKF, and iAUKF algorithms in fishhook maneuver I. Relative to the UKF, the RMSE reduction rates of β, r, vx, and ay using the iAUKF are 33.1%, 85.1%, 88.7%, and 33.4%, respectively, whereas the reduction rates using the SHAUKF are only 18.4%, 50.7%, 56.0%, and 18.3%. The improvements achieved by the proposed iAUKF thus exceed those of the SHAUKF for every state, which demonstrates its superior estimation performance.
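The RMSE reduction rates quoted above follow the usual definition; a small helper with illustrative values (not the entries of Table 4):

```python
def rmse_reduction_rate(rmse_baseline, rmse_improved):
    """Percentage RMSE reduction of an improved estimator over a baseline:
    100 * (RMSE_base - RMSE_new) / RMSE_base."""
    return 100.0 * (rmse_baseline - rmse_improved) / rmse_baseline

# Illustrative values, not the paper's data:
rate = rmse_reduction_rate(0.10, 0.067)   # 33.0 % reduction
```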
Fishhook maneuver II
The steering-wheel angle δs for fishhook maneuver II follows a second prescribed fishhook profile. The output curves of δf and ax from CarSim are provided in Fig. 12. Fig. 13 and Fig. 14 show the estimated curves of β, r, vx, and ay and the corresponding estimation errors, respectively, using the UKF, SHAUKF, and iAUKF algorithms. As shown in Fig. 13, the proposed iAUKF generates a more accurate estimation of the vehicle driving states than the UKF and SHAUKF algorithms when facing a large change of δs.
Moreover, from subplots A, B, and C, it is seen that the responses of β, r, vx, and ay generated by the iAUKF are closer to the reference values of CarSim than those of the other two UKF algorithms.
Additionally, as shown in Fig. 14, the absolute errors of β, r, vx, and ay generated by the iAUKF are far lower than those generated by the UKF and SHAUKF algorithms, and their maximum errors are also smaller. Moreover, the estimation errors of the vehicle driving states obtained by the iAUKF exhibit a gradually decreasing tendency, i.e., the error curves of the four vehicle states become progressively flatter, which further illustrates the superior robustness of the proposed iAUKF.
Robustness analysis of the iAUKF algorithm
To further demonstrate the robustness of the designed iAUKF algorithm when encountering the variations of process and measurement noises, the same simulations are performed using the UKF and the proposed iAUKF for three different process and measurement noises. The absolute estimation errors of each driving state by the UKF and the iAUKF are provided in Fig. 15.
Under the three different process and measurement noises, the estimation errors of β, r, vx, and ay by the iAUKF are much lower than those by the UKF algorithm. Moreover, with the gradual increase of Qz and Rz, the increase magnitudes of the estimation errors of vehicle states by the proposed iAUKF are far less than those by the UKF algorithm. Besides, the estimated error of vx exhibits a downward trend with the increase of Qz and Rz, which further illustrates the higher robustness of the proposed iAUKF compared to the UKF.
To highlight the robustness of the proposed iAUKF algorithm more visually, the RMSEs of each vehicle driving state estimated by the UKF and iAUKF under different process and measurement noises are compared in Table 6, and its graphical presentation is provided in Fig. 16.
It is clear from Table 6 that the RMSEs of vehicle driving states obtained by the iAUKF are all less than those obtained by the UKF under different process and measurement noises, and the proposed iAUKF could retain a higher global accuracy, even when the process and measurement noises changed significantly.
Furthermore, as shown in Fig. 16, except for the RMSE of r at Qz = 0, the RMSEs of every driving state obtained by the iAUKF are smaller than those obtained by the UKF. In addition, considering the variation tendency of the RMSEs for the four vehicle states, as the process and measurement noises increase, the growth in the RMSEs estimated by the proposed iAUKF is very small, and in some cases they even decrease, whereas the RMSEs of the UKF algorithm all increase. This further shows that the proposed iAUKF lowers the negative effects of process and measurement noise changes on the estimation of the vehicle driving states. In future work, the designed iAUKF approach will be employed to estimate the driving states of distributed-motor-driven electric vehicles, and the estimated states will be taken as the inputs of direct yaw-moment control.
Conclusions and future work
Meanwhile, the side slip angle will be taken as the control goal to fulfill the stability control of the electric vehicle under turning maneuvers.
Appendix
The state matrices of k A , k B , k C , and k D in Eq. (9) are provided here:
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1 A 3-DOF vehicle dynamics model
Fig. 6 Simulated curves of δf and ax using CarSim in sinusoidal maneuver
Fig. 7 The comparison curves of β, r, vx, and ay in sinusoidal maneuver
Fig. 8 The absolute errors of β, r, vx, and ay in sinusoidal maneuver
Fig. 9 Simulated curves of δf and ax using CarSim in fishhook maneuver I
Fig. 10 The comparison curves of β, r, vx, and ay in fishhook maneuver I
Fig. 11 The absolute errors of β, r, vx, and ay in fishhook maneuver I
Fig. 12 Simulated curves of δf and ax using CarSim in fishhook maneuver II
Fig. 13 The comparison curves of β, r, vx, and ay in fishhook maneuver II
The role of the coating and aggregation state in the interactions between iron oxide nanoparticles and 3T3 fibroblasts
Recent nanotoxicity studies revealed that the physico-chemical characteristics of engineered nanomaterials play an important role in the interactions with living cells. Here, we report on the toxicity and uptake of the iron oxide sub-10 nm nanoparticles by NIH/3T3 mouse fibroblasts. Coating strategies include low-molecular weight ligands (citric acid) and polymers (poly(acrylic acid), MW = 2000 g mol-1). We find that most particles were biocompatible, as exposed cells remained 100% viable relative to controls. The strong uptake shown by the citrate-coated particles is related to the destabilization of the dispersions in the cell culture medium and their sedimentation down to the cell membranes.
Introduction
Engineered nanoparticles are ultrafine colloids of nanometer dimensions with highly ordered crystallographic structures. These particles usually exhibit remarkable electronic, magnetic or optical properties that can be exploited in a variety of applications. In contrast to conventional chemicals, however, the possible risks of using nanomaterials for human health and the environment have not yet been fully evaluated [1,2]. To estimate these risks, large research efforts were directed towards the development of toxicology assays. The objective of these assays is the quantitative determination of the viability of living cells incubated with nanomaterials [1,2]. In recent years, the viability of many different cell lines was investigated with respect to a wide variety of engineered nanoparticles, and recent reviews attempted to recapitulate the main features of the toxicity induced by nanomaterials. One of these features concerns the coating of the particles. In most in vitro studies, the chemistry of the interface between the nanocrystals and the solvent was anticipated to be a key feature of cell-nanoparticle interactions. In the present paper, we investigated the in vitro toxicity and internalization of sub-10 nm iron oxide (maghemite, γ-Fe2O3) nanoparticles using mouse NIH/3T3 fibroblasts. We have also developed an easy and widely applicable method to adsorb ion-containing polymers onto the nanoparticle surfaces [3,4]. We have found that these low-molecular-weight polymers augmented the hydrodynamic diameters of the particles by only 4 nm, while preserving the long-term colloidal stability in most water-based solvents, including buffers and cell culture media [5]. This noticeable increase in stability as compared to classical ligand-coated particles prompted us to perform toxicity assays and to explore the effect of the dispersion state on intracellular uptake.
Experimental/Methodology
The synthesis of the iron oxide nanoparticles used the "soft chemistry" technique based on the polycondensation of metallic salts in alkaline aqueous media. The synthesis has been described previously, and we refer to this work for more details [6]. In the present study, two maghemite batches were synthesized with median diameters in the range of 7-8 nm and a narrow polydispersity. Two different coating agents were utilized: citric acid molecules and poly(acrylic acid) polymers. For citric acid, the complexation of the surface charges was performed during the synthesis by adding tri-sodium citrate in excess under vigorous stirring, followed by washing steps with acetone. To adsorb polyelectrolytes on the surface of the nanoparticles, we followed the precipitation-redispersion protocol [3,4]. The precipitation of the iron oxide dispersions by PAA2K was performed in acidic conditions (pH 2). The precipitate was separated from the solution by centrifugation, and its pH was increased by addition of ammonium hydroxide. The precipitate redispersed spontaneously at pH 7-8, yielding a clear solution that contained the polymer-coated particles.
Figure 1 : Transmission optical microscopy (10×) images of NIH/3T3 fibroblasts (a). In (b) and (c), the cells were incubated with Cit-γ-Fe 2 O 3 and PAA 2K -γ-Fe 2 O 3 during 24 h at a concentration of 1 mM. (d) MTT viability assays conducted on NIH/3T3 cells incubated with bare, citrate and PAA 2K coated particles.
NIH/3T3 fibroblast cells from mice were grown as a monolayer in Dulbecco's Modified Eagle's Medium (DMEM) with high glucose (4.5 g L-1) and stable glutamine (PAA Laboratories GmbH, Austria). This medium was supplemented with 10% Fetal Bovine Serum (FBS) and 1% penicillin/streptomycin (PAA Laboratories GmbH, Austria). Exponentially growing cultures were maintained in a humidified atmosphere of 5% CO2-95% air at 37 °C; under these conditions the plating efficiency was 70-90% and the doubling time was 12-14 h. MTT assays were performed with coated and uncoated iron oxide nanoparticles for metal molar concentrations [Fe] from 10 µM to 10 mM. Cells were seeded into 96-well microplates, and the plates were placed in an incubator overnight to allow for attachment and recovery.
Results and Discussion
Optical microscopy: In order to determine their optimal growth conditions, the fibroblasts were first plated in culture medium without particles. Fig. 1a provides an illustration of the NIH/3T3 cells observed by optical microscopy at 50% coverage (objective 10×). Figs. 1b and 1c display NIH/3T3 fibroblasts that were exposed for 24 h to Cit-γ-Fe2O3 and PAA2K-γ-Fe2O3 nanoparticles, respectively, at [Fe] = 1 mM. Note that, for contrast reasons, the supernatant containing the citrate-coated particles was removed and, after thorough washing with PBS, replaced by pristine medium. For the PAA2K-coated particles, the images were recorded in the same conditions as for the control, the particles being dispersed in the cell medium. In Fig. 1, there is a marked difference between cells incubated with citrate- and with PAA2K-coated particles. Due to a massive internalization and/or adsorption of the nanomaterial by the cells, the cells exposed to the citrate-coated particles were more difficult to detect: the dark patterns seen in the bottom-left image stem from internalized/adsorbed Cit-γ-Fe2O3. In contrast, the cells incubated with PAA2K-γ-Fe2O3 behaved as the control.
MTT assays: Toxicity assays alone can quantify cell viability under nanoparticle exposure. MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide) viability assays were conducted on NIH/3T3 cells for molar concentrations [Fe] varying from 10 µM to 10 mM and an incubation time of 24 h. Fig. 1d displays the cellular viability as a function of [Fe]. Uncoated, citrate-coated and PAA2K-coated particles were surveyed. For the three iron oxide samples, the viability remained at the 100% level within experimental accuracy. These findings indicate normal mitochondrial activity for the cultures. Our results are in good agreement with earlier reports showing that both crystalline forms of iron oxide, maghemite (γ-Fe2O3) and magnetite (Fe3O4), are biocompatible with respect to cell cultures [1].
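The viability values reported in such assays are obtained by normalizing absorbance readings to the untreated control. A minimal sketch, assuming hypothetical triplicate readings (the absorbance values and the blank correction below are illustrative, not the paper's data):

```python
def viability_percent(a_treated, a_control, a_blank=0.0):
    """Relative viability from MTT absorbance readings, as a percentage of
    the untreated control: 100 * (A_t - A_b) / (A_c - A_b)."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical triplicate absorbances at one iron dose:
treated, control, blank = [0.82, 0.79, 0.85], [0.81, 0.83, 0.80], 0.05
v = viability_percent(mean(treated), mean(control), blank)   # ~100 %
```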
Nanoparticle uptake: The uptake of nanoparticles by the cells was monitored by UV-Visible spectrometry. Aliquots of the supernatants in contact with the cells were collected at regular time intervals and analyzed with respect to their oxide concentration. In the following, we assume that the nanoparticles not present in the supernatant, and thus not detected by spectrometry, were either adsorbed on the cellular membranes or taken up by the cells [7]. This assumption allowed us to evaluate the mass of metallic atoms MFe internalized or adsorbed by the cells as a function of time. Using the proliferation data (not shown), MFe was normalized with respect to the number of cells present at a given time, resulting in masses expressed in picograms of iron per cell. Fig. 2 compares the mass of iron MFe incorporated or adsorbed per cell for Cit-γ-Fe2O3 and PAA2K-γ-Fe2O3. MFe was found to be unchanged for the polymer-coated particles, whereas it increased logarithmically with time for the citrate-coated particles. After a 24 h incubation, the mass per cell reached a value of 250 pg, a result that compares well with literature data on human fibroblasts [7]. The inset of Fig. 2 displays centrifugation pellets of NIH/3T3 cells that were incubated with Cit-γ-Fe2O3 and PAA2K-γ-Fe2O3 nanoparticles for 24 h. After incubation, the cultures were washed thoroughly with PBS in order to remove particles loosely adsorbed onto the cellular membranes. The surfaces of the flasks were then mechanically scraped and the cell suspensions were centrifuged in Eppendorf vials. The exposure of cells to Cit-γ-Fe2O3 resulted in a significant darkening of the centrifugation pellet as compared to the PAA2K-γ-Fe2O3-treated cell line (inset of Fig. 2). It may seem surprising that the NIH/3T3 cells behaved so differently with respect to particles carrying the same surface charge.
Cell membranes are indeed known to be negatively charged on average, and therefore should exert a net electrostatic repulsion towards surrounding diffusing particles of the same charge. To understand the differences in uptake between citrate- and PAA2K-coated particles, the colloidal stability of the particles in various solvents, including brines, buffers and cellular growth media, was recently put under scrutiny [5]. Here, we underscore the results obtained when the particles were dispersed in the complete culture medium. In DMEM, we found that the citrate-coated nanoparticles precipitated instantaneously, whereas the PAA2K-coated particles remained dispersed (i.e. unaggregated) over weeks. We anticipate that the pronounced uptake exhibited by Cit-γ-Fe2O3 is related to the destabilization of the initially dispersed nanoparticles and their accumulation by gravity in the vicinity of the cell membranes [2].
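The depletion-based estimate of the internalized mass per cell described above amounts to simple bookkeeping: the iron missing from the supernatant is converted to a mass and divided by the cell count. A sketch with illustrative numbers (not the paper's raw data):

```python
def iron_uptake_pg_per_cell(c0_mM, ct_mM, volume_mL, n_cells):
    """Mass of iron (pg) internalized/adsorbed per cell, estimated from the
    depletion of the supernatant concentration (c0 -> ct). Input values in
    the example below are illustrative, not measured data."""
    M_FE = 55.85                                          # g/mol, molar mass of iron
    moles = (c0_mM - ct_mM) * 1e-3 * volume_mL * 1e-3     # mol removed from supernatant
    return moles * M_FE * 1e12 / n_cells                  # g -> pg, per cell

# e.g. 1 mM dropping to 0.55 mM in 2 mL of medium over ~2e5 cells:
m = iron_uptake_pg_per_cell(1.0, 0.55, 2.0, 2e5)
```

With these hypothetical inputs the estimate lands near 250 pg per cell, the order of magnitude reported for Cit-γ-Fe2O3 after 24 h.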
Conclusion
In this work, the toxicity and uptake of iron oxide nanoparticles by NIH/3T3 fibroblasts were investigated. The proliferative properties of the cells and their viability in the presence of engineered nanomaterials were evaluated by i) transmission optical microscopy to determine the growth laws of the cell populations, ii) MTT assays as a function of the metal dose, and iii) UV-Visible spectrometry for the estimation of particle uptake. The stronger uptake shown by Cit-γ-Fe2O3 was related to the destabilization of the initially dispersed nanoparticles in the cell culture medium and their sedimentation near the surfaces of the cells. For the PAA2K-coated particles, the polymer coating ensured long-term (> 1 year) stability even in physiological conditions; for these particles, the uptake resulted only from diffusion and single-particle adsorption on the cell membranes.
Rasfonin promotes autophagy and apoptosis via upregulation of reactive oxygen species (ROS)/JNK pathway
ABSTRACT Rasfonin is a fungal secondary metabolite with demonstrated antitumour effects. Reactive oxygen species (ROS) are formed as a natural by-product of the normal metabolism of oxygen and have important roles in cell signalling and homeostasis. Studies have reported that many fungal secondary metabolites activate either autophagy or apoptosis through ROS generation. In a former study, we revealed that rasfonin induced both autophagy and apoptosis; however, whether it promoted these processes via upregulation of ROS generation remained unexplored. In the current work, we demonstrate that rasfonin induced autophagy and apoptosis concomitant with dramatically increased ROS production. N-Acetylcysteine (NAC), an often-used ROS inhibitor, decreased both the autophagic flux and the caspase-dependent apoptosis induced by rasfonin. Flow cytometry analysis revealed that NAC was able to reduce rasfonin-dependent apoptosis and necrosis. In a methanethiosulfonate (MTS) assay, we observed that NAC significantly blocked rasfonin-induced loss of cell viability. In addition, we found that rasfonin increased the phosphorylation of c-Jun NH2-terminal kinase (JNK), which was inhibited by NAC. SP600125, an inhibitor of JNK, reduced rasfonin-dependent autophagic flux and apoptosis. Moreover, we demonstrated that rasfonin inhibited the phosphorylation of both 4E-binding protein 1 (4E-BP1) and S6 kinase 1 (S6K1), two main substrates of mammalian target of rapamycin (mTOR). Collectively, rasfonin activated autophagy and apoptosis through upregulation of ROS/JNK signalling.
Introduction
Macroautophagy (hereafter called autophagy) is a degradative process that involves the delivery of cytoplasmic components, such as proteins, organelles and invading microbes, to the lysosome for digestion (Hippert et al. 2006). Autophagy has been implicated in various human diseases and can promote either cell survival or cell death (Kroemer and Levine 2008; Gump et al. 2014). In different cellular contexts, a complex of signalling pathways controls the activation of autophagy (Zhu et al. 2007; Maiuri et al. 2010). Reactive oxygen species (ROS) are highly reactive oxygen free-radical or non-radical molecules that are produced by multiple mechanisms in cells (Apel and Hirt 2004). These ROS are important signalling molecules that mediate many signal transduction pathways, play critical roles in cell survival and death, and participate in many diseases (Ray et al. 2012). Recently, ROS were demonstrated to promote starvation-induced autophagy, antibacterial autophagy and autophagic cell death (Scherz-Shouval and Elazar 2007). There is now an accumulating consensus that ROS control autophagy in multiple contexts and cell types (Scherz-Shouval and Elazar 2007, 2011). Moreover, changes in ROS and autophagy regulation contribute to cancer initiation and progression (Tang et al. 2010). In tumour treatment, therapeutic drugs that increase ROS and autophagy implicate both in their mechanism of cell death (Ray et al. 2012).
For a long time, apoptosis was believed to be the sole form of programmed cell death during development, homeostasis and disease, whereas necrosis was considered an unregulated and uncontrollable process. Growing evidence reveals that necrosis can also occur in a regulated manner (Elmore 2007). Based on morphology, three major types of programmed cell death have been coined: apoptosis, autophagic cell death and programmed necrosis (Eisenberg-Lerner et al. 2009). Under oxidative stress, ROS, including free radicals such as superoxide, hydroxyl radical and hydrogen peroxide, are generated at high levels, leading to cellular damage and cell death (Gump et al. 2014). This kind of cell death often involves the induction of apoptosis through caspase activation. In macrophages, one study reported that ROS contribute to caspase-independent cell death (Yee et al. 2014). Therefore, in addition to autophagy, ROS are actively involved in the regulation of apoptosis.
Accumulating evidence has indicated that there are several molecular connections among autophagy, apoptosis and programmed necrosis (Eisenberg-Lerner et al. 2009). In cells undergoing persistent autophagy, hallmarks of apoptosis (such as caspase activation) and of necrotic cell death (organelle swelling and plasma membrane rupture) are often observed (Chu and Shatkin 2008). Depending on the cellular setting, the same proteins can regulate both autophagic and apoptotic processes. For example, p53, a potent inducer of apoptosis, also promotes autophagy via its target gene, damage-regulated modulator of autophagy (DRAM). Beclin 1, required for formation of autophagic vesicles, was also found to interact with both Bcl-2 and Bcl-xL (Swerdlow and Distelhorst 2007). To date, three different types of interplay between autophagy and apoptosis have been suggested: autophagy can act as a partner, an antagonist or an enabler of apoptosis (Longo et al. 2008).
In response to a variety of different stimuli, mitogen-activated protein kinases (MAPKs) transduce signals from the cell membrane to the nucleus and participate in various intracellular signalling pathways that control a wide spectrum of cellular processes, including growth, differentiation and stress responses (Edick et al. 2007). MAPKs include extracellular signal-regulated kinase (ERK), c-Jun NH2-terminal kinase (JNK) and p38 MAPK (Gregory et al. 2004). Unlike the ERK pathway, JNK and p38 MAPK are weakly activated by growth factors but respond strongly to stress signals, including tumour necrosis factor, interleukin-1, ionizing and UV irradiation, hyperosmotic stress and chemotherapeutic drugs (Heinrichsdorff et al. 2008). Activation of these kinases is strongly associated with apoptotic cell death induced by stress stimuli (Chu and Shatkin 2008). Recent studies reported that JNK also plays a critical role in the regulation of autophagy (Goussetis et al. 2010).
Many fungal secondary metabolites have been demonstrated to increase levels of cellular oxidative stress (Wu et al. 2012). 11ʹ-deoxyverticillin A is a member of a class of fungal secondary metabolites known as epipolythiodioxopiperazines (ETPs), and its toxicity to animal cells depends on the generation of ROS via redox cycling (Zhang et al. 2011). X15-2, another small-molecule compound, promotes autophagy through ROS generation (Xue et al. 2015).
In the present study, we explored whether rasfonin could produce ROS and demonstrated that ROS played a critical role in rasfonin-dependent autophagy and apoptosis. Moreover, we revealed that JNK signalling functioned downstream of ROS to mediate rasfonin-induced autophagy and caspase-dependent apoptosis.
Results
Autophagy is involved in rasfonin-induced cell death processes

The human renal cancer cell line ACHN was selected to examine rasfonin-induced cell death in the present study. As shown in Figure 1(a), rasfonin caused a loss of ACHN cell viability in a time-dependent manner. In the colony growth assay, rasfonin markedly suppressed cell growth (Figure 1(b)). Flow cytometry data revealed that rasfonin-induced death of ACHN cells could be either apoptotic or necrotic (either necrosis or secondary necrosis) (Figure 1(c)). Interestingly, 3-methyladenine (3-MA), a widely used inhibitor of autophagy (Kabeya et al. 2004), partially rescued rasfonin-induced loss of cell viability (Figure 1(d)), suggesting that autophagy is involved in rasfonin-activated cell death processes.
Rasfonin enhances autophagy and inhibits mTORC1 signalling
Electron microscopy (EM), considered one of the most convincing approaches for detecting autophagy (Klionsky et al. 2012), was used to determine whether rasfonin induces autophagy. We found that rasfonin rapidly induced an obvious accumulation of membrane vacuoles in ACHN cells at both the 0.5- and 1-h time points (Figure 2(a)). In immunoblotting assays, rasfonin increased the ratio of LC3-II to actin, an indicator of autophagy (Kabeya et al. 2004), at the 0.5-, 1- and 12-h time points. Chloroquine (CQ), an inhibitor of autophagosome-lysosome fusion often used to assess autophagic flux (Klionsky et al. 2012), further increased rasfonin-induced LC3-II accumulation, indicating that rasfonin activates autophagic flux (Figure 2(b) and 2(c)). Moreover, we observed that rasfonin promoted the degradation of p62/SQSTM1 (Sequestosome 1), a selective substrate of autophagy that is degraded when autophagy is activated (Figure 2(c)). Since the kinase activity of mTOR can be inferred by measuring the phosphorylation of its two substrates, S6K1 and 4E-BP1, we next examined the phosphorylation of S6K1 and 4E-BP1 in response to rasfonin stimulation. As expected, rasfonin decreased the phosphorylation of both S6K1 and 4E-BP1, suggesting that rasfonin triggers the autophagic process by downregulating mTORC1 signalling (Figure 2(d)).
Rasfonin stimulates autophagy and apoptosis through rapidly ROS generation
Overproduction of ROS causes cellular damage and is involved in the regulation of both autophagy and apoptosis (Apel and Hirt 2004); we therefore next determined the participation of ROS in rasfonin-induced cell death processes. Rasfonin dramatically increased ROS production, reaching a maximum at the 0.5-h time point (Figure 3(a)). N-Acetylcysteine (NAC), a commonly used ROS scavenger, reduced rasfonin-induced ROS generation (Figure 3(a)), and rasfonin-induced cell death was suppressed at 24 and 48 h (Figure 3(b)). Flow cytometry data revealed that NAC decreased rasfonin-dependent apoptosis and necrosis (Figure 3(c)), indicating that rasfonin activates these cell death pathways by mediating ROS production. Moreover, we observed that NAC attenuated rasfonin-induced autophagy, as assessed by LC3-II accumulation and p62 degradation in the presence of CQ (Figure 3(d)). Although rasfonin decreased LC3-II levels at the 2-h time point, CQ was able to prevent LC3-II degradation (Grumati et al. 2010; Klionsky et al. 2012), suggesting an enhanced autophagic flux (Figure 3(d)).
Meanwhile, NAC also blocked the cleavage of PARP-1 (Figure 3(e)), a hallmark of apoptosis (Amé et al. 2004), indicating that ROS are also involved in the rasfonin-induced apoptotic process.
Inhibition of the JNK pathway attenuates both autophagy and caspase-dependent apoptosis by rasfonin

JNK belongs to the MAPK signalling pathways and is activated upon ROS stimulation, as is its downstream factor NFκB. We observed that rasfonin increased the phosphorylation of JNK, which was inhibited by NAC (Figure 4(a)), confirming that ROS can act upstream of JNK. SP600125 (SP), an inhibitor of JNK, completely blocked rasfonin-dependent autophagy at the 1-h time point (Figure 4(b)). At both the 2- and 12-h time points, it decreased rasfonin-induced autophagic flux, judging by LC3-II accumulation and p62 degradation in the presence of CQ (Figure 4(b)). MG132, a proteasome inhibitor often used to inhibit NFκB (Ko et al. 2010; Zanotto-Filho et al. 2012), attenuated rasfonin-induced autophagy, as assessed by LC3-II accumulation and p62 degradation in the presence of CQ (Figure 4(c)). Moreover, we found that SP inhibited the cleavage of PARP-1 by rasfonin (Figure 4(d)). (Figure legend: cell viability was detected by MTS assay. (c) After treatment with rasfonin (6 μM) for 12 h, apoptosis and necrosis were determined by flow cytometry; apoptotic: AV-positive and PI-negative; necrotic: PI-positive. Histogram data are presented as mean ± SD from three independent experiments. (d) ACHN cells were treated with rasfonin (6 μM) in the presence or absence of NAC (50 μM) for 2 h, and cell lysates were analysed by immunoblotting with the indicated antibodies. Densitometric ratios of LC3-II and p62 to actin are shown below the blots and are representative of at least three independent experiments; the presence of cleaved PARP (cPARP) indicates induction of apoptosis. N/A: not available.) Together, these results indicated that JNK functions downstream of ROS and plays a critical role in the regulation of rasfonin-induced autophagy and caspase-dependent apoptosis.
Discussion
Rasfonin, a fungal secondary metabolite, stimulates autophagy and apoptosis; however, it remained unknown whether rasfonin promotes these cell death processes through ROS. In this study, we clearly revealed that rasfonin induced rapid generation of ROS, which likely mediates rasfonin-dependent autophagy and apoptosis via the JNK signalling pathway. Cancer cells produce higher levels of ROS than normal cells, and this contributes to cancer progression (Hart et al. 2015). ROS are important signalling molecules that mediate many signal transduction pathways and can benefit cellular survival (Focaccetti et al. 2015); however, overproduction of ROS damages cells by activating apoptosis or necrosis. Growing evidence reveals that ROS also play an important role in the regulation of autophagy (Farah et al. 2016). Consistent with these former studies, we found here that ROS are critical for rasfonin-dependent autophagy, necrosis and apoptosis. Huang et al. (2011) reported that ROS regulate autophagy through distinct mechanisms depending on cell type and stimulation conditions. In cancer treatment, while some therapeutic drugs that augment ROS and autophagy have been implicated in their mechanism of cell death, other drugs that generate ROS and promote autophagy seem to have a protective effect (Focaccetti et al. 2015; Koo et al. 2016). Concerning rasfonin, we found that it induced both autophagy and apoptosis through ROS. Immunoblotting data demonstrated that NAC abolished rasfonin-induced PARP-1 cleavage, whereas flow cytometry results indicated that NAC only partially decreased rasfonin-dependent apoptotic cell death. Therefore, we speculate that rasfonin may also activate caspase-independent apoptosis. Collectively, it is reasonable to assume that, through induction of ROS, rasfonin can trigger cell death via multiple pathways.
Many signalling pathways have been found to regulate the autophagic process (Jung et al. 2010), including adenosine 5′-monophosphate (AMP)-activated protein kinase, Akt/mTOR and the MAPKs. MAPKs include ERK, JNK and p38 MAPK, and control a wide spectrum of cellular processes (Kim et al. 2008a). Accumulating evidence indicates that MAPKs actively participate in the regulation of autophagy (Peter et al. 2010). In Parkinson's disease and Lewy body disease, human tissue studies support a role for ERK/MAPK in the regulation of autophagy (Jung et al. 2010). In colorectal cancer cells, a novel cell type-specific role of p38α MAPK in controlling and mediating autophagy has been described (Kim et al. 2008a). Both autophagy and apoptosis have been found to be regulated by JNK-mediated Bcl-2 phosphorylation (Kim et al. 2008b). Wei et al. reported that JNK1-mediated Bcl-2 phosphorylation interferes with its binding to the proautophagy BH3 domain-containing protein Beclin 1 and thus has a dual role in autophagy and apoptosis regulation (Wei et al. 2008). Similar to their observations, we demonstrated that rasfonin activates the JNK signalling pathway, and that ROS function upstream of JNK to regulate both rasfonin-induced apoptosis and autophagy.
In conclusion, rasfonin induces autophagy through oxidative stress/JNK signalling, providing a novel mechanism for the cell death processes activated by this fungal secondary metabolite. These findings enrich the known machinery and broaden our understanding of fungal secondary metabolite-induced autophagy, apoptosis and necrosis.
Cell culture and western blot analysis

ACHN cells (a human renal cancer cell line) were grown in Dulbecco's modified Eagle medium (DMEM) containing 10% foetal bovine serum (GIBCO) and 1% antibiotics. Cells were grown to 70-80% confluence and treated with the indicated compounds for the indicated times. Whole-cell lysates were prepared in a Triton X-100/glycerol lysis buffer containing 50 mM Tris-HCl (pH 7.4), 4 mM ethylene diamine tetraacetic acid, 2 mM ethylene glycol tetraacetic acid and 1 mM dithiothreitol, supplemented with 1% Triton X-100, 1% sodium dodecyl sulfate (SDS) and protease inhibitors. Lysates were then separated on SDS-polyacrylamide gel electrophoresis gels (13, 10 or 8%, according to the molecular weights of the proteins of interest) and transferred to polyvinylidene fluoride membranes. Immunoblotting was performed using appropriate primary antibodies and suitable horseradish peroxidase-conjugated secondary antibodies, followed by detection with enhanced chemiluminescence (Pierce Chemical).
Cell viability assay (MTS)
Cells were plated in 96-well plates (5000-10,000 cells per well) in 100 µl complete culture medium. After overnight culture, the medium was replaced with complete medium that was either drug-free or contained rasfonin or other chemicals. The cells were cultured for various periods and cellular viability was determined with CellTiter 96 Aqueous Non-Radioactive Cell Proliferation Assay (Promega).
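The MTS readout above is typically expressed as viability relative to the untreated control. A minimal sketch of that calculation, using hypothetical background-subtracted absorbance values (the numbers and the triplicate layout are assumptions, not data from this study):

```python
# Hypothetical MTS absorbance readings (arbitrary units),
# background-subtracted, triplicate wells per condition.
control = [1.20, 1.15, 1.18]   # untreated wells
treated = [0.66, 0.70, 0.62]   # e.g. rasfonin-treated wells

def viability_percent(sample, reference):
    """Mean sample absorbance as a percentage of the mean reference."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(sample) / mean(reference)

v = viability_percent(treated, control)
print(round(v, 1))  # viability relative to control, in percent
```

In practice each plate also carries medium-only blanks, which would be subtracted from every well before this ratio is taken.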
Colony growth assay
Cells were seeded at a concentration of 300 cells/ml and cultured for 2 weeks to allow colony growth in the presence or absence of the indicated concentrations of rasfonin. Pictures were taken after 4% paraformaldehyde fixation and trypan blue staining, and colony numbers were then counted using ImageJ.
Flow cytometry assay
ACHN cells were treated with the indicated compounds, then trypsinised and harvested (keeping all floating cells), washed with phosphate-buffered saline (PBS), incubated with fluorescein isothiocyanate (FITC)-labelled annexin V and propidium iodide (PI) according to the instructions of the Annexin-V-FITC Apoptosis Detection Kit (Biovision Inc., K101-100) and analysed by flow cytometry (FACSAria, Becton Dickinson). Cells staining annexin V-positive and PI-negative were considered apoptotic, whereas PI-positive staining was considered necrotic.
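The gating rule described above (annexin V-positive/PI-negative = apoptotic; PI-positive = necrotic) can be sketched as a simple per-event classifier. The fluorescence intensities and gate thresholds below are hypothetical, not instrument settings from this study:

```python
# Assumed gate thresholds in arbitrary fluorescence units.
AV_CUT, PI_CUT = 100.0, 100.0

def classify(av, pi):
    """Apply the AV/PI gating rule described in the text to one event."""
    if pi >= PI_CUT:
        return "necrotic"    # PI-positive (necrosis or secondary necrosis)
    if av >= AV_CUT:
        return "apoptotic"   # annexin V-positive, PI-negative
    return "viable"

# Hypothetical (annexin V, PI) intensities for four events.
events = [(250, 20), (30, 10), (500, 400), (150, 50)]
labels = [classify(av, pi) for av, pi in events]
print(labels)
```

Real cytometry software applies the same logic with gates drawn on compensated two-parameter plots; this sketch only illustrates the classification rule itself.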
Electron microscopy
Electron microscopy was performed as described. Briefly, samples were washed three times with PBS, trypsinised and collected by centrifugation. The cell pellets were fixed with 4% paraformaldehyde overnight at 4°C, postfixed with 1% OsO4 in cacodylate buffer for 1 h at room temperature (RT) and dehydrated stepwise with ethanol. The dehydrated pellets were rinsed with propylene oxide for 30 min at RT and then embedded in Spurr resin for sectioning. Thin sections were observed under a transmission electron microscope (JEM1230, Japan).
Reactive oxygen species (ROS) assay
DCFH-diacetate (DCFH-DA) passively diffuses into cells and is deacetylated by esterases to form nonfluorescent 2′,7′-dichlorodihydrofluorescein (DCFH). In the presence of ROS, DCFH is oxidised to the fluorescent product DCF, which is trapped inside the cells. Cells were plated in 96-well plates (20,000-30,000 cells per well) in 100 µl complete culture medium. After overnight culture, the culture medium was removed and the cells were washed three times with PBS. DCFH-DA, diluted to a final concentration of 10 μM in DMEM/F12, was added to the cultures and incubated for 20 min at 37°C. Fluorescence was read at 485 nm excitation and 530 nm emission with a fluorescence plate reader (Genios, TECAN). The increase in fluorescence relative to control was taken as the increase in intracellular ROS.
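The DCF readout above reduces to a fold change in fluorescence over the untreated control. A small sketch under assumed values (the blank subtraction step and all numbers are hypothetical, not measurements from this study):

```python
# Hypothetical fluorescence readings (ex 485 nm / em 530 nm, arbitrary units).
blank = 50.0                          # cell-free background well
control_wells = [150.0, 160.0, 155.0] # untreated cells
treated_wells = [460.0, 480.0, 470.0] # e.g. rasfonin-treated cells

def fold_over_control(treated, control, blank):
    """Background-subtracted mean fluorescence, relative to control."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated) - blank) / (mean(control) - blank)

fold = fold_over_control(treated_wells, control_wells, blank)
print(round(fold, 2))  # fold increase taken as the rise in intracellular ROS
```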
Statistical analysis
Normally distributed data are shown as mean ± SD and were analysed using one-way analysis of variance and the Student-Newman-Keuls post hoc test.
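The one-way ANOVA step can be illustrated by computing the F statistic directly; the Student-Newman-Keuls post hoc comparison is not shown here (it would typically be run in a dedicated statistics package). The three groups of values are hypothetical:

```python
# Hypothetical measurements for three treatment groups (n = 3 each).
groups = [
    [1.20, 1.15, 1.18],
    [0.90, 0.95, 0.92],
    [0.60, 0.62, 0.65],
]

def f_oneway(groups):
    """One-way ANOVA F statistic: between-group over within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    mean = lambda g: sum(g) / len(g)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = f_oneway(groups)
# The F(2, 6) critical value at alpha = 0.05 is about 5.14.
print(F > 5.14)  # True means the group means differ significantly
```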
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by grants from the National Natural Science Foundation of China [grant number 31371403].
Characterization of Cutaneous and Subcutaneous Neoplasms in Canines and Malignancy Prediction Using B-Mode Ultrasonography, Doppler, and ARFI Elastography
Cutaneous and subcutaneous neoplasms are highly prevalent in dogs, ranging from benign to highly aggressive and metastatic lesions. The diagnosis is obtained through histopathology; however, this is an invasive technique that may take a long time to yield a result, delaying the start of adequate treatment. Thus, there is a need for non-invasive tests that can help in the early diagnosis of this type of cancer. The aim of this study was to verify the accuracy of ultrasonography methods to predict malignancy in cutaneous and subcutaneous canine neoplasms. In addition, we aimed to propose an ultrasonography evaluation protocol and to characterize the neoplasms using the three proposed techniques.
Based on the possibility of predicting skin tumor malignancy in canines using ultrasound techniques, this study aimed to evaluate cutaneous and subcutaneous neoplasms using B-mode, Doppler, and ARFI elastography, to determine the accuracy of ultrasonography methods, to suggest an evaluation protocol for these neoplasms, and to perform the ultrasonographic characterization of the specific tumor types included.
Histopathologic results
A total of 130 cutaneous neoplasms (98 malignant and 32 benign) were evaluated, resulting in 21 histopathologic classifications (Table 1). The most prevalent malignant neoplasms in this study were high-grade cutaneous mast cell tumors (18.46%), while the most prevalent benign neoplasms were lipomas (13.07%).
B-mode ultrasonography
In B-mode, measurements of length (3.02±2.85 cm), width (2.58±2.18 cm), and height (1.79±1.71 cm) were not associated with tumor malignancy, nor were echogenicity, capsule, and echotexture pattern (smooth or rough) (Table 2). Echotexture (P=0.007), invasiveness of adjacent tissues (P=0.002), hyperechogenic spots (P=0.031), and cavitary areas (P=0.001) were shown to be predictive characteristics of malignancy. Thus, heterogeneous neoplasms with signs of invasiveness, hyperechogenic spots, and cavitary areas were more likely to be malignant (Figure 1). The predictive values of sensitivity, specificity, accuracy, PPV and NPV are shown in Table 2 (Se: sensitivity; Sp: specificity; Ac: accuracy; PPV: positive predictive value; NPV: negative predictive value), and the ultrasonographic characterization of the neoplasms is shown in Table 3. Although an association between vascularization intensity and malignancy was not observed (Table 4; Se: sensitivity; Sp: specificity; AUC: area under the curve), only one benign neoplasm (an infiltrative lipoma) presented intense vascularization. There were also no associations between tumor malignancy and the location or pattern of vascularization.
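The predictive values reported in Table 2 follow the standard definitions from a 2x2 confusion table. A sketch with hypothetical counts (not the study's actual tallies):

```python
def predictive_values(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    se = tp / (tp + fn)                    # sensitivity
    sp = tn / (tn + fp)                    # specificity
    ac = (tp + tn) / (tp + fp + fn + tn)   # accuracy
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    return se, sp, ac, ppv, npv

# Hypothetical counts: test positive/negative vs malignant/benign.
se, sp, ac, ppv, npv = predictive_values(tp=90, fp=10, fn=8, tn=22)
print([round(x, 3) for x in (se, sp, ac, ppv, npv)])
```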
Identification of arterial flow using pulsed Doppler was possible in only 51 neoplasms. Of these, 42 were malignant (82.35%) and only nine benign (17.65%). The peak systolic velocity, diastolic velocity, and resistivity index were not predictive of malignancy on pulsed Doppler. However, the pulsatility index proved significant in differentiating between malignant and benign neoplasms (P=0.015), with a cut-off value >0.93 indicative of malignancy, with 90.5% sensitivity, 55.6% specificity, and 75.7% accuracy (Figure 2). The Doppler ultrasonographic characterization of the cutaneous neoplasms is shown in Table 5, which presents intensity, location and vascularization pattern, systolic peak, diastolic velocity, resistivity index, and pulsatility index for tumor types with two or more cases (*only one neoplasm with arterial flow; SP: systolic peak; DV: diastolic velocity; RI: resistivity index; PI: pulsatility index; SD: standard deviation; NA: not applicable).
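The indices above follow their conventional spectral-Doppler definitions: RI = (PSV - EDV)/PSV and PI = (PSV - EDV)/TAMV, with the study's PI cut-off of 0.93 flagging malignancy. A sketch with hypothetical velocities from a single spectral trace:

```python
def resistivity_index(psv, edv):
    """RI from peak systolic (PSV) and end-diastolic (EDV) velocity."""
    return (psv - edv) / psv

def pulsatility_index(psv, edv, tamv):
    """PI; TAMV is the time-averaged mean velocity over the cardiac cycle."""
    return (psv - edv) / tamv

# Hypothetical velocities in m/s for one spectral trace.
psv, edv, tamv = 0.40, 0.10, 0.25
ri = resistivity_index(psv, edv)
pi = pulsatility_index(psv, edv, tamv)
suspicious = pi > 0.93   # the study's malignancy cut-off for PI
print(round(ri, 2), round(pi, 2), suspicious)
```

In the study at least three spectral traces were averaged per lesion; the single-trace example above only illustrates the formulas.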
ARFI Elastography
Both qualitative and quantitative assessments were significant in predicting malignancy (Table 6). Regarding deformability, 11 benign and nine malignant neoplasms were classified as deformable, while 21 benign and 89 malignant were non-deformable. Deformability was shown to be predictive of tumor malignancy with 90.2% sensitivity, 35.48% specificity, 87.09% accuracy, 81.3% PPV, and 55% NPV. Eighty-five neoplasms had at least four malignancy-predictive characteristics (Table 8); 72 of these were indeed malignant, and only 13 were benign. Five or more malignancy-predictive characteristics were found in 60 neoplasms, of which 53 were malignant and seven benign. Forty-five neoplasms had at least six characteristics, of which 41 were malignant and four benign. When considering all seven malignancy-predictive characteristics, 16 neoplasms were counted, of which 14 were malignant and only two benign.
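The combined protocol amounts to counting how many of the seven malignancy-predictive characteristics a lesion shows and flagging it at a chosen threshold. The feature list below is our reading of the significant findings (heterogeneous echotexture, invasiveness, hyperechogenic spots, cavitary areas, PI > 0.93, non-deformability, SWV > 3.52 m/s), and the example lesion is hypothetical:

```python
# Seven malignancy-predictive characteristics, as read from the text.
FEATURES = [
    "heterogeneous_echotexture", "invasiveness", "hyperechogenic_spots",
    "cavitary_areas", "pi_above_0_93", "non_deformable", "swv_above_3_52",
]

def feature_count(lesion):
    """Number of predictive characteristics present in one lesion."""
    return sum(bool(lesion.get(f)) for f in FEATURES)

# Hypothetical lesion findings.
lesion = {
    "heterogeneous_echotexture": True, "invasiveness": True,
    "hyperechogenic_spots": False, "cavitary_areas": True,
    "pi_above_0_93": True, "non_deformable": True, "swv_above_3_52": True,
}
count = feature_count(lesion)
flagged = count >= 4   # the loosest threshold examined in the study
print(count, flagged)
```

Raising the threshold from 4 toward 7 trades sensitivity for specificity, as the counts in the text show.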
Discussion
This study provides important information regarding the diagnosis and classification of cutaneous and subcutaneous canine neoplasms, as it was possible to establish malignancy-predictive characteristics with all techniques used (B-mode, Doppler, and ARFI elastography). In addition, it was possible to define an ultrasound examination protocol that could contribute to lesion diagnosis and prognosis, and to provide individual ultrasound characteristics for each studied tumor type. Because ultrasonography is a complementary method, characteristics with high sensitivity and high positive predictive value are the most useful; such results were obtained with all three ultrasound techniques performed.
Given the high number of cutaneous and subcutaneous neoplasm types, it should be considered that they have different structural components and biological behaviors. They can range from benign to highly aggressive and metastatic lesions [18], which justifies the moderate results observed. The authors would like to emphasize the importance of studies on specific cancer types, as the present results differed from previous findings in canine mammary tumors: in another study, of breast tumors, different characteristics and predictive values of malignancy were found [14].
There were no associations between malignancy and tumor measurements in this study, which can be explained by the fact that the neoplasms were diagnosed at different stages. There were also no associations with echogenicity, which may be related to the different pathological processes involved, such as active inflammation or tissue necrosis in different tumor types [19]. A preliminary study involving 42 cutaneous neoplasms showed an association between malignancy and hypoechogenicity [15]. The greater number of neoplasms and the specific types of skin cancer included in the present study may explain the discrepancy between the two studies.
The heterogeneous echotexture indicative of malignancy seen in cutaneous and subcutaneous tumors is explained by their different structural components, such as the presence of cavitary areas, points of fibrosis, or microcalcifications. The association between heterogeneous echotexture and malignancy has already been demonstrated in previous studies with different types of neoplasms (cutaneous and mammary) in both humans and animals [13,[15][16][17]20].
It was possible to identify signs of invasiveness of adjacent tissues because of their reactivity or the difficult delimitation of the tumors, and then to associate these signs with malignancy. This association is justified because malignant neoplasms tend to be more aggressive and invasive than benign ones, even requiring a greater safety margin when surgically removed [21].
On Doppler, no qualitative characteristic was associated with malignancy. It is known that tumor growth, in both malignant and benign lesions, depends on the blood supply [22]. It is therefore plausible that no significant results were obtained in neoplasm differentiation through these characteristics, even though other researchers have shown associations with malignancy in other tumor types, such as breast cancer in women and canine mast cell tumors [23,24].
Even though no vascularization points were observed in some tumors by color Doppler, the absence of vascularization cannot be assumed. The color Doppler technique has known limitations at the microvascular and tissue perfusion levels, requiring other methods for diagnostic complementation, such as contrast-enhanced ultrasound [14]. Nevertheless, this technique was not available and could not be tested in the present study. This limitation also made it impossible to evaluate all neoplasms by pulsed Doppler, so the Doppler velocimetry indices were calculated only for the portion that presented vascularization on color Doppler.
The lack of association between RI, systolic peak, and diastolic velocity and malignancy could be because arterial flow was identified in only nine benign neoplasms, being found predominantly in malignant lesions (82.35% of cases). However, a PI increase in malignant neoplasms was verified. An increase in this index has already been associated with malignancy in other types of lesions, such as ovarian and thyroid tumors in humans and metastases in canine lymph nodes. These increases may be related to the tumor's compressive effect, the angiogenesis process, and the presence of arteriovenous shunts, which promote turbulent flows with high perfusion rates [25][26][27].
Just as heterogeneity was observed on B-mode, the increased rigidity observed in malignant neoplasms can also be explained by the tissue components they may contain. In a previous study, greater stiffness was found in malignant mammary tumors of female dogs compared to benign ones, and this increase in stiffness was attributed to areas of fibrosis, microcalcifications, and even collagen deposition [14].
The stiffness of skin neoplasms in dogs has already been studied qualitatively and semiquantitatively (through scores) by elastography, with greater rigidity observed in malignant tissues; however, no true quantitative shear wave velocity values were obtained, only subjective analysis [17]. In contrast, this study provides more detailed information on neoplasm stiffness, since it was possible to verify that an SWV greater than 3.52 m/s was indicative of malignancy. In addition, the elastography method used (the ARFI method) allows more reliable results that are easy to obtain, with greater reproducibility and less interobserver variability than sonoelastography [28].
Some benign neoplasms, such as adenomas, showed high tissue stiffness, justified by the accumulation of keratin and a predominantly lymphoplasmacytic inflammatory infiltrate [29], which cause rigidity alterations in the keratinocytes and extracellular matrix [30,31].
Because ultrasonography is a complementary exam and should not be used alone to diagnose neoplasms, in this study we demonstrate the importance of associating the findings of the different techniques performed. Such an association has already been described for evaluating breast tumors in women, where an increase in accuracy was found when elastography and Doppler findings were combined [24]. In our study, as the number of malignancy-predictive characteristics increased, there was a decrease in the number of false positives and an increase in protocol specificity and positive predictive value.
Among the study's limitations, it should be considered that some tumor types had a low experimental number and, as noted in this discussion, there were some discrepancies in values (e.g., adenomas), which may be responsible for the low specificity and accuracy values observed.
Conclusions
Findings from this study indicate that ultrasonography, through its different techniques, has good applicability in predicting the malignancy of cutaneous and subcutaneous canine neoplasms. It provides quick and noninvasive results and can be used as a complementary method for this diagnosis. Furthermore, we found that an assessment protocol associating the findings of different ultrasound techniques allows greater reliability in diagnosing malignancy in this type of cancer.
Experimental design
This study was carried out according to the ARRIVE guidelines 2.0 (2020). Prospective data collection was conducted between September 2019 and June 2021. Sixty-six dogs of different breeds and ages (9.45±2.58 years) from the hospital routine that presented cutaneous or subcutaneous neoplasms were enrolled in the study. All patients were previously evaluated by the Veterinary Oncology sector.
Ultrasound evaluation
Trichotomy of the tumor region was performed, including up to two centimeters of the peritumoral region. To maintain patient comfort, examinations were performed without sedation or anesthesia, with patients positioned in decubitus according to the anatomical location of the neoplasms. ACUSSON S2000™ equipment (Siemens®, Munich, Germany) was used for all techniques, with a linear transducer and a frequency ranging from 8 to 10 MHz. An ultrasonographic conductive gel was used throughout the examination.
B-mode ultrasound
The transducer was positioned over the central superficial region of the neoplasms, adjusting the focus, gain, and depth as needed. After adjusting the device, the nodules and masses were measured in longitudinal (length and height) and transversal (width) sections. The characteristics evaluated were echogenicity (hypoechogenic or hyperechogenic), echotexture (homogeneous or heterogeneous), echotexture pattern (coarse or smooth), invasiveness of adjacent tissues (presence or absence), capsule (presence or absence), cavitary areas (presence or absence), and hyperechogenic spots (presence or absence).
Doppler
The color Doppler function was activated to identify neovascularization, and the pulse repetition frequency (PRF) was adjusted to 977 Hz.
When necessary, changes were made to the pre-established PRF. Tumor neovascularization was characterized according to its intensity (absent, mild, moderate, or intense), location (central, peripheral, or diffuse), and pattern (perinodular, mosaic, or network).
The pulsed wave Doppler was activated and used only for those neoplasms that presented vascularization at color Doppler examination. At this stage, the PRF used in the qualitative assessment was maintained, and the caliper was adjusted to cover 2/3 of the vessel's caliber, and using an angulation towards the vessel when necessary, respecting the limit of 60º degrees. At least three spectral traces were obtained [14] to get the peak values of systolic velocity (m/s), diastolic velocity (m/s), resistivity index (RI), and pulsatility index (PI).
ARFI Elastography
The elastographic evaluation was performed using the VTIQ method (virtual touch tissue imaging quantification, a 2D-SWE technique). Color elastograms were produced for the qualitative study, in which blue represented more elastic areas, green and yellow intermediate stiffness, and red more rigid areas. Based on the color pattern, neoplasms were classified according to their deformability (deformable or non-deformable). The same elastograms were used for quantitative analysis, in which at least three areas of interest were selected. These areas were randomly chosen to obtain the mean shear wave velocity (SWV, m/s), quantified by the VTIQ software, using total stiffness as the representative value [14].
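The quantitative readout described above reduces to averaging the SWV over at least three regions of interest and comparing the mean with the 3.52 m/s cut-off this study reports. A sketch with hypothetical ROI values:

```python
# Hypothetical shear wave velocities (m/s) from three sampled ROIs.
roi_swv = [3.9, 4.1, 3.7]

mean_swv = sum(roi_swv) / len(roi_swv)
stiff_malignancy_suspect = mean_swv > 3.52   # the study's SWV cut-off
print(round(mean_swv, 2), stiff_malignancy_suspect)
```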
Histopathological evaluation
After the ultrasound examinations, clinical care was continued, and biopsies (incisional or excisional) were performed to obtain the definitive diagnosis. Patients were individually anesthetized, and surgical protocols were defined under the recommendation of the responsible veterinarian. The tumor samples were fixed in 10% formalin and sent to the veterinary pathology laboratory within the university, where histological cuts were performed to make slides stained with hematoxylin and eosin and, in cases of mast cell tumors, with toluidine blue.
After histopathological diagnosis, neoplasms were classified as benign or malignant, as established by the World Health Organization (WHO).
Statistical analysis
All data were analyzed using the SPSS Statistics 20 package (IBM®, New York, United States), and a significance level of 95% was used for all tests (P < 0.05). Echogenicity, echotexture, texture pattern, invasiveness, capsule, hyperechogenic spots, cavitary areas, and deformability were associated with malignancy using the Chi-square test, and sensitivity, specificity, accuracy, and positive (PPV) and negative (NPV) predictive values were calculated. Logistic regression was performed to differentiate malignancy according to the intensity, location, and pattern of vascularization. The other characteristics were submitted to the Kolmogorov-Smirnov normality test. The Mann-Whitney test was performed to analyze length, width, height, systolic peak, diastolic velocity, and pulsatility index, while for the resistivity index and SWV a t-test for independent samples was performed. A ROC curve was used to obtain the cut-off point, sensitivity, specificity, and area under the curve for significant results.
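The sensitivity, specificity, accuracy, PPV, and NPV for each dichotomous feature come straight from the 2x2 table of feature presence against histopathological malignancy; a sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic performance of a binary feature against the histological
    gold standard: tp/fn split the malignant tumors, fp/tn the benign ones."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

For example, `diagnostic_metrics(40, 10, 5, 45)` yields accuracy 0.85, PPV 0.80, and NPV 0.90.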
Afterward, the variables with significant results were selected, and a descriptive analysis of the association between the different ultrasound techniques was performed. Furthermore, they were grouped into four groups: 1) presence of at least four predictive malignancy characteristics; 2) at least five characteristics; 3) at least six characteristics; 4) seven characteristics. The chi-square test then verified the association with malignancy, and the values of sensitivity, specificity, accuracy, PPV, and NPV were calculated. Additionally, a descriptive analysis was performed and expressed as percentages of the qualitative ultrasonographic characteristics and the mean and standard deviation of the quantitative characteristics for each tumor type included in this study, except for single cases.
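The four cumulative groups can be reproduced by counting how many of the seven significant features each tumor shows; a hypothetical sketch (the feature keys are placeholders, not the study's variable names):

```python
# Thresholds: group 1 = at least 4 features, ..., group 4 = all 7.
GROUP_THRESHOLDS = {1: 4, 2: 5, 3: 6, 4: 7}

# Placeholder names for the seven malignancy-predictive characteristics.
FEATURES = [f"feature_{i}" for i in range(1, 8)]

def groups_for(tumor, features=FEATURES, thresholds=GROUP_THRESHOLDS):
    """Return the (overlapping) groups a tumor belongs to, given a dict
    mapping feature name -> presence (truthy/falsy)."""
    n = sum(bool(tumor.get(f)) for f in features)
    return [g for g, t in thresholds.items() if n >= t]
```

A tumor with five of the seven features falls into groups 1 and 2, so it counts toward both of those chi-square tables.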
Declarations
Ethics approval and consent to participate
This study was approved by the Animal Care and Use Committee of Universidade Estadual Paulista "Júlio de Mesquita Filho", Jaboticabal, Brazil (Protocol 010047/19), and the owners formally agreed, by signing a term of responsibility, to enroll their animals in this study.
Supplementary Files
This is a list of supplementary files associated with this preprint: data.xlsx
SILC for SILC: Single Institution Learning Curve for Single-Incision Laparoscopic Cholecystectomy
Objectives. We report the single-incision laparoscopic cholecystectomy (SILC) learning experience of 2 hepatobiliary surgeons and the factors that could influence the learning curve of SILC. Methods. Patients who underwent SILC by Surgeons A and B were studied retrospectively. Operating time, conversion rate, reason for conversion, identity of first assistants, and their experience with previous laparoscopic cholecystectomy (LC) were analysed. CUSUM analysis was used to identify the learning curve. Results. One hundred and nineteen SILC cases were performed in total (100 and 19 by Surgeons A and B, respectively). Eight cases required an additional port. In CUSUM analysis, most conversions occurred during the first 19 cases. Operating time was significantly lower (62.5 versus 90.6 min, P = 0.04) after the learning curve had been overcome. Operating time decreased as experience increased, especially for Surgeon B. Most conversions were due to adhesions at Calot's triangle. Acute cholecystitis, patients' BMI, and previous surgery did not seem to influence the conversion rate. Mean operating times of cases assisted by a first assistant with and without LC experience were 48 and 74 minutes, respectively (P = 0.004). Conclusion. Nineteen cases are needed to overcome the learning curve of SILC. Team work, an assistant with CLC experience, and appropriate equipment and technique are important factors in performing SILC.
Introduction
Single-incision laparoscopic cholecystectomy (SILC) has been increasingly performed for benign gallbladder disease over the last few years, with operative results comparable to conventional 4-port laparoscopic cholecystectomy (CLC). With results from randomized controlled trials (RCTs) [1][2][3][4][5] and a series of publications [6][7][8][9] showing that SILC is equally safe, with no obvious additional scar and potentially less postoperative pain and an earlier return to daily activity [5], more surgeons are embarking on learning the technique.
As SILC is a new approach to gallbladder disease, many aspects of this new technique have not been studied in detail. Most surgeons embarking on this technique are concerned with its learning curve, conversions, and potentially longer operating time. To date, very limited work has been done to look into this important issue, and few publications have examined the learning curve of SILC from a conversion point of view.
To perform SILC safely and successfully, there may be changes in surgical technique, need of new equipment, and modifications in the role of assistant.
In this study, we report an SILC learning experience of a tertiary university hospital with advanced laparoscopic facility. Operating time, potential problems, and ways to overcome them as well as surgical technique were included in this report. Our paper aims at facilitating and smoothening the learning curve of surgeons especially those who are starting to perform SILC or those facing difficulty in performing SILC.
Methods
All patients who underwent SILC from April 2009 to August 2011 (28 months) by two HPB attending surgeons (Surgeons A and B), both of whom had been at attending grade for more than 7 years and routinely performed laparoscopic cholecystectomy for all benign gallbladder disease in a tertiary university hospital, were studied retrospectively. The unit performs about 400 laparoscopic cholecystectomies per year.
Operating time, conversion rate, and reason for conversion of individual surgeons were recorded. Conversion is defined as adding additional port(s) at other parts of the abdomen or performing a minilaparotomy. The identities of first assistants were collected and analysed. Risk factors for conversion such as patient's BMI, presence of acute cholecystitis, and previous abdominal surgery were recorded and compared.
Cumulative sum (CUSUM) analysis was used to identify the SILC learning curve of Surgeon A, with the standard conversion rate defined as 5%. The t-test was used to compare continuous variables, and P < 0.05 was defined as statistically significant. SPSS Statistics version 17.0 was used to analyse the data.
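A CUSUM curve for a binary failure series such as conversions can be sketched as a running sum of (outcome minus acceptable rate): each conversion raises the curve by 1 - p0 and each completed SILC lowers it by p0, so a plateau indicates performance at or better than the 5% target. This is a generic sketch, not the authors' exact parameterization:

```python
def cusum(outcomes, p0=0.05):
    """CUSUM learning curve for a failure series.

    outcomes -- 1 for a converted case, 0 for a completed SILC
    p0       -- acceptable conversion rate (5% here)
    Returns the cumulative sum after each case; a sustained upward slope
    signals a conversion rate above p0.
    """
    total, curve = 0.0, []
    for x in outcomes:
        total += x - p0
        curve.append(round(total, 4))
    return curve
```

An early conversion followed by three successes yields [0.95, 0.9, 0.85, 0.8]: the curve drifts back down as performance stabilizes.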
Operating time of all CLC done by Surgeon A at the same period of time was collected to establish the baseline operating time for comparison with SILC operating time of Surgeons A and B.
SILC Surgical Methods.
All procedures were performed under general anaesthesia. The patients were placed in the supine or split-leg (French) position depending on the availability of different operating tables. Marcaine 0.25% is infiltrated around the umbilicus, then a 1.5 cm vertical incision is made in the umbilicus, and a SILS port (Covidien, Dublin, Ireland) is inserted. A 5 mm 30° Endo-EYE surgical videoscope (Olympus, Tokyo, Japan) is used for visualization of the entire operation. A Prolene suture with a straight needle is introduced percutaneously at the right hypochondrium and made to pierce the gallbladder at the seromuscular plane before exiting the peritoneal cavity at the right hypochondrium (Figure 1); care is taken not to pierce through the mucosa to prevent bile spillage. This serves as a retraction suture to facilitate the exposure of the Calot's triangle and subsequent dissection.
An articulating endoforcep, Roticulator (Covidien, Dublin, Ireland), is introduced to provide lateral retraction of the gallbladder, and careful dissection to achieve critical view of safety is then completed (Figure 2).
Both the surgeon and the assistant will be on the patient's left if the patient is on supine position, whereas the operating surgeon will be standing between patient's legs and the assistant will be on the patient's left side if the patient is on split-leg position. The assistant would sit in front of the surgeon. In most parts of the surgery, he will be providing gentle lateral traction of the gallbladder by manipulating the Roticulator while the primary surgeon holds the EndoEYE and the dissecting instruments in the "snooker cue guide" position ( Figure 3). This position allows the camera and the dissecting instrument to move in a coordinated fashion to ensure optimal visualization of the dissecting process which is critical in safely exposing the Calot's triangle to identify the cystic artery and duct. Five mm Hem-o-lock (Teleflex Medical, USA) clips are used to ligate both cystic artery and duct before they are divided between clips. Gallbladder is then placed into a self-constructed bag intracorporeally and removed from the abdominal cavity; fascia is closed with nonabsorbable suture in figure-of-eight fashion, and skin is closed subcuticularly.
Results
One hundred and nineteen patients who underwent SILC for their gallbladder diseases between April 2009 and August 2011 by 2 HPB consultants (Surgeons A and B) were retrospectively studied, with 100 and 19 cases performed by Surgeons A and B, respectively. 7 (5.8%) cases were acute cholecystitis and 112 cases (94.1%) were chronic cholecystitis. Diagnosis of gallbladder disease was achieved by clinical information and pre-op radiological investigations (ultrasound scan or CT scan). There were 8 cases (6.7%) that needed extra working port(s) to complete the procedure; no open conversion was needed in our experience.
Learning Curve of SILC.
We defined an acceptable conversion rate of SILC as 5% after the learning curve is overcome, as this is traditionally considered an acceptable conversion rate in CLC. Surgeons A and B had 6 (6%) and 2 (10.9%) conversions, respectively. Figure 4 shows the CUSUM analysis of the learning curve of Surgeon A; the vertical line at the 19th case indicates the predicted minimal number of cases required to overcome the SILC learning curve. Surgeon B was excluded from CUSUM analysis in this study due to the limited number of cases performed.
Most conversions of Surgeon A happened within the first 19 cases, and subsequently his learning curve reached a plateau, except for two conversions in the 32nd and 67th cases. Surgeon B had two conversions, in his 1st and 4th cases. Most conversions were due to dense adhesion at the Calot's triangle, where vital anatomical structures could not be visualized clearly. One (5%) patient with previous abdominal surgery required conversion, and one (5%) patient with active acute cholecystitis required conversion. Table 1 shows the operative and patient profile of the first 19 cases of Surgeons A and B. Table 2 shows the profile of cases that required conversion in the first 19 cases. When comparing cases which required conversion and cases which did not, there was no significant difference between patients (1) with previous or ongoing acute cholecystitis, (2) with previous abdominal surgery, and (3) in mean BMI. Table 3 demonstrates the comparison of potential risk factors between cases with and without conversion.
Operating Time.
Surgeon A's mean operating time was significantly lower (62.5 minutes versus 90.6 minutes, P = 0.04) after he had overcome the learning curve. Conversion rates were lower as well (2.5% versus 21%, P = 0.36). Mean operating times, conversion rate, and patients' profiles of Surgeon A before and after the first 19 cases are shown in Table 4. Figure 6 demonstrates the trend lines of operating time of Surgeons A (dashed line) and B (dotted line). We found that the trend line of operating time of Surgeon B is steeper than that of Surgeon A, which suggests that guidance from another surgeon who is experienced in SILC can speed up the learning curve. Surgeon A's SILC operating time trend line crosses his CLC operating time trend line (straight line) at the 82nd case, which suggests that SILC operating time may eventually become faster than CLC as experience increases further.
We compared the 2 HPB fellows who assisted in most of Surgeon A's SILC cases in our institution, one with previous CLC experience and the other without. We found that the mean operating time of cases assisted by the assistant with CLC experience was significantly shorter than that of cases assisted by the assistant without previous CLC experience (48 versus 74 minutes, P = 0.004). The mean operating times of cases assisted by the 2 assistants and their trends are shown in Table 5 and Figure 7, respectively.
Operating Time and Conversion.
Our study demonstrated that the operating time of SILC was more than 90 minutes at the beginning for both surgeons. Surgeon A was able to achieve a mean operating time of below 60 minutes after about 50 cases of SILC. Other publications [11][12][13] that looked into SILC operative time and learning curve reported a mean operative time between 46.9 minutes and 80 minutes. Hernandez et al. [11] found that mean operative time was reduced significantly after 75 cases of SILC and was not significantly longer than the mean operative time of CLC. Our institution's data showed similar findings. Qiu et al. [12] reported a much shorter mean operative time of 46.9 minutes with no conversion in their highly selected 80 patients, all of whom had minimal signs of gallbladder inflammation and no surgical history of the right upper quadrant of the abdomen. They were able to perform SILC with a mean operative time of below 40 minutes after 40 cases. Joseph et al. [14] concluded that surgical trainees who were proficient in CLC had a significant reduction in operative time along their learning curve. Recently published RCTs [1][2][3][4][5] reported mean operative times between 46 minutes and 88 minutes, with 3 studies [1,2,4] showing significantly longer operative time for SILC; however, these RCTs did not specify the surgeons' previous CLC and SILC experience, and none of them included patients with acute cholecystitis. There were 8 (6.7%) cases in our study which required additional port(s) to aid dissection of the Calot's triangle due to dense adhesion in the area; no open conversion or laparotomy was needed. Four (80%) out of the 5 conversions of Surgeon A happened before his first 20 cases. Surgeon B had two conversions, at his 1st and 7th case. The systematic review published by Antoniou et al. [10] reported a conversion (additional ports required) rate of 9.3% and an open conversion rate of 0.4%.
The most commonly reported reason for conversion was obscured anatomy of the Calot's triangle due to adhesions and acute or chronic inflammation (71.1%). Seven out of 8 (87.5%) of our conversions were due to severe adhesions at the Calot's triangle as well. In conclusion, our study was found to have very similar rates and reasons for conversion to Antoniou's study [10].
One of our conversions was associated with previous abdominal surgery. However, the reason for inserting an additional port was to place a clip at a leaking cystic duct. Hence, we do not think that the previous abdominal surgery has any significance on this conversion. In another conversion which was associated with an on-going acute cholecystitis, two additional ports were added to provide retraction for adequate visualization as well as to secure haemostasis from the liver bed. We performed SILC on 4 other cases of acute cholecystitis with no significant issues.
In our center, Surgeon A was the first HPB surgeon who adopted SILC into his routine treatment option for gallbladder diseases, followed by Surgeon B. From our CUSUM analysis, Surgeon B had less conversion in the early stages of his SILC learning curve in comparison to Surgeon A. Hence, we deduced that during the process of pioneering this new surgical technique in our center, Surgeon A inevitably had more conversions than other surgeons in the center before his learning curve was overcome.
Once the expertise is shared among other surgeons, we would expect fewer conversions and a smoother learning curve in the subsequent cases. This phenomenon was demonstrated in the steeper trend line of operating time of Surgeon B, after Surgeon A had overcome his learning curve of SILC. With fewer skin incisions in SILC and hence less closure time, we believe the operating time could eventually be faster than CLC as experience increases, as shown in our results.
Analyzing the CUSUM, significantly fewer conversions were experienced after the 19th case; we therefore conclude that surgeons who routinely perform CLC for gallbladder diseases need about 19 cases to overcome the SILC learning curve.
Assistant Factor.
In the beginning phase of adopting a new surgical technique or equipment in our center, we found that there are always benefits if the same group of surgeons and nurses can provide feedback among themselves to hasten the learning process.
We compared the operating times with 2 HPB fellows as assistants; one routinely performs CLC in her practice and one was new to CLC; both were new to SILC. We found that the mean operating time was significantly shorter in cases assisted by the fellow who was familiar with CLC. SILC is a procedure that requires advanced laparoscopic skills. In addition, the surgeon and his/her assistant must be able to work closely with each other in a more limited space without colliding their instruments against each other. Moreover, having CLC experience prior to assisting SILC is an invaluable advantage. Qiu et al. [12] and Solomon et al. [13] both had similar learning experiences, and hence they encouraged surgeons to work with a skilled assistant and to obtain preceptorship in order to overcome one's SILC learning curve.
We also encouraged other surgeons to record a video of all their SILC cases, and subsequently watch the video together with their assistant, with the aim of identifying weaknesses and mistakes and avoid them in subsequent cases.
Technique and Equipment Issues.
In SILC, all surgical equipment is introduced through the umbilical port site. Manipulation of the instruments intra- and extracorporeally is thus very challenging due to the limited working space and the loss of traditional laparoscopic triangulation. We started our SILC practice with the SILS port for intraperitoneal access; it accommodates all working instruments, the insufflation channel, and the camera port, and is inserted through a single fascial defect. This port does increase the cost of surgery; however, in our experience, it caused no significant surgical or technical problems, and we continued to improve our operating time and conversion rate with its help; it therefore remains our port of choice for intraperitoneal access.
In order to overcome the loss of laparoscopic triangulation, we utilized the Roticulator forceps, which is held by the first assistant, who sits at the right side of the surgeon. The forceps provide lateral retraction of the gallbladder to facilitate the dissection of Calot's triangle. We realized that with SILS surgery, especially for someone who has just started performing it, the loss of conventional triangulation in manipulating the instruments and the loss of working space can be frustrating to the surgeon and dangerous to the patient; we recommend that surgeons who are new to SILC use articulating or pre-bent instruments to facilitate the surgery in their first few cases, and with increased experience in SILC, they can choose to continue using these instruments or switch to conventional laparoscopic instruments. Again, these articulating or pre-bent instruments add extra cost to the patients; however, in view of the advantages mentioned above, we believe they have an important role in SILC, especially for surgeons who are new to it.
The other equipment which we found to be of value is the Olympus Endoeye, which is a very compact and highly manipulable laparoscopic camera that provides adequate visualization for the scope of SILC surgery without occupying much space.
We routinely used an extracorporeal hanging suture to enhance visualization during SILC. In this way, 2 instruments can actively be used in performing the surgery. We manipulate the straight needle laparoscopically and pierce the thickest part of the gallbladder at the seromuscular layer to prevent bile spillage; so far, there have been no issues in any of the cases we performed in this series. In addition, the hanging suture has been shown to reduce complication rates in comparison with instrumental anchorage [10] (3.3% versus 13.3%, P < 0.0001).
Port site hernia has been a concern in SILC due to the bigger umbilical fascial defect compared to CLC; a 52-patient retrospective study [15] reported a SILC port site hernia rate of 5.8%. Multiple up-to-date meta-analyses [16][17][18] have not shown a significant increase in port site hernia so far; the majority of the RCTs performed to date utilized a commercialized umbilical access port, and these studies are limited by their short follow-up periods. Goel and Lomanto [19] concluded in their review that port site hernia in single-incision laparoscopic surgery can be minimized with good suture closure of the fascial defect. We close all umbilical fascial defects with 1 or 2 figure-of-eight sutures; no umbilical hernia was detected in this series of patients during follow-up.
Patient Selection.
Patients with risk factors such as previous abdominal surgery, a history of acute cholecystitis or ongoing cholecystitis, and obesity were thought to have a higher chance of conversion in SILC [10]. However, in our experience, our patients who needed conversion did not evidently present with the above risk factors. In fact, the most common reason for conversion was dense adhesions and failure to identify vital structures due to poor visualization. Patients with the above risk factors have been shown to have increased operative times [12]; therefore, we suggest selecting patients sensibly at the early stage of performing SILC. Once our learning curve had been overcome, we were able to perform SILC for the majority of gallbladder conditions in the general patient population with a minimal conversion rate.
Conclusion
Single-incision laparoscopic cholecystectomy is a safe and feasible procedure. Nineteen cases were needed to overcome the learning curve in our experience. A conversion rate and operating time comparable with conventional laparoscopic cholecystectomy were observed after the learning curve had been overcome. Team work, careful patient selection, an assistant with conventional laparoscopic cholecystectomy experience, and appropriate equipment and technique are important factors at the beginning stage of performing SILC.
Impacts of public health and social measures on COVID-19 in Europe: a review and modified Delphi technique
Introduction The emergence of the COVID-19 pandemic in early 2020 led countries to implement a set of public health and social measures (PHSMs) attempting to contain the spread of the SARS-CoV-2 virus. This study aims to review the existing literature regarding key results of the PHSMs that were implemented, and to identify the PHSMs considered to have most impacted the epidemiological curve of COVID-19 over the last years during different stages of the pandemic. Methods The PHSMs under study were selected from the Oxford COVID-19 Government Response Tracker (OxCGRT), supplemented by topics presented during the Rapid Exchange Forum (REF) meetings in the scope of the Population Health Information Research Infrastructure (PHIRI) project (H2020). The evidence-based review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to identify which reviews had already been published about each PHSM and their results. In addition, two modified Delphi panel surveys were conducted among subject matter experts from 30 European countries to uphold the results found. Results There were 3,212 studies retrieved from PubMed, 162 full texts assessed for eligibility, and 35 included in this PHSM summary. The measures with the clearest evidence of positive impact from the evidence-based review include social distancing, hygiene measures, mask measures, and testing policies. From the modified Delphi panel, the PHSMs considered most significant in the four periods analyzed were case isolation at home, face coverings, testing policy, and social distancing, respectively. Discussion The evidence found has significant implications for both researchers and policymakers. The study of PHSMs' impact on COVID-19 illustrates lessons learned for future pandemics and epidemics, serving as a contribution to the health systems resilience discussion.
These lessons, drawn from both the available scientific evidence and the perspectives of relevant subject matter experts, should also be considered in educational and preparedness programs and activities in the public health space.
Introduction
Since the emergence of the COVID-19 pandemic in early 2020, countries all over the world have selected and implemented several Public Health and Social Measures (PHSMs) in the process of trying to contain the spread of the SARS-CoV-2 virus. These clusters of PHSMs have significantly impacted the population, and their application has been questioned in the political, social, and economic dimensions.
In the present paper, instead of non-pharmaceutical interventions (NPIs), the concept of PHSMs is used as suggested by the World Health Organization (WHO), due to its clear and inclusive characteristics that describe public health and social interventions as "measures or actions by individuals, institutions, communities, local and national governments and international bodies to slow or stop the spread of an infectious disease, such as COVID-19" (1).
Many health-related policy measures that were applied to different degrees across the world, in combination or individually, can be considered PHSMs. These are usually a set of public health and social tools that have proved effective in limiting the spread and reducing the incidence and prevalence of infections during previous epi- or pandemic outbreaks, such as influenza A H1N1 (2).
During the early stages of the SARS-CoV-2 pandemic in March-May 2020, health systems preparedness, resilience, and capacity response in terms of allocating healthcare workers to combat shortages were considered foremost priorities and received a considerable amount of political attention (3). Still, after 3 years, these issues appear not to have found reflection in mid- to long-term policies, making it imperative to question the actual preparedness of health systems for future crisis events. Several stakeholders have developed interactive maps, dashboards, and catalogs summarizing PHSMs applied per country over time (4, 5). International agencies and universities, specifically the European Centre for Disease Prevention and Control (ECDC) and the University of Oxford, have created interactive maps displaying the epidemiological evolution of the pandemic and the PHSMs applied by countries with the aim of informing the public. Specific platforms were also built to communicate the changeable status of specific PHSMs. As an example, Re-Open EU was launched to provide up-to-date information on applicable travel and health measures in the European Union (EU) and the Schengen Associated countries (6).
At the same time, collaborative European projects conducted by Member State institutions and supported by EU programs have arisen to address the need for a distributed research infrastructure on population health information, as well as the need for rapid ad-hoc exchanges of information on health research and policy between countries during the pandemic. The Population Health Information Research Infrastructure (PHIRI) project maintains a continuously updated database of currently applicable PHSMs pertaining to COVID-19 in the participating countries, as well as of relevant research infrastructures, national health information sources, and training resources via the public Health Information Portal (7). PHIRI also conducts bi-weekly Rapid Exchange Forum (REF) gatherings between national project members and expert stakeholders from supra-national institutions to discuss specific urgent topics (8). Questions and topics discussed during each REF meeting are based on requests from countries' public health institutions, reflecting their most pressing needs for ad-hoc pan-European information exchange on national population health developments, policies, and research. To date, the REF meetings have addressed such questions particularly in the scope of COVID-19 and its impact on different dimensions of population health and health systems.
Strengthening and improving the resilience of health services requires proper and focused policymaking. Therefore, it is essential to understand how health-related policies and measures helped to contain the spread of the SARS-CoV-2 virus and which combinations of measures may have had the highest positive impact.
This paper aims to review the existing literature regarding the key effects of selected PHSMs, substantiate the findings through surveys with subject matter experts, and identify the PHSMs considered to have most impacted the epidemiological curve of COVID-19 over the last years during four different periods of the pandemic.
Methods
To continuously map COVID-19 health-related policies, indicators and impact measures considered to influence the pandemic's epidemiological curve were identified. The analysis of those indicators, selected on the basis of relevant scientific literature and countries' requests via the PHIRI REF mechanism, was organized by PHSM (indicator) in an evidence-based review.
Selection of indicators
Most of the indicators were selected from the Oxford COVID-19 Government Response Tracker (OxCGRT), which was developed by the Blavatnik School of Government, University of Oxford. The OxCGRT collected publicly available information on 24 indicators of government response to the pandemic, three of which were retired before the end of the Tracker's active collection and publication period. The indicators included containment and closure policies such as school closures and restrictions on movement, economic policies, and health system policies such as the COVID-19 testing regime, emergency investments into healthcare, and vaccination policies, among others. The OxCGRT collected and published real-time updates on different policy responses from 1st January 2020 to 31st December 2022, covering more than 180 countries, coded into multiple indicators (9, 10). For this study, 13 indicators were selected from among the OxCGRT indicators, particularly its categories C (containment and closure policies) and H (health systems policies) (11). Two of the selected indicators were modified based on topics presented in and priorities identified during the REF meetings in the scope of the PHIRI project (i.e., closure of kindergartens was added to the school closures indicator; aspects of several indicators within the OxCGRT category V, vaccination policies, were collated under one indicator).
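Given an OxCGRT-style export, the C and H indicator columns can be pulled out by their code prefix; a sketch assuming OxCGRT-style column names such as "C1M_School closing" and "H2_Testing policy" (illustrative, not the project's actual pipeline):

```python
def select_indicators(columns, prefixes=("C", "H")):
    """Keep columns whose code (the part before the first underscore)
    starts with one of the given category letters followed by a digit,
    e.g. 'C1M_School closing' or 'H2_Testing policy'."""
    keep = []
    for col in columns:
        code = col.split("_", 1)[0]
        if code[:1] in prefixes and len(code) > 1 and code[1].isdigit():
            keep.append(col)
    return keep
```

Non-indicator columns such as country names, and indicators from other categories (e.g. E for economic policies), are filtered out by the digit check and the prefix list.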
Review of evidence -review of reviews
This review of reviews aims to provide a summary of different reviews on PHSMs to answer our research question: "Which PHSMs can be considered to have most impacted the epidemiological curve of COVID-19 from 2020 to 2022?" This type of review was introduced in healthcare due to the high number of health interventions and clinical studies that exist, and the Cochrane group has described it as an Overview of Reviews (47). Reviews of reviews have a similar structure to systematic reviews but include reviews rather than primary studies. Therefore, we have used the PRISMA statement guidelines to report the review (48).
Eligibility criteria
Studies had to meet the following criteria: any type of review presenting structured methods and evidence of the impact of the selected PHSMs on reducing the epidemiologic curve of COVID-19. Exclusion criteria were a focus on the impact of PHSMs on topics outside of general population health (e.g., mental health) and on diseases other than COVID-19.
Information sources and search string
We developed a search string on PubMed MEDLINE using a combination of terms relating to the pandemic, focusing on the exposure (COVID-19). This exposure string was then combined with each of the selected indicators using the Title/Abstract term in the query box. No filters were applied apart from limiting the search to the eligible study types and to studies published in the English language.
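As a sketch of how such per-indicator queries can be composed programmatically, the snippet below pairs a generic exposure clause with a few example indicator phrases. The study's exact search terms are not reproduced here, so both the exposure clause and the indicator list are illustrative assumptions.

```python
# Illustrative composition of PubMed Title/Abstract queries; the exposure
# terms and indicator phrases below are examples, not the study's string.
exposure = '("COVID-19"[Title/Abstract] OR "SARS-CoV-2"[Title/Abstract])'
indicators = ["school closure", "face covering", "contact tracing"]

# One query per selected indicator, combining exposure and indicator.
queries = [f'{exposure} AND "{ind}"[Title/Abstract]' for ind in indicators]
for q in queries:
    print(q)
```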
Selection process
All the citations retrieved from PubMed were uploaded into Covidence -Better systematic review management (49), which automatically removes duplicates and provides a PRISMA flowchart. A thorough screening of titles and abstracts followed by full-text screening was performed by four reviewers. Each methodological step was conducted by two independent researchers on Covidence to ensure the eligibility of the included studies, which were then used to produce a narrative of the key results per PHSM.
Data items
Data were extracted using a prepared and specific sheet focusing on the type of scientific paper, the PHSMs reported, the type of review (type of study), the period when the study was conducted, the types of studies included by study design, the number of studies included, the key results, and recommendations if any. The data extraction items were tested in an Excel sheet by two reviewers and then inserted into Covidence to allow double extraction. Each reviewer was blinded during the data extraction stage and a final data sheet was created highlighting any conflicts between the two extractors. The conflicts were all resolved by the same author to ensure consistency. This last data extraction sheet was downloaded, and results were summarized by cluster of PHSM.
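The data items above amount to one record per included review. The dataclass below is a hypothetical sketch of such an extraction record, with fields mirroring the items named in the text; the class and the example values are illustrative, not the study's actual extraction sheet.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    # Fields mirror the data items described in the text.
    paper_type: str               # type of scientific paper
    phsms_reported: List[str]     # PHSMs reported
    review_type: str              # type of review (type of study)
    study_period: str             # period when the study was conducted
    included_designs: List[str]   # types of included studies by design
    n_included: int               # number of studies included
    key_results: str
    recommendations: Optional[str] = None  # if any

# Illustrative example record (values are made up).
example = ExtractionRecord(
    paper_type="journal article",
    phsms_reported=["face coverings"],
    review_type="systematic review",
    study_period="2020-2021",
    included_designs=["observational"],
    n_included=12,
    key_results="reduced transmission reported",
)
```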
Study risk of bias assessment
Risk of bias in the included studies was assessed with ROBIS, a tool developed by Whiting and colleagues to assess the risk of bias in systematic reviews (50). This tool assesses four domains, comprising adequate objectives and eligibility criteria, the search and databases used, the study characteristics provided, and the synthesis of results, plus an overall risk-of-bias judgment.
Modified Delphi technique
The modified Delphi technique sought to elucidate the most significant PHSMs applied in European countries during different periods of the pandemic in 2020-2021 by studying the perception of relevant subject matter experts from the PHIRI project network. Delphi techniques are still evolving, but they can be defined as a complex method for structuring group communication processes to reach a consensus, based on collected expert judgments (51).
First and second-round surveys
A two-round survey was designed and distributed to subject matter experts from all 30 countries participating in the PHIRI project. In the scope of the survey, subject matter experts were defined as members of the PHIRI project consortium and network who had been actively engaged in COVID-19 responses throughout the pandemic in the scope of research, design of policy, or advice to policymakers. This includes experts from national authorities such as Ministries of Health, national public health institutes, or other relevant government agencies and research institutions. In the first stage of the survey, the subject matter experts were asked to select, by way of multiple-choice questions from a list of initially 19 included measures, the three PHSMs which they considered had most impacted the epidemiological curve of COVID-19 during each period of the pandemic (March-May 2020, September 2020-February 2021, March-May 2021 and October-December 2021). After receiving the responses from the first round, an analysis was performed. All the PHSMs that were selected among the most impactful measures by the majority of respondents of the first survey were taken forward for inclusion in the second round. The second survey was created from the resulting list of PHSMs per surveyed period, and again distributed to all members of the PHIRI project network. In this second survey, subject matter experts from each PHIRI member country were asked to rank the remaining PHSMs in order of their relative importance to decreasing or controlling the epidemiological curve of COVID-19 in each period of the pandemic, to arrive at a ranking of the PHSMs deemed most impactful overall in each period by participants of the Delphi panel.
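The two rounds amount to a vote-aggregation procedure: count multiple-choice picks, shortlist majority picks, then order the shortlist by the second-round rankings. Since the paper does not publish its exact tallying rule, the sketch below (with made-up responses and average-rank ordering) is an assumed, illustrative reading of the process, not the study's method.

```python
from collections import Counter

# Round 1: each expert picks up to three PHSMs per period (multiple choice).
round1_picks = [
    ["stay-at-home campaign", "case isolation at home", "face coverings"],
    ["stay-at-home campaign", "testing policy", "face coverings"],
    ["case isolation at home", "stay-at-home campaign", "social distancing"],
]
tally = Counter(p for picks in round1_picks for p in picks)

# Measures chosen by a majority of respondents go forward to round 2.
majority = len(round1_picks) / 2
shortlist = [m for m, n in tally.items() if n > majority]

# Round 2: experts rank the shortlisted PHSMs; lower average rank = more impactful.
round2_ranks = {
    "stay-at-home campaign": [1, 2, 1],
    "case isolation at home": [2, 1, 3],
    "face coverings": [3, 3, 2],
}
avg_rank = {m: sum(r) / len(r) for m, r in round2_ranks.items()}
final_order = sorted(avg_rank, key=avg_rank.get)
```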
Evidence-based review
The search performed identified 3,212 citations. After duplicate removal, 3,050 were considered ineligible at the title and abstract screening stage, 162 were screened at full-text and 35 were included (Figure 1).
Of the 35 studies included, 16 were systematic reviews, seven were systematic reviews and meta-analyses, eight were rapid reviews, three were scoping reviews, and one was an evidence-based review. The number of papers included in these studies varied from 9 to 90 and all reviews addressed multiple countries. Most of the studies, 17 reviews, examined more than one PHSM in the same paper. The most studied PHSM was social distancing (Table 2).
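As a quick arithmetic check on the reported screening flow and study-type counts (assuming the 3,212 figure refers to citations remaining after deduplication):

```python
# Screening flow numbers as reported in the text (Figure 1).
identified = 3212               # citations retrieved
excluded_title_abstract = 3050  # ineligible at title/abstract screening
full_text_screened = identified - excluded_title_abstract
included = 35
full_text_excluded = full_text_screened - included

# Study types of the 35 included reviews.
study_types = {
    "systematic review": 16,
    "systematic review + meta-analysis": 7,
    "rapid review": 8,
    "scoping review": 3,
    "evidence-based review": 1,
}
```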
Of the studies included, 23 (64%) had performed risk of bias assessment and another 23 (64%) had performed quality of evidence assessment of the studies included in their analysis (either quantitative or qualitative). From the risk of bias assessment performed, we assessed 33 studies as overall low risk of bias and two studies as unclear risk. The domains that raised more concerns were related to the search and the selection of databases ("Did the search include an appropriate range of databases for published and unpublished studies? Were methods additional to database search used to identify relevant reports?") and to sufficient study characteristics being available ("Were sufficient study characteristics available for both review authors and readers to be able to interpret the results?"). Please see the Supplementary material for a detailed risk of bias assessment.
The results are organized by PHSM, providing a summary of evidence on each measure. Table 1 shows an overview of the studies included for each PHSM.
Access measures
Thirteen studies assessed the impact of school and kindergarten closure on the transmission and incidence of SARS-CoV-2 (12-24). Six studies recommend the implementation of this PHSM with caution (12, 16, 20, 22-24) and four showed inconclusive results, because the studies evaluated a set of combined measures (e.g., social distancing) and not this measure alone (13, 17, 19, 21). Nevertheless, eight studies showed evidence of a positive impact on the epidemiological curve when the measure was implemented in periods of low incidence of SARS-CoV-2. One study assessed the impact of workplace closures, whereby it was possible to conclude that implementing this measure had a moderate impact on reducing transmission (21). As for the closure of non-essential shops, gastronomy and cultural events, two studies reported a significant reduction (between 12 and 29%) in SARS-CoV-2 transmission associated with this measure (21, 24). In the studies reviewed, there was no assessment of the impact of canceling public events on reducing the epidemiological curve of COVID-19.
Distancing measures
The most frequently implemented measures during public gatherings were providing hand disinfectant, wearing face masks, ensuring adequate ventilation, symptom screening (i.e., temperature, symptom, travel, or close contact screening) and contact tracing. One study presented evidence that implementing a range of PHSMs can reduce the risk of SARS-CoV-2 transmission at mass gatherings (25); however, this risk is unlikely to be eliminated. All studies adopted a layered mitigation approach involving multiple public health measures; therefore, the effectiveness of any single measure under the umbrella of restrictions on public gatherings could not be determined. Some studies considered physical distancing, quarantine and contact tracing as part of the results analysis of social distancing (24, 26). For hygiene measures, the interventions studied fell into three broad categories: the main use of hand sanitizers, the use of soap, and those that provided education on hygiene practices only (24, 31). Further recommendations to be applied in specific contexts, such as using gloves, gowns, and eye protection, saline nasal washing and gargling, were also identified. From the included studies, we can infer that the implementation of hygiene measures had an impact on reducing the incidence of COVID-19. In addition, one study evaluated surface disinfection with chlorine or ethanol-based disinfectant, concluding its effectiveness in reducing SARS-CoV-2 transmission (24). There was consensus among 10 studies that using face coverings reduces the risk of transmission, incidence, mortality, and hospitalization for COVID-19 (22, 24, 26, 31-34). The impact of not using a mask, using a face mask, surgical or medical masks, and N95 masks was analyzed. The latter may be associated with a greater risk reduction compared to surgical or similar masks according to the included studies, particularly when mandatory use of N95 masks is implemented (26, 33). Five included studies demonstrated that voluntary quarantine by contacts effectively suppressed transmission of COVID-19 in conjunction with contact tracing. However, one study reports that particular consideration needs to be given to providing appropriate measures for vulnerable populations, as quarantine and screening may not be sufficient to address their needs (27).
Movement restrictions
As with voluntary quarantine of close contacts, case isolation at home was shown to be effective in suppressing transmission of COVID-19 (20, 22, 23, 27, 28, 35, 36). However, the effectiveness of this measure has been questioned with increasing evidence of natural immunity and disease resistance.
The two studies that analyzed the impact of public transport closure did not show any significant difference in the progression of the epidemiological curve associated with this measure (21, 23). This may have been due to other measures already implemented during the studied time periods (resulting in low public transport congestion and face masks worn on public transport). One of the studies included an analysis of 12 papers, of which only one found an association between public transport closures and the reproduction number, growth rate, or case-related outcomes of COVID-19 (23).
The stay-at-home campaign focused on two complementary measures: lockdown and quarantine. Regarding the impacts of lockdown, all the studies that evaluated stay-at-home or isolation measures reported reductions in transmission, incidence, hospital and ICU admissions, and deaths from SARS-CoV-2. One of the studies reported that a combination of four measures, including restrictions on mass gatherings, school closures, workplace closures, and lockdowns in 32 countries, was associated with decreased incidence of COVID-19 (21). A similar decrease in incidence was observed when public transport closures were added. Quarantine was reviewed in two studies, which concluded that its implementation was associated with a decrease in the incidence of COVID-19.
The effectiveness of travelers' quarantine and the need for arrival or exit screening were examined as measures to restrict international movement. The included studies show that the effectiveness of traveler quarantine depends on compliance and increases when traveler quarantine is implemented as a mandatory measure. Four studies demonstrate quarantine's impact, especially for travelers from countries with a high prevalence of SARS-CoV-2 and in detecting new cases that were initially negative (22, 36-38). In addition, it was found that reopening borders without travelers' quarantine measures rapidly increased the number of new cases of COVID-19 (37). Studies also showed that screening travelers allowed for a delay in the next epidemic peak and a reduction in the number of cases compared to earlier peaks. However, in two separate countries, lifting travel restrictions did not increase the number of cases when accompanied by other measures of physical distancing and quarantine of travelers. Complementarity of measures, such as quarantine and screening, showed a high impact in reducing the number of cases, mainly when screening was performed before day 14 of quarantine.
The included studies on international travel control measures, such as travel restrictions and border closure measures, showed mixed results regarding the association between border closure and reduction of critical cases or overall mortality from COVID-19 (22, 28, 29, 37, 40).
On one hand, reducing transmission between countries appears to be more effective through border closures than through screening for symptoms at airports, and is particularly useful in the early phase of an outbreak before the widespread distribution of the disease (36). On the other hand, border restrictions in combination with other measures (quarantine, isolation, social distancing, closure of schools and workplaces, working from home, and restrictions on internal movement) were effective in reducing the number of cases of COVID-19, but the studies could not isolate the impact of international travel control measures. Additionally, one study presented inconclusive results in analyzing domestic and international travel restrictions (38).
Test, trace, and vaccinate
Specific to testing policies, most of the studies reviewed conclude that testing policies (along with case isolation, social distancing, and face masks) can effectively control a new outbreak of COVID-19. However, a study assessing the accuracy of screening strategies showed inconclusive results on the usefulness of combined screening, repeated symptom assessment, and rapid laboratory tests (41).
As for contact tracing, the nine studies included in this analysis showed the usefulness and benefits that digital tools -contact tracing apps (CTA) with text warning systems -could have in managing an outbreak. Despite promising results on contact tracing policies, two studies highlight that for the results to be effective, these apps need to provide faster feedback on a positive test result and notify close contacts (42, 43). Two other studies mention the need for a high app usage rate (44, 45). Another study mentions that CTA must be combined with other interventions (such as social distancing and random testing) to reduce the epidemiological curve of SARS-CoV-2.
Combining both measures is essential, as one study found that each new case requires an average of 36 individuals to be analyzed, and laboratory testing (within 2 h) can increase the efficiency of this process (35). Vaccination policy, in terms of the policies and strategies to vaccinate all or certain groups of the population, was not addressed in any eligible studies included in this review.
Communication measures
Another PHSM that we investigated was public information campaigns, but there were no available studies describing the impact of public health campaigns on the epidemiological curve of COVID-19.
Modified Delphi technique
The first round of the survey received responses from subject matter experts from Belgium, Bosnia and Herzegovina, Croatia, Estonia, Hungary, Italy, Norway, Portugal, Romania, Slovakia, Slovenia and Spain. For the period of March to May 2020, the PHSM considered most impactful among those implemented during this period was "stay-at-home campaigns," with a relative majority of 45% of survey participants choosing this measure in their responses. For the period of September 2020 to February 2021, the PHSMs deemed most impactful by respondents were "face coverings (of all types)" and "case isolation at home," each chosen by 45% of participants. For the period of March to May 2021, the PHSM deemed impactful by the highest number of respondents was "vaccination policy," chosen by 68% of participants. Finally, for the period of October to December 2021, the PHSM considered most important overall was again "vaccination policy," chosen by 68% of participants. Results are presented in Figure 2.
The second round of the survey received responses from subject matter experts from Albania, Austria, Belgium, Croatia, France, Italy, Norway, Portugal, Slovakia, Spain, and Sweden. For the period of March to May 2020, the PHSM considered most impactful was "case isolation at home," as chosen by 50% of participants. For the period of September 2020 to February 2021, respondents considered "face coverings (of all types)" the most relevant measure, with 29% of participants choosing this option. For the period of March to May 2021, the PHSM considered most impactful was "testing policy," chosen by 21% of participants, followed by a tie between social distancing and vaccination policy. Finally, for the period of October to December 2021, the PHSM considered most impactful was "social distancing," chosen by 36% of participants, followed by vaccination policy (29%). Results are presented in Figure 3.
Key findings
PHSMs are key interventions at the disposal of policymakers in the public health space to address an epi- or pandemic, to limit the spread of infectious disease, and to mitigate its impact. From this review, and in the context of COVID-19, one could consider that there are three sets of PHSMs, distinguished by the different levels of evidence currently available regarding their impact: one set of PHSMs for which there is clear evidence, one set with just moderate evidence, and another with, to date, still little evidence on their impact.
PHSMs with clear evidence of positive impact from the literature review are closure of non-essential shops, gastronomy and cultural events, hygiene measures, face coverings, voluntary quarantine by contacts, case isolation at home, stay-at-home campaigns, restrictions on internal movement, and testing policies.

PHSMs with a moderate level of evidence, often to be implemented as a combined intervention, are workplace closure, restrictions on public gatherings, social distancing, international travel control measures and contact tracing.

PHSMs with little evidence available to date, possibly requiring more studies, are closure of schools and kindergartens, cancelation of public events, public transport closure, vaccination policy strategies, and public information campaigns.
Overall, combined interventions have been shown to be effective and to have a high impact in reducing the transmissibility of the disease and mortality, and in preventing the collapse of health care services (28).
Another interesting result lies in the comparison of evidence from the rapid review with the perceptions reported by subject matter experts via the modified Delphi panel technique. During the initial period (from March to May 2020), the PHSM identified as most impactful by subject matter experts was "case isolation at home," which fits well with the evidence presented in the review and illustrates a strong focus during this period of the pandemic on measures that restricted people from their usual movements and activities.
In the next period (from September 2020 to February 2021), the PHSM identified as most impactful by the experts was "face coverings (of all types)," again aligning with the evidence drawn from the rapid review and representing a shift in focus during this period of the pandemic toward measures that would allow people to leave their homes and be somewhat active while still engaging in preventive measures.
In the following period (from March to May 2021), the PHSM considered most impactful by experts was "testing policy," followed by a tie between social distancing and vaccination policy, representing a stage of the pandemic where societies were forced to adopt, and continuously adapt, different parallel clusters of measures to keep functioning. It was also the period when COVID-19 vaccination first became available to the population.
In the last surveyed period (from October to December 2021), the PHSM identified as most impactful by experts was "social distancing," followed by vaccination policy. With most populations already vaccinated at this stage, this result seems to point toward a focus on avoiding infections among vaccinated people, as well as a continued prioritization of raising immunization rates in populations with still low rates of vaccination. Interestingly, no studies were found on vaccination policies focusing on prioritization of the population to be vaccinated (vulnerable groups vs. all, including children or not), which points to the need for further studies on vaccination policies themselves rather than only on vaccine efficacy.
The review also identified a set of PHSMs that still require more study to determine their impact during the SARS-CoV-2 pandemic and their potential utility in subsequent outbreaks. For instance, the use of contact tracing apps could represent an important tool in the future if their present limitations are addressed, and no eligible studies were identified in the scope of this paper which examined the impact of communication measures, such as public information campaigns, pointing to the need for further research. Information technologies have yet to make a difference here; this is therefore an area to which researchers and policymakers should pay more attention and in which they should consider investing more.
Lessons learned and future implications
The study of PHSMs' impact on COVID-19 provides significant lessons learned for the next expected pandemic. These lessons should be integrated into both education and preparedness programs in the public health space, as well as informing the policy decision process in acute phases of a comparable health crisis.
Besides PHIRI, several other European projects and activities have been initiated to address the impact of the COVID-19 pandemic and provide evidence-based support for future decision-making regarding the implementation of effective PHSMs across the region. A closer future collaboration between some of these projects and entities, as well as between the projects and national policymakers across Europe, could both stimulate wider scientific discussion and build additional resilience to fight future health crises armed with learnings from the previous pandemic.

When considering the policymakers' perspectives, it is clear how the evidence during the COVID-19 pandemic was used: not only taking into consideration its absolute scientific validity but also the social context, whereby the evolution of PHSMs' relevance follows the pandemic stages and shows a shifting focus on different dimensions of PHSMs over time, alongside new epidemiological developments as well as an evolving societal understanding of and amenability to different measures. Considering the available scientific evidence on PHSMs' impact, as well as the subjective perspectives of the subject matter experts who advise health policy, has allowed the present study to form a nuanced understanding of different measures' significance over time and to highlight the complex overlapping dimensions of decision-making during a pandemic, which can be taken into account both by national policymakers and by the experts engaged in counseling policy during future health crises. For this reason, analyses such as the one conducted in the scope of this paper can form a vital building block in informing health policy processes going forward.
Strengths and limitations
During the COVID-19 pandemic, thousands of scientific papers were published at a very fast pace, particularly in 2020 and 2021, making it difficult to assess and identify the key results to aid decision-making. The choice of performing a review of reviews therefore presents a strength in summarizing which PHSMs have most impacted the COVID-19 pandemic in Europe according to available evidence. The second strength to highlight is the use of the PHIRI REF network to engage subject matter experts in the modified Delphi panel, as they represent relevant expertise from many different European Member States and associated countries.
A limitation regarding the rapid review lies in some of the characteristics of the available studies that were considered for inclusion. A good proportion of the excluded studies focused exclusively on the impact of PHSMs on outcomes related to anxiety, depression, loneliness and other aspects of mental health, thereby failing to meet the eligibility criteria as defined in the Methods chapter. Many studies conducted at the beginning of the pandemic were solely based on modeling approaches, i.e., intended to support only short-term decision-making. Among observational and modeling studies, we have extracted data only from observational studies. Another limitation of the review lies in the often mutually confounding nature of the PHSMs under study, reflected in the fact that the available literature often examined a combination of measures from multiple clusters as defined in this paper. However, this circumstance simultaneously presents a strength: while it limits the ability of this study to rank individual PHSMs by the size of their impact on reducing the epidemiological curve of COVID-19 in absolute terms, it allows for observations on the complex interplay of available measures and their effects during different periods of the pandemic that hold meaningful lessons for researchers and policymakers in the context of future outbreaks.
In terms of limitations affecting the methodology of the modified Delphi panel survey technique, the subject matter experts' opinions and views do not represent official data from their countries but personal perception, and this can be seen as a limitation due to the resulting subjectivity. However, in addressing this limitation, we maintain that policy decisions, especially during a fast-moving crisis, are often based on the perception of relevant subject matter experts and the policymakers they advise. Studying the subjective perception of relevant experts in the field therefore provides a significant complement and substantiation, another piece of the evidence puzzle, to the results of the literature review.
Conclusion
The review identified one set of PHSMs with clear evidence of their positive impact -including closure of non-essential shops, gastronomy and cultural events, hygiene measures, face coverings, voluntary quarantine by contacts, case isolation at home, stay-at-home campaigns, restrictions on internal movement and testing policies -and another set of PHSMs with moderate available evidence, including workplace closure, restrictions on public gatherings, social distancing, international travel control measures, and contact tracing. Furthermore, evidence from the published literature appears to be largely congruent with the studied perceptions of national subject matter experts from European countries who were actively engaged in research, policy and policy advice during the pandemic. This knowledge is very important for public health decision-makers to be better prepared for the next pandemic.
Notable European projects and programs in this domain include HERA (European Health Emergency Preparedness and Response Authority), VACCELERATE (Vaccine Infrastructures and Communication for Europe), the COVID-19 Social Sciences Research Tracker, RECOVER-E (Rapid European COVID-19 Emergency Research response), CoVaRR-Net (COVID-19 Vaccine-induced Immunity, Variants, and Re-infections Network), and EPIPose (Epidemic Intelligence to Minimize COVID-19 Impacts on European Society, Public Health and the Economy).
FIGURE 2 Results from the first Delphi technique round.
FIGURE 3 Results from the second modified Delphi technique round.
Furthermore, six indicators not individually tracked in the scope of the OxCGRT were added to the selected indicators through the REF meeting mechanism (i.e., Closure of non-essential shops, gastronomy and cultural venues; Access restriction to shops, gastronomy and cultural venues; Social distancing; General hygiene measures; Voluntary quarantine by contact persons; Case isolation at home), arriving in total at 19 indicators, respectively PHSMs, under study. The 19 PHSMs under study were grouped into five clusters: access measures; distancing measures; movement restrictions; test, trace, vaccinate; and communication measures (please see Table 1).
TABLE 1 Reviews reporting on the selected public health and social measures found.
TABLE 2 Table of characteristics of the studies included. 1 Cumulative Index to Nursing and Allied Health Literature. 2 Randomized Control Trial. 3 World Health Organization.
Asymptomatic Carotid Stenosis and Risk of Stroke (ACSRS) study: what have we learned from it?
The Asymptomatic Carotid Stenosis and Risk of Stroke (ACSRS) study is the largest natural history study on patients with 50–99% asymptomatic carotid stenosis (ACS). It included 1,121 ACS individuals with a follow-up between 6 and 96 months (mean: 48 months). During the last 15 years, several important ACSRS substudies have been published that have contributed significantly to the optimal management of ACS patients. These studies have demonstrated that specific baseline clinical characteristics and ultrasonic plaque features after image normalization (namely carotid plaque type, gray scale median, carotid plaque area, juxtaluminal black area without a visible echogenic cup, discrete white areas in an echolucent part of a plaque, silent embolic infarcts on brain computed tomography scans, a history of contralateral transient ischemic attacks/strokes) can independently predict future ipsilateral cerebrovascular events. The ACSRS study provided proof that by use of a computer program to normalize plaque images and extract plaque texture features, a combination of features can stratify patients into various categories depending on their stroke risk. The present review will discuss the various reported predictors of future ipsilateral cerebrovascular events and how these characteristics can be used to calculate individual stroke risk.
Introduction
The Asymptomatic Carotid Stenosis and Risk of Stroke (ACSRS) study was a multicenter study conducted under the auspices of the International Union of Angiology (1-3). To this day, ACSRS is the largest natural history study on patients with 50-99% asymptomatic carotid stenosis (ACS), including a total of 1,121 patients with a follow-up between 6 and 96 months (mean: 48 months). The conception of the ACSRS study took place in 1996, following the report of the Asymptomatic Carotid Atherosclerosis Study (ACAS) (4). Patient recruitment began in 1998 and ended in 2002 (1-3).
Study design
All centers participating in ACSRS had a non-invasive vascular laboratory equipped with a color duplex facility. Each ACSRS center examined >500 patients/year and employed staff experienced in the investigation and management of patients with extracranial cerebrovascular disease, consisting of a neurologist, a vascular physician/vascular surgeon and a radiologist (1)(2)(3). By screening new attendees to their practice, each participating center was able to identify and recruit on average 15 individuals with ACS (1)(2)(3).
Patients were eligible for recruitment to ACSRS if: (I) they had 50-99% ACS in relation to the carotid bulb diameter [European Carotid Surgery Trial (ECST) method], (II) they reported no previous ipsilateral cerebral or retinal ischemic (CORI) events and (III) they demonstrated no neurological abnormalities on examination (1)(2)(3). Patients with previous contralateral CORI events were also considered for inclusion provided they had been asymptomatic for >6 months at the time of recruitment. In patients demonstrating bilateral ACS, the side which exhibited more severe stenosis was recorded as ipsilateral (i.e., the study artery) (1)(2)(3).
A detailed patient history and physical examination were recorded at baseline to make sure that patients were truly asymptomatic upon study entry. Upon admission to the study, a duplex ultrasound scanning was performed to grade internal carotid artery (ICA) stenosis. This duplex examination was repeated every 6 months thereafter to monitor progression of ICA stenosis (1-3). All patients were followed up with the aim to identify specific subgroups at high (or low) risk for future CORI events. Each patient was reviewed at all visits by both a neurologist and a radiologist to record clinical information as well as the rate of ICA stenosis progression (1)(2)(3).
Duplex examination
Velocities were recorded at the site of maximum ICA stenosis at the center of the common carotid artery lumen, with the beam of the ultrasound at a 60° angle to the arterial wall and to the direction of flow (1)(2)(3). A disadvantage of absolute velocity measurements is that they can underestimate (e.g., in the presence of cardiac arrhythmia) or overestimate ACS (e.g., in the presence of severe contralateral disease). Consequently, the radiologists performing the duplex examinations in each center were trained to use a combination of absolute velocity measurements and velocity ratios (1)(2)(3). Using these velocities and velocity ratios, they could express the percentage diameter stenosis in relation to both the distal normal ICA [North American Symptomatic Carotid Endarterectomy Trial (NASCET) method] and the bulb diameter (ECST method). The direction of flow in the vertebral arteries (cephalad, reversed or absent) was also noted. The complete duplex examination was recorded on S-VHS videotape and submitted to the coordinating center so that image analysis and quality control could be performed centrally.
Plaque images were obtained in B-mode (black and white) and in color or power Doppler; the latter methods were used to outline juxtaluminal black areas (JBA) of the plaque. The depth was minimized so that the plaque occupied a large part of the image. The ultrasound beam was at 90° to the arterial wall. The overall gain was adjusted to minimize but not completely abolish noise; this ensured that the gain was not reduced so much that low-intensity features in the plaque were lost, and that a noise-free black area (representing blood in the vessel lumen) remained available for image normalization. Finally, the position of the probe was adjusted so that the adventitia adjacent to the plaque was clearly visible as a hyperechoic band, which could also be used for image normalization. This technique of image capture with the ultrasound beam at right angles to the arterial wall is essential and has not been adequately appreciated: image analysis cannot be performed on plaque images obtained with the beam at 60°, because the adventitia is not shown at its brightest and image normalization therefore cannot be performed. Image normalization consisted of setting the Gray Scale Median (GSM) value of two reference points (blood to 0 and adventitia to 190), with all other values automatically being adjusted on a linear scale. This ensured that reproducible GSM measurements could be obtained if a patient was scanned on different equipment in different ambient lights, and that comparable GSM readings would be available during serial follow-up in a multicenter study.
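As a concrete illustration of this two-point linear mapping, the sketch below rescales raw grayscale values so that the blood reference maps to 0 and the adventitia reference maps to 190. It is a minimal, hypothetical example assuming 8-bit grayscale values; the actual ACSRS analysis used dedicated plaque texture analysis software, and the function name here is ours:

```python
def normalize_gsm(pixels, blood_gsm, adventitia_gsm):
    """Linearly rescale 8-bit grayscale values so that the blood
    reference maps to 0 and the adventitia reference maps to 190;
    results are rounded and clipped to the 0-255 range."""
    scale = 190.0 / (adventitia_gsm - blood_gsm)
    return [min(255, max(0, round((p - blood_gsm) * scale))) for p in pixels]

# Example: blood measured at GSM 5, adventitia at GSM 100
print(normalize_gsm([5, 52.5, 100, 250], blood_gsm=5, adventitia_gsm=100))
# → [0, 95, 190, 255]
```

With both reference points fixed, GSM values become comparable across scanners and ambient lighting conditions, which is exactly what serial multicenter follow-up requires.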
A detailed description of image texture analysis is beyond the scope of this review and it is presented in detail elsewhere (3).
Results and key messages
The key outcomes reported in the various ACSRS substudies and preliminary reports over the course of 15 years are presented in the sections that follow.
Stenosis
Ipsilateral % ACS was recorded as mild (50-69%) in 198 study participants, as moderate (70-89%) in 598 individuals and as severe (90-99%) in 325 patients (9). The cumulative 5-year ipsilateral CORI event rate for these three ACS groups was 9%, 15% and 20%, respectively (log-rank test P=0.009). This resulted in an average annual event rate of 1.8%, 3.0% and 4.0%, respectively. Furthermore, the cumulative stroke rate over 5 years for the 3 ACS groups was 4%, 7% and 12%, respectively (log-rank test P=0.011). This resulted in an average annual stroke rate of 0.8%, 1.4% and 2.4%, respectively (9). For >70% ACS, the cumulative 5-year stroke rate was 8%, giving an average annual stroke rate of 1.6% (9). Based on these results, it was concluded that the severity of stenosis was a relatively poor indicator of stroke risk, with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.603 [95% confidence interval (CI): 0.525-0.682] (5). In the subgroup of 325 patients with stenosis greater than 90% (ECST), the average annual stroke rate was 2.4%. However, this subgroup contained only 25 of the 59 strokes (or 42%) that occurred in the whole group. In other words, the severity of stenosis alone could not identify a high-risk group that contained the majority of strokes.
An important finding was that the ECST percentage stenosis had a linear relationship to stroke risk, while the NASCET percentage stenosis had an S-shaped one (5); as a result, the latter could not be used in a multivariable linear logistic regression. A possible explanation for this finding is that ECST percentage stenosis correlates with the plaque volume in the bulb, while NASCET percentage stenosis correlates with lumen diameter reduction in relation to the lumen diameter of the normal distal ICA.
Plaque type and stroke risk
An important issue that has not received enough attention is that, in order to successfully analyze texture features of carotid plaques, one should first perform image normalization (6). This issue has been missed by international guidelines and recommendations. The majority of previous natural history studies have employed several techniques to classify plaques without first performing image normalization. Image normalization changes the appearance of plaques markedly, resulting in the reclassification of a considerable percentage of them. In ACSRS, 652 (60%) of the plaques changed category after image normalization (6). When plaque types 1-3 were compared with types 4 and 5 before image normalization, the relative risk (RR) of suffering a CORI event was 1.12 (95% CI: 0.76-1.66; P=0.45). In contrast, when plaque types 1-3 were compared with plaque types 4 and 5 after image normalization, the RR of a CORI event was 4.8 (95% CI: 2.27-10.28; P=0.0001) (6).
An important association is that between the incidence of ipsilateral ischemic stroke and both plaque type and ACS severity (6). For patients with 50-69% ACS, the stroke rate was low regardless of plaque type. For patients with 70-89% ACS, the incidence of stroke was 5.7% in individuals with plaque types 1-3 and 0.8% in those with types 4 and 5. Finally, for individuals with 90-99% ACS, the incidence of stroke was 7.7% in study participants with plaque types 1-3 vs. 0% in those with types 4 and 5 (6). Therefore, for the 905 study participants with 70-99% ACS, the incidence of stroke was 6.5% (47 of 724) for those with plaque types 1, 2 and 3 and only 0.55% (1 of 181) for those with plaque types 4 and 5 (RR: 11.7; 95% CI: 1.63-84.5; P=0.003) (6). Individuals with plaque types 1-3 had a cumulative stroke rate of 14% at 7 years (or 2%/year), while those with plaque types 4 and 5 had a cumulative stroke rate of 1% at 7 years (i.e., 0.14% per year) (6).
In ACSRS, the type of plaque determined by the software after image normalization demonstrated a clear-cut separation of stroke risk by plaque type. This finding lends further support to the importance of image normalization before image analysis and proves that plaque types 4 and 5 are associated with a low stroke risk, regardless of the degree of stenosis. Importantly, 76% of the strokes (45 of 59) occurred in the 38% of patients (426 of 1,121) with type 1 and 2 plaques at baseline, while severe and fatal ipsilateral strokes occurred exclusively in patients with plaque types 1 and 2 (6). The conclusion reached was that ACS study participants with plaque types 4 and 5 following image normalization are at low stroke risk even when the degree of stenosis is severe, and therefore may not need to be offered a carotid endarterectomy (CEA) (6). By contrast, patients with a high degree of ACS and plaque types 1 through 3 are at increased risk and may therefore require a prophylactic CEA (6).
Carotid plaque area
Prospective studies from two different centers reported that carotid plaque area (15,16) was associated with the development of ipsilateral ischemic strokes. The results from ACSRS showed that carotid plaque area can be used for cerebrovascular risk stratification in individuals with plaques producing >50% stenosis (9). Plaque area was small (<40 mm²) in 518 study participants, intermediate (40-80 mm²) in 489 and large (>80 mm²) in 114 individuals. The cumulative incidence of 5-year stroke for these three groups was 5%, 7% and 23%, respectively (log-rank test P<0.001), equating to an average annual stroke rate of 1.0%, 1.4% and 4.6%, respectively (9). However, the subgroup of 114 patients with a plaque area >80 mm² contained only 16 (27%) of the 59 strokes that were recorded in the entire group during follow-up.
JBA without a visible echogenic cap
The finding of a JBA without a visible echogenic cap has been reported to be associated with symptomatic plaques in several cross-sectional studies (17)(18)(19). However, cut-off points for the size of this area had not been reported. In ACSRS, JBA was classified into four groups: <4 mm² in 704 patients, 4-8 mm² in 171 patients, 8-10 mm² in 46 patients and >10 mm² in 198 patients (10). The cumulative incidence of 5-year stroke in these 4 groups was 2%, 7%, 16% and 23%, respectively, equating to an average stroke rate of 0.4%/year, 1.4%/year, 3.2%/year and 5.0%/year, respectively (log-rank test P<0.001) (10). A JBA of ≥8 mm², found in 245 patients, was associated with a 4.1% average annual stroke rate. This subgroup presented with 42 (71%) of the 59 strokes that were recorded in the whole group during follow-up. A JBA of ≥8 mm² together with a GSM <15 was associated with a high prevalence of symptomatic plaques and demonstrated the highest combined specificity and sensitivity for the development of hemispheric symptoms. Importantly, the cut-off value of 8 mm² separated two groups, one with a high and one with a low prevalence of symptomatic plaques, regardless of the grade of stenosis (10). These results strongly suggest that JBA can be used to identify a high-risk group that will contain most of the strokes. They also provide proof that a JBA ≥8 mm² is a critical cut-off point.
Discrete white areas (DWA)
An earlier study demonstrated that plaque heterogeneity is associated with symptomatic plaques (20); such heterogeneity is often produced by the presence of DWA in hypoechoic plaques or plaque areas. These DWA do not have an associated acoustic shadow, which excludes calcification, and they consist of neovascularization, as shown by perfusion studies using microbubble agents (21). In the ACSRS study, DWA were absent in 403 patients, a subgroup with an average annual stroke rate of 1.2%. DWA were present in 718 patients, who had an average annual stroke rate of 1.8%. The latter group contained 45 (76%) of the 59 strokes that developed in the whole group during follow-up.
Clinical/biochemical features associated with increased risk
A history of contralateral TIAs or stroke was the only clinical factor demonstrating a strong association with future CORI events and stroke (9). The cumulative incidence of 5-year stroke was 6% for those individuals (n=948) without a history of contralateral TIA or stroke, while it was 17% for the group of 173 patients with prior symptoms (log-rank test P<0.001), equating to an average annual stroke rate of 1.2% vs. 3.4%, respectively (9). Although a previous contralateral TIA or stroke is noted in only a minority of patients, it should be viewed as a feature associated with a high future stroke risk, especially when combined with the high-risk plaque features discussed above.
Plaque progression and regression
In an ACSRS sub-study, several independent baseline predictors of plaque regression were identified including younger age, high grades of ACS, absence of DWA in the plaque and administration of statins (11). In contrast, high serum creatinine, male gender, lack of statin use, low grade of ACS and increased plaque area were independent baseline predictors of plaque progression (11). Of the 59 strokes that occurred, 40 (68%) were recorded in individuals whose stenosis remained unchanged, whereas 19 (32%) occurred in those patients with progression and 0 in those with regression (11). For the whole group, the 8-year cumulative ipsilateral cerebral ischemic stroke rate was 0% in patients with regression, 9% if the stenosis was unchanged and 16% if there was progression (average annual stroke rates of 0%, 1.1% and 2.0%, respectively; log-rank P=0.05) (11).
Calculation of individual stroke risk
Stenosis severity, a history of contralateral TIAs or stroke, a low GSM, a large plaque area, a JBA ≥8 mm² and plaque heterogeneity resulting from DWA without acoustic shadow in hypoechoic areas were associated with the development of future ipsilateral CORI events in both univariate and multivariate analysis (9). However, in the presence of JBA as a covariate, plaque area and GSM were no longer significant independent predictors. Thus, in this latest model only the degree of ACS, a history of contralateral TIA or stroke, the presence of DWA and JBA remained significant independent predictors (10). As a result, the risk of any patient could be calculated or obtained from specially constructed tables derived from this model. The predicted annual average stroke rate was <1.0% (very low risk) in 734 patients, 1.0-1.9% (low risk) in 94, 2.0-3.9% (moderate risk) in 134 (11%), 4.0-5.9% (high risk) in 125 (11%) and >6.0% (very high risk) in 34 (3%) patients (10). Calculation of this risk for individual patients was provided by the software. Patients in the ACSRS study were not on what is currently considered to be optimal medical therapy. Assuming patients are now on optimal medical therapy, which is expected to reduce stroke risk by 50%, only the patients allocated to the high-risk and very-high-risk categories by the ACSRS methodology, i.e., 13% of those with >50% ECST stenosis or 17% of those with ECST stenosis >70% (50% NASCET), would be considered for CEA.
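The five risk tiers above map directly onto a threshold lookup. The function below is an illustrative sketch of that final step only: the predicted annual rate itself comes from the ACSRS multivariable model, which is not reproduced here, and the function name is ours.

```python
def stroke_risk_category(annual_stroke_rate_pct):
    """Map a predicted average annual stroke rate (%) to the five
    ACSRS risk categories: very low, low, moderate, high, very high."""
    thresholds = [(1.0, "very low"), (2.0, "low"), (4.0, "moderate"), (6.0, "high")]
    for upper, label in thresholds:
        if annual_stroke_rate_pct < upper:
            return label
    return "very high"

print(stroke_risk_category(0.5))   # → very low
print(stroke_risk_category(4.5))   # → high
```

In this scheme only the two top tiers (high and very high) would prompt consideration of CEA under the assumption of modern optimal medical therapy discussed above.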
Mortality
A key issue when considering an intervention in an individual with ACS is life expectancy. It is not worth considering a prophylactic CEA in an asymptomatic patient if this individual will not survive long enough to benefit from the procedure. To this end, an ACSRS interim report was produced aiming to identify risk factors able to predict long-term mortality (7), while a more recent analysis attempted to develop a model for predicting the 5-year risk of cardiovascular death in patients with ACS (12). Age, gender, stenosis, diabetes, cardiac failure, left ventricular hypertrophy and lack of antiplatelet therapy were independent predictors of cardiovascular mortality (12). Based on the risk prediction model that was developed, the 5-year all-cause mortality was <10% in 236, 10-25% in 579, 25-40% in 204 and >40% in 102 patients. In patients with a predicted 5-year mortality >40%, the death/stroke ratio was high even in those predicted to have a 5-year risk of stroke >10%. In fact, the majority of the patients in this subgroup died before a stroke could occur, even in the presence of an unstable carotid plaque. Such patients should not be considered candidates for CEA. The risk prediction model developed is unique not only because it was derived from the ACSRS, a natural history study which included >1,100 asymptomatic patients, but also because, in contrast to previous studies, it provided a tool for stratifying both cardiovascular mortality and stroke risk. An interesting finding is that the predicted mortality risk is not related to the stroke risk: the predicted mortality risk remained high even in the low stroke risk group. This means that patients with a low stroke risk who will not be considered for carotid surgery because of a low-grade stenosis or a stable (e.g., echogenic or calcified) plaque should not be denied aggressive risk factor modification.
They are at a very high risk of a myocardial infarction, and therefore, a cardiac assessment is indicated. The authors point out that such an assessment is a unique opportunity that may never occur again in the patient's lifetime.
Recent histological studies and ultrasonic texture features after image normalization
Early histological studies have shown that in symptomatic plaques the necrotic core is twice as close to the lumen as in asymptomatic plaques (22). A number of subsequent cross-sectional studies using ultrasound demonstrated a relationship between JBA and the presence of neurological symptoms (17,19,23).
Correlations between ultrasonic texture features after image normalization and plaque histology have been made more recently. A low GSM is associated with a histologically unstable plaque (24), less calcification (25), low collagen content, a large lipid core, a thin fibrous cap (24,26), increased inflammation and neovascularization (26). DWAs in a hypoechoic area of a plaque were associated with intraplaque haemorrhage and inflammation in one study (25) and with neovascularization, increased number of macrophages and intraplaque haemorrhage in another (26). A JBA near the luminal portion of the plaque (without a visible cap) is associated with a necrotic core located close to the lumen on histology (23), macroscopic plaque ulceration (25), decreased numbers of smooth muscle cells, large lipid core, thin fibrous cap and plaque rupture (26).
The above histological studies provide an explanation for the association of many ultrasonic plaque texture features with plaque stability or instability, and they explain the ability of such features to predict future strokes as demonstrated in the ACSRS study.
Discussion
ACSRS was the largest prospective study on asymptomatic carotid patients managed with medical intervention alone. It showed that baseline clinical characteristics and ultrasonic plaque features are independent predictors of future ipsilateral CORI events.
Four important points from ACSRS have so far not been appreciated: (I) image normalization is essential before analyzing texture features of carotid plaques; (II) equipment settings and plaque image capture with the ultrasound beam at right angles to the arterial wall are similarly essential; (III) the texture features used in the ACSRS study were not data-derived; they had been found to be associated with symptomatic plaques in earlier cross-sectional studies, which externally validated their value; and (IV) the best stroke risk stratification depends on a combination of texture features (such as JBA and DWA) together with the percentage of ACS (ECST method), as well as a history of contralateral stroke/TIA. ACSRS was unique because it did not concentrate on only one feature, but rather demonstrated that plaque characteristics can significantly improve stroke risk stratification. More importantly, it provided a method for the calculation of stroke risk for each individual patient. The implications of ACSRS are that clinical and ultrasonic plaque features can be used to stratify stroke risk. This can subsequently lead to refinement of the indications for CEA. The availability of user-friendly software for image analysis ("Plaque Texture Analysis Software" by LifeQ Medical Ltd.; lifeqmedical.com) and for automatic calculation of risk may establish this method as part of routine practice in the vascular laboratory. This software can calculate stroke risk for each individual patient and may provide a report of the measurement of key texture features, including the predicted annual stroke risk.
Conclusions
The ACSRS results suggest that clinical and ultrasonic plaque features can be used to stratify stroke risk and may therefore refine the indications for CEA in asymptomatic individuals. User-friendly software for image analysis is currently available, and automatic calculation of stroke risk for each ACS individual may help target prophylactic CEA procedures to those asymptomatic patients most likely to develop symptoms in the future. Information derived from the ACSRS study should help clinical decision-making. Some carotid plaques may continue to grow despite implementation of rigorous optimal medical therapy. There are many risk factors and/or genetic factors of which we are not aware. Furthermore, there are other emerging risk factors (such as homocysteine) which we do not routinely treat. For example, individuals with a polymorphism of methylenetetrahydrofolate reductase (MTHFR) and inadequate folic acid intake may develop high homocysteine levels (27). In this context, two randomized controlled trials of homocysteine-lowering therapy vs. placebo in patients with advanced coronary artery disease were negative in terms of coronary artery disease outcomes, but showed a significant reduction in stroke (28,29). ACSRS has provided a valuable resource for exploring and testing new methods of image analysis and new algorithms for the calculation of stroke and mortality risk. Future studies should further explore additional pathways to provide optimal management of patients with ACS.
Assessing the impact of a respiratory care bundle on health status and quality of life of chronic obstructive pulmonary disease patients in Jordan: A quasi-experimental study
BACKGROUND: This study aimed to evaluate the effectiveness of a respiratory care bundle, including deep breathing exercises, incentive spirometry, and airway clearance techniques, on the quality of life (QoL) of chronic obstructive pulmonary disease (COPD) patients in Jordan. MATERIALS AND METHODS: A quasi-experimental study design and a convenience sampling method were used to recruit 120 COPD patients, with 54 in the intervention group and 66 in the control group. The intervention group received additional respiratory care bundle training, while the control group received only discharge instructions and an education program. The St. George's Respiratory Questionnaire (SGRQ-C) was used to assess participants' QoL before and after the intervention. Independent t-tests, paired t-tests, and analysis of covariance (ANCOVA) were used to analyze the data. RESULTS: The study found no significant differences in patients' characteristics, health status, or SGRQ-C scores between the two groups at baseline. After the intervention, all SGRQ-C subscale scores were significantly lower in the intervention group compared to the control group. The paired t-test showed significant reductions in the SGRQ-C symptom component (t = 7.62, P < .001), activity component (t = 7.58, P < .001), impact component (t = 7.56, P < .001), and total scores post-intervention (t = 7.52, P < .001) for the intervention group. The ANCOVA analysis showed significant differences in the scores of SGRQ-C components and total scores (f = 11.3, P < .001) post-intervention between the two groups. CONCLUSION: The study's findings suggest that providing additional respiratory care bundle training for COPD patients can significantly improve their QoL, as measured by SGRQ-C scores. The respiratory care bundle intervention was effective in reducing COPD symptoms and improving the QoL of COPD patients.
Healthcare providers should consider implementing respiratory care bundles as part of COPD management to improve patients’ outcomes.
Introduction
Respiratory diseases, including chronic obstructive pulmonary disease (COPD), are major contributors to global morbidity and mortality, affecting millions. [3] As COPD prevalence escalates, effective interventions are imperative. [1,3] Breathlessness curtails activity, induces anxiety, and restricts independence. Sleep disturbances due to physiological changes, hypercapnia, and inflammation worsen COPD patients' well-being. Insomnia's prevalence amplifies health-related quality of life concerns. [4] COPD's recurrent symptoms engender fatigue, reduced activity, and absenteeism, necessitating interventions to enhance well-being. [3,5,6] COPD management, crucial for preventing exacerbations and enhancing patients' quality of life (QoL), encompasses a combination of pharmacological and nonpharmacological measures, including nutrition assessment, rehabilitation, physical activity, and oxygen therapy. [2,3,7] However, ensuring coordinated and timely delivery in busy clinical environments can be challenging. The use of recommended respiratory care bundles proves advantageous, providing an evidence-based framework to administer care for COPD patients effectively. These bundles facilitate timely interventions, thereby improving clinical outcomes and reducing healthcare costs. [2,8,9] A respiratory care bundle amalgamates interventions to enhance treatment outcomes and QoL in respiratory conditions. For COPD patients, recommended interventions vary based on disease severity and care settings. [2,3,10] Common components encompass smoking cessation advice, bronchodilators, corticosteroids, oxygen therapy, exacerbation assessment, and patient education. Furthermore, earlier research has highlighted various interventions incorporated within respiratory care bundles and frequently applied to individuals with COPD. [12] Randomized controlled trials found that a program of deep breathing exercises significantly improved pulmonary function and reduced dyspnea in patients with moderate to severe COPD. [13]
Incentive spirometry is another intervention that can help patients with COPD improve respiratory function and reduce dyspnea. Several studies have demonstrated the efficacy of incentive spirometry in reducing dyspnea and improving respiratory function in these patients. [11,14,15] Airway clearance techniques are a common component of the respiratory care bundle that can help patients with COPD improve respiratory function and reduce dyspnea. Airway clearance techniques can include coughing, chest physiotherapy, and vibration therapy. These interventions help clear mucus and secretions from the lungs, improving respiratory function and reducing dyspnea. [10,16,17] These interventions underscore the value of respiratory care bundles in COPD management, offering tailored strategies to enhance QoL.
Respiratory therapists and nurses are usually responsible for educating patients about the importance of performing these different interventions, providing guidance on performing them correctly, monitoring their progress, and adjusting the plan as needed. [20] International studies showed that respiratory care bundles have been developed to improve the management of dyspnea in COPD patients. [22] Research on the use of respiratory care bundles in Arab countries, notably Jordan, remains scarce, and their efficacy in ameliorating dyspnea in Jordanian COPD patients remains unexplored. The study is particularly relevant and needed in Jordan for several reasons related to the country's healthcare landscape and the prevalence of COPD among its population. Jordan, like many countries, faces the challenge of providing effective and efficient healthcare services to its citizens. Given the unique healthcare context of Jordan, it becomes essential to explore interventions that can improve the health status and quality of life of COPD patients. COPD is a global health concern, and its prevalence is affected by various factors including smoking, air pollution, and genetics. In the case of Jordan, a specific understanding of the prevalence and impact of COPD is crucial due to potential risk factors such as tobacco smoking, occupational exposure, and indoor air pollution. Research specific to the Jordanian population can provide insights into the burden of COPD and the need for tailored interventions. The impact of COPD can be influenced by cultural practices, socioeconomic factors, and access to healthcare services, which can vary significantly from one country to another. A study conducted in Jordan can explore how these factors interact with respiratory care bundles and their effectiveness, thus offering insights into the unique challenges and opportunities in managing COPD in the Jordanian context.
Study design and setting
This study used a quasi-experimental, pre-test/post-test design to assess the impact of a respiratory care bundle on managing symptoms in COPD patients in Jordan.
The study was conducted in six hospitals located in Amman, the capital city of Jordan. The hospitals were randomly chosen from a list of large hospitals, each with a capacity of at least 200 beds and equipped to provide comprehensive medical and surgical care. They encompassed a diverse range of healthcare settings: two private hospitals, two governmental hospitals, one educational institution-affiliated hospital, and one military hospital. This selection aimed to capture the four distinct sectors of the healthcare system in Jordan (private, governmental, educational, and military) and to guarantee a representative reflection of the country's varied healthcare landscape, which could influence the study's findings.
Study participants and sampling
The study employed a convenience sampling technique to recruit 120 participants.The sample was split into control and intervention groups.The control group received a health education program with a discharge plan, providing concise instructions on COPD medications and preventive measures.Meanwhile, the intervention group, in addition to the health education program, received training on respiratory care bundles.This training encompassed guidance on using deep breathing exercises, incentive spirometry, and airway clearance techniques.These respiratory care bundles were intended for home use post-discharge.
Patients were included if they were diagnosed with COPD or bronchial asthma, were aged more than 18 years, could read and write in Arabic, and had a discharge plan within 3 days of data collection. Patients with chronic renal failure, cancer, or any instance of data collection affected by medical instability were excluded. Medical instability in this context refers to situations where data collection could be compromised by factors such as acute medical conditions, treatment interventions, or procedures that might impact the accuracy and reliability of the collected information. Additionally, participants who were hospitalized during the study period were excluded from the final data analysis to prevent potential confounding factors that could arise from their hospitalization status. This approach ensures the integrity of the data and the validity of the study's findings.
The sample size for this study was determined using G*Power software. Statistical power tests were employed with an alpha level (P value) of .05, a medium effect size of 0.30, and a power of 0.80 to ascertain an appropriate sample size. The applied statistical analyses encompassed paired t-tests and ANCOVA tests. Initially, a minimum sample size of 92 participants, divided into 46 participants per group, was calculated. In anticipation of potential dropouts during the follow-up phase, an additional 28 participants were recruited, resulting in a total of 120 participants enrolled at baseline. Individuals eligible for the study but opting not to participate in the intervention group and the associated respiratory care bundles were invited to join the control group and contribute to the study's completion.
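The arithmetic behind such a power calculation can be sketched with the standard normal approximation for a paired t-test; this is only an approximation, and G*Power's exact noncentral-t/F computations are what produced the 92 reported above, so the figure below comes out slightly smaller.

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(effect_size: float, alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Normal-approximation sample size for a paired t-test.

    n ~= ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up to the next integer.
    """
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # two-tailed critical value (~1.96)
    z_beta = z.inv_cdf(power)             # quantile for the desired power (~0.84)
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect size (d = 0.30), alpha = .05, power = 0.80, as in the study
n = paired_t_sample_size(0.30)
print(n)  # 88 pairs; the exact noncentral-t solution is slightly larger
```

Raising the target power or shrinking the detectable effect size both inflate the required n, which is why a dropout buffer (here 28 extra participants) is added on top.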
Ethical consideration
This study was approved by the institutional review board of the Scientific Research Committee at Applied Private University in Amman, Jordan (Approval no. 2021-2022-8-82). Permission was also obtained from the selected hospitals to recruit study participants.
Before participation, all eligible participants were asked to sign an informed consent form indicating their agreement to participate in the study. Participants were told their participation was voluntary and that they had the right to withdraw without penalty. Additionally, they were assured that any information obtained during the study would be kept confidential and that their personal information would be kept anonymous if used for publication. The data collected were stored on a password-protected computer with access granted only to the research team.
Data collection tool and technique
Data collection procedures were carried out by three research authors with PhDs in critical care nursing. After obtaining ethical approval and signed informed consent, the authors collected data at baseline and follow-up using a self-administered questionnaire distributed between May 2022 and December 2022. The head nurses of departments in the selected settings were consulted to obtain potential eligible participants' names and contact information. The authors approached patients who fulfilled the inclusion criteria and provided them with a comprehensive explanation of the study. Specifically, individuals meeting the inclusion criteria were invited to participate, while those meeting any of the exclusion criteria were not included in the study. The inclusion and exclusion criteria were verified by reviewing patients' medical records and consulting with the assigned nurse.
Before the educational session and intervention training on the respiratory care bundle, a committee of three external evaluators was formed, which included a clinical nurse specialist with a doctorate in critical care and two pulmonologists with extensive clinical experience.[25] In addition, a pilot study was conducted with 10 eligible participants, and the external evaluators observed the authors during the education, training, and data collection processes to ensure consistency and inter-rater reliability among the authors.
The sample was partitioned into control and intervention groups. Initial baseline data collection encompassed participants' sociodemographic and clinical characteristics, along with the St. George's Respiratory Questionnaire for COPD patients (SGRQ-C questionnaire). [26] Following this baseline data collection through the structured questionnaires, the control group received a health education program, including a discharge plan with succinct instructions on COPD medications and preventive measures, administered in their hospital rooms before discharge from the selected settings. This health education program lasted between 25 and 40 minutes. Simultaneously, the intervention group, alongside the health education program, received training on respiratory care bundles, which involved guidance and training on using deep breathing exercises, incentive spirometry, and airway clearance techniques. The training on respiratory care bundles spanned 30 to 40 minutes. Subsequently, these respiratory care bundles were intended for home use after discharge.
Participants in the intervention group were instructed to perform the respiratory care bundle interventions for 30 to 45 minutes daily at home after discharge from the hospital, ideally split into two sessions (morning and evening) of at least 20 minutes each. Participants who did not practice these interventions for at least 30 minutes on at least 4 days per week were excluded from the final data analysis during the follow-up (post-test) data collection.
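The adherence rule above (at least 30 minutes of practice on at least 4 days per week) can be expressed as a simple filter; the per-day minutes log format below is hypothetical, invented only to illustrate the rule.

```python
def is_adherent(weekly_minutes: list[int], min_minutes: int = 30,
                min_days: int = 4) -> bool:
    """True if the participant practiced >= min_minutes on >= min_days that week.

    weekly_minutes: minutes practiced on each of the 7 days (hypothetical log).
    """
    qualifying_days = sum(1 for m in weekly_minutes if m >= min_minutes)
    return qualifying_days >= min_days

# 5 days of 35-45 minutes -> retained in the analysis
print(is_adherent([40, 0, 35, 45, 0, 35, 40]))  # True
# Only 3 qualifying days -> excluded from the post-test analysis
print(is_adherent([40, 0, 35, 0, 0, 45, 0]))    # False
```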
Furthermore, follow-up phone calls were made twice weekly to the intervention group to monitor their adherence to practicing the respiratory care bundles and to address any inquiries they might have. Four weeks after discharge from the hospitals, both the control and intervention groups were asked to complete the SGRQ-C questionnaire as a post-test. This post-test allowed the participants' data to be compared before and after the interventions, enabling evaluation of the effectiveness of the provided interventions within each group.
Measurement of variables
The measurement of variables in this study used a structured self-reported questionnaire consisting of two parts. The first part was a demographic and clinical characteristics sheet that included information about the patient's age, gender, marital status, education level, employment status, and history of chronic diseases such as hypertension and diabetes mellitus. The second part was the Arabic version of the SGRQ-C questionnaire. [27,28] The SGRQ was specifically designed to measure the impact of chest disease on patients' health-related QoL and wellbeing and has since been extensively validated in numerous studies. [27,29,30] The SGRQ consists of a series of questions assessing the impact of respiratory symptoms on three domains of health-related QoL: symptoms, activity, and impacts. The symptoms domain includes questions about the frequency and severity of respiratory symptoms, such as coughing and shortness of breath. The activity domain assesses the impact of respiratory symptoms on the patient's ability to perform daily activities, such as walking, climbing stairs, and carrying out household tasks. The impacts domain includes questions about the social, psychological, and emotional impact of respiratory symptoms on patients' lives, such as feelings of anxiety, depression, and social isolation. A total score was calculated with all weighted items and expressed as a percentage, where 100 represents the worst possible health status and 0 represents the best possible health status. [26,30] The questionnaire is reliable and valid for measuring health-related QoL in COPD patients, with good internal consistency, test-retest reliability, and construct validity. [26,29,30] The Arabic version of the SGRQ-C questionnaire has been translated and validated, and is a reliable and valid instrument for assessing health-related QoL in COPD patients, with high internal consistency and construct validity. [27,28]
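The percentage scoring described above (weighted items, 0 best to 100 worst) follows the general SGRQ pattern: the sum of the weights of the endorsed items divided by the maximum possible weight sum, times 100. The weights in the example are invented for illustration; the real instrument uses its own published, empirically derived item weights.

```python
def sgrq_style_score(endorsed_weights: list[float], max_weight_sum: float) -> float:
    """Percentage score: 0 = best possible health status, 100 = worst."""
    return 100.0 * sum(endorsed_weights) / max_weight_sum

# Hypothetical example: a patient endorses items weighing 60.3, 42.1 and 18.0
# out of a domain whose item weights sum to 300.
score = sgrq_style_score([60.3, 42.1, 18.0], 300.0)
print(round(score, 1))  # 40.1
```

Endorsing no items gives the best possible score (0) and endorsing every item gives the worst (100), matching the anchoring described in the text.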
Data analysis
In this study, data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 25.0. Descriptive statistics were used to summarize the participants' demographic and clinical characteristics and the SGRQ-C scores. Independent t-tests or Chi-square analyses were conducted to examine any significant differences between the intervention and control groups' baseline characteristics, health status, ABGs, and vital signs. A paired t-test was used to compare the pre-intervention and post-intervention scores of the SGRQ-C subscales and total scores for both the intervention and control groups. Additionally, a one-way pretest/post-test ANCOVA analysis was conducted to further investigate the effectiveness of the respiratory care bundle intervention in improving health-related outcomes, controlling for the pretest as a covariate. This analysis was used to compare the intervention and control groups' scores on the SGRQ-C components and total scores post-intervention and to determine whether the intervention significantly improved participants' QoL. All statistical tests were two-tailed, and a P value of less than .05 was considered statistically significant.
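The paired t-test used for the pre/post comparisons reduces to a one-sample t statistic on the per-patient differences (mean difference divided by its standard error). A minimal pure-Python sketch on toy SGRQ-like scores (not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> float:
    """Paired t statistic: mean of the differences over its standard error."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Toy scores (higher = worse health status); positive t indicates improvement.
pre  = [62.0, 55.4, 70.1, 48.9, 66.3, 59.7]
post = [50.2, 49.8, 58.0, 44.1, 57.5, 51.0]
t = paired_t(pre, post)
print(round(t, 2))
```

SPSS additionally converts this t into a P value via the t distribution with n - 1 degrees of freedom; that lookup is omitted here.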
Results
The study began by interviewing 152 eligible participants, all of whom were invited to take part. Of those, 120 agreed to participate and completed the questionnaire: 66 participants in the control group and 54 in the intervention group. Of the 120 participants who completed the questionnaire, 105 followed the recommended respiratory bundle interventions and also completed the post-test questionnaire. The sample selection and completion chart is presented in Figure 1.
The study found no significant differences in patients' characteristics, health status, forced expiratory volume (FEV1), or respiratory and heart rates between the control and intervention groups at baseline. The majority of the participants were men (55.8%), married (79.2%), Jordanians (76.7%), employed (71.7%), current smokers (65.8%), reported their health as good, and had completed high school (52.5%). Nearly half of the participants had diabetes mellitus (49.2%). On average, participants were aged 55.11 years, overweight with a body mass index of 25.61, and slept approximately 6.52 hours per day in the 2 weeks before hospital admission [Table 1].
A paired t-test was used for both the intervention and control groups to compare the pre-intervention and post-intervention means of the outcome variables (the SGRQ-C subscales). The results showed that the intervention group had significant reductions in the SGRQ-C symptoms component (t = 7.62, P < .001), activity component (t = 7.58, P < .001), impact component (t = 7.56, P < .001), and total SGRQ-C score post-intervention (t = 7.52, P < .001). In contrast, the control group did not have significant reductions in any SGRQ-C component or total score post-intervention. The significant reductions in SGRQ-C scores for the intervention group indicate that the intervention positively affected their QoL. The lack of significant reductions for the control group suggests that any changes in their SGRQ-C scores were likely due to factors other than the intervention [Table 2].
Moreover, a one-way pretest/post-test ANCOVA analysis was conducted to investigate the effectiveness of the interventional program in improving health-related outcomes for COPD patients. The results [Table 3] showed significant differences between the two groups in all SGRQ-C component scores and the total SGRQ-C score post-intervention, indicating that the intervention had a positive effect on participants' QoL (f = 11.35, P < .001), with the intervention accounting for 41% of the variance in overall QoL (η² = 0.41). Specifically, the intervention led to a reduction in COPD symptoms (f = 5.82, P < .001), an improvement in participants' activities (f = 8.31, P < .001), and a reduction in the impact of COPD on their daily life and health status (f = 7.24, P < .001). The study's findings suggest that providing additional respiratory care bundle training for COPD patients can significantly improve their QoL, as measured by the SGRQ scores.
Discussion
COPD is a chronic respiratory disease characterized by airflow limitation that progresses over time, leading to significant morbidity and mortality. The literature highlights pharmacological therapies and pulmonary rehabilitation programs as traditional methods to manage COPD symptoms, but these may not be feasible for all patients. [3] Therefore, alternative strategies and personalized self-management programs that may be safer, more effective, and feasible for all COPD patients are needed to improve dyspnea and QoL. The literature also emphasizes the importance of targeting barriers to self-care, such as poor inhaler technique and limited understanding of medicines, through a personalized self-management program. [2,3,7,11] However, respiratory care bundles, including deep breathing exercises, incentive spirometry, airway clearance techniques, and engagement in pulmonary rehabilitation programs with supervised exercise training and self-management education, have been noted to reduce hospital readmission rates and improve QoL in COPD patients. [11,13,14,17,20] Conducting a study to fill the gap in knowledge and literature on this topic in Arab countries, especially Jordan, is of utmost importance, as such research can provide valuable insights and clinical guidance to healthcare professionals, policymakers, and regional researchers. This study can significantly improve healthcare delivery in Arab countries, particularly Jordan, and pave the way for evidence-based interventions and policies.
The present study's results indicate that the control and intervention groups had similar characteristics and health status. This is a positive outcome because it implies that any discrepancies in the results between the groups can be attributed to the respiratory care bundle intervention rather than to dissimilarities in the patients themselves. This information is valuable because it places the study's findings in context and ensures that the results apply to comparable populations. The results of this study are consistent with previous research showing that patients with COPD have similar baseline characteristics and health status. [12,21] Another study found that COPD patients in the control and intervention groups had similar levels of dyspnea, exercise capacity, and QoL at baseline. [11] These findings suggest that patient characteristics and health status do not play a significant role in the effectiveness of respiratory care interventions. Therefore, the results of this study have important implications for clinical practice and future research in this region, as they suggest that respiratory care bundle interventions can be effective regardless of patients' baseline characteristics and health status.
The study aimed to investigate the effectiveness of respiratory care bundle intervention on COPD patients' QoL.The results showed no significant differences in the mean scores of symptoms, activity, and impact components before the intervention and the total score for both groups.However, after the intervention, there were statistically significant differences in all SGRQ-C subscales means, which were lower in the intervention group than in the control group, indicating the positive effect of the intervention on their QoL.
The study also compared the intervention and control groups, with the intervention group receiving additional respiratory care bundle training and the control group receiving only an education program. The results showed significant differences in all SGRQ-C components and total scores post-intervention between the two groups, indicating that the intervention positively affected participants' QoL. Specifically, the intervention led to a reduction in COPD symptoms, an improvement in participants' activities, and a reduction in the impact of COPD on their daily life and health status. These findings are consistent with previous research. [11] Similarly, recent studies have highlighted the effectiveness of deep breathing exercises in reducing dyspnea symptoms and improving ventilation and QoL in COPD patients. [12,14,16,31] The present study's findings were also supported by previous studies indicating that the respiratory care bundle effectively reduces dyspnea among bronchial asthma patients. [8,11,16,17,20] Together, these studies suggest that nonpharmacological interventions such as respiratory bundle care, incentive spirometry, and deep breathing exercises can improve respiratory function and QoL among COPD and bronchial asthma patients.
Limitation and recommendations
COPD is a widespread respiratory condition affecting millions worldwide, resulting in reduced QoL and increased healthcare utilization. [21,22] This study proposes several recommendations and implications for healthcare providers to improve COPD patient outcomes and reduce healthcare costs.
First, healthcare providers should receive training and education on respiratory care bundles to effectively educate their COPD patients on the importance of these interventions. Patients should also be encouraged to practice respiratory care bundles to improve their QoL and reduce healthcare utilization. Training and educational materials, such as online resources, workshops, and conferences, should be available to encourage healthcare providers to provide this education. Healthcare providers should also be incentivized to provide high-quality care through financial incentives or recognition. Furthermore, it is recommended to conduct randomized controlled trials to evaluate the effectiveness of an educational intervention for healthcare providers on respiratory bundle care implementation and patient outcomes.
Investigating the long-term effects of implementing respiratory bundle care on patient outcomes and healthcare costs, and exploring the potential role of telehealth in delivering respiratory bundle care and its impact on patient outcomes, are also important research areas. Additionally, it is essential to investigate the acceptability and effectiveness of implementing respiratory bundle care in different healthcare settings and patient populations. [3,10,22] Furthermore, implementing respiratory bundle care requires a multidisciplinary approach and collaboration between healthcare providers and patients. Patients should be empowered to practice respiratory care bundles and play an active role in managing their COPD. Healthcare organizations should prioritize implementing respiratory bundle care and provide resources for training and education to improve patient outcomes and reduce healthcare costs. [2,7,32] Conducting qualitative studies to explore the barriers to and facilitators of implementing respiratory bundle care in clinical practice, from the perspective of healthcare providers and COPD patients, can help identify challenges and opportunities for improving its implementation.
Conclusion
This study provides several recommendations and implications for healthcare providers to improve COPD patient outcomes and reduce healthcare costs by implementing respiratory care bundles. Healthcare providers should receive training and education on respiratory care bundles to effectively educate their COPD patients on the importance of these interventions. Patients should also be encouraged to practice respiratory care bundles to improve their QoL and reduce healthcare utilization. Implementing respiratory bundle care requires a multidisciplinary approach and collaboration between healthcare providers and patients, and healthcare organizations should prioritize its implementation and provide resources for training and education to improve patient outcomes and reduce healthcare costs.
Table 3 : The effectiveness of interventional programs on improving COPD patients' health-related outcomes for the intervention and controlled groups
Statistically significant values are bolded. η²: partial eta squared.
Table 2 : Comparison of pre-intervention and post-intervention measurements between the intervention and controlled groups
Statistically significant values are bolded. SD: standard deviation.
DAP5 enables translation re-initiation on structured messenger RNAs
Half of mammalian transcripts contain short upstream open reading frames (uORFs) that potentially regulate translation of the downstream coding sequence (CDS). The molecular mechanisms governing these events remain poorly understood. Here we find that the non-canonical initiation factor Death-associated protein 5 (DAP5 or eIF4G2) is selectively required for re-initiation at the main CDS following uORF translation. Using ribosome profiling and luciferase-based reporters coupled with mutational analysis we show that DAP5-mediated re-initiation occurs on messenger RNAs (mRNAs) with long, structure-prone 5′ leader sequences and persistent uORF translation. These mRNAs preferentially code for signalling factors such as kinases and phosphatases. We also report that cap/eIF4F- and eIF4A-dependent recruitment of DAP5 to the mRNA facilitates re-initiation by unrecycled post-termination 40S subunits. Our study reveals important mechanistic insights into how a non-canonical translation initiation factor involved in stem cell fate shapes the synthesis of specific signalling factors.
Introduction
Translation initiation directs the ribosome to the start codon of messenger (m)RNAs.
Binding of the eukaryotic initiation factor (eIF) 4F complex to the 5′ cap m7GpppX structure, present in all eukaryotic mRNAs, triggers the vast majority of translation initiation events (Topisirovic et al., 2011). The eIF4F complex consists of three subunits: eIF4G, eIF4A and eIF4E. As the scaffolding factor of the complex, eIF4G bridges the cap-binding protein eIF4E and the ATP-dependent RNA helicase eIF4A which unwinds secondary structures within the 5′ untranslated region (UTR) of the mRNA (Pelletier and Sonenberg, 2019). In addition, eIF4G associates with the poly(A) binding protein (PABP) to link the 5′ and 3′ ends of the mRNA, and recruits the 43S preinitiation complex (PIC; 40S ribosomal subunit bound to the eIF2:GTP:Met-tRNAi(Met) ternary complex, eIF3, eIF1 and eIF1A) to capped mRNAs (Hashem and Frank, 2018; Merrick and Pavitt, 2018). PIC scanning along the 5′ UTR to the AUG start codon and joining of the 60S ribosomal subunit determines the assembly of an elongation-committed 80S ribosome (Merrick and Pavitt, 2018).
In mammalian cells, three related eIF4G proteins regulate initiation of translation.
Multiple mechanisms have been proposed to explain how DAP5 engages in translation initiation. Most studies indicate that DAP5 stimulates translation using internal ribosome entry sites (IRESes) or cap-independent translation enhancers (CITEs) present in the 5′ UTR of specific mRNAs when cells are exposed to conditions that hinder capdependent initiation (Haizel et al., 2020;Henis-Korenblit et al., 2002;Henis-Korenblit et al., 2000;Hundsdoerfer et al., 2005;Lee and McCormick, 2006;Lewis et al., 2008;Liberman et al., 2009;Marash et al., 2008;Weingarten-Gabbay et al., 2014). Alternatively, DAP5 was proposed to initiate non-canonical translation via the assembly of cap-bound complexes with proteins other than eIF4E (Bukhari et al., 2016;de la Parra et al., 2018). As DAP5 is an ubiquitously expressed and abundant protein known to control the expression of genes required for stem cell differentiation and embryonic development (Nousch et al., 2007;Sugiyama et al., 2017;Takahashi et al., 2020;Yamanaka et al., 2000;Yoffe et al., 2016;Yoshikane et al., 2007), understanding its mode of action is important for elucidating the mechanisms that drive non-canonical translation.
Upstream open reading frames (uORFs) are prevalent and translated in the 5′ UTRs (hereafter 5′ leaders) of mammalian mRNAs (Bazzini et al., 2014;Chen et al., 2020;Fritsch et al., 2012;Ingolia, 2014;Ingolia et al., 2011;Lee et al., 2012). Expression of downstream and main coding sequences (CDSes) requires scanning of the PIC past the uORFs (leaky scanning) or re-initiation by unrecycled ribosomal complexes after uORF translation (Jackson et al., 2012). Despite the regulatory roles often attributed to uORFs in gene expression and disease (Barbosa et al., 2013), the mechanisms and proteins involved in uORF and main CDS translation in eukaryotic mRNAs are incompletely understood. Here, we describe DAP5 as a non-canonical factor that enhances translation of capped mRNAs via re-initiation. DAP5 is crucial for the translation of particular CDSes after uORF translation in transcripts with structured 5′ leaders. Together with eIF4A, DAP5 regulates the translation of transcripts encoding signalling and regulatory factors with important roles in stem cell and cancer biology, such as kinases and phosphatases. Our findings reveal an unexpected role for a member of the eIF4G family of proteins in the control of translation in human cells.
DAP5 mediates the synthesis of signalling proteins
To study the function of DAP5 in translation, we determined the translational landscape of CRISPR-Cas9 engineered DAP5-null and wild type (WT) HEK293T cells (Figure S1A-C). After isolation and sequencing of ribosome-protected mRNA fragments (ribosome profiling or Ribo-Seq) and matched transcriptome analysis (RNA-Seq), we determined genome-wide transcriptional and translational changes (Figures 1B and S1D) (Ingolia et al., 2011;Zhong et al., 2017). The Ribo-Seq and RNA-Seq experiments were reproducible as replicates clustered together (Figures S1E, F).
In the absence of DAP5, a group of genes, hereafter referred to as DAP5 targets, showed a significant reduction in translation efficiency (TE; ribosome occupancy/mRNA abundance) (Figure 1B; n=306, red). Although the majority of DAP5 target transcripts were more abundant, the number of ribosomes per mRNA (footprints or occupancy) decreased in the null cells (Table S1). Other translatome-associated differences were found in a small cohort of mRNAs with increased TE in the null cells (n=23, blue; Figure 1B and Table S1).
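Translation efficiency as defined here is ribosome footprint density divided by mRNA abundance, and target calls compare log2 TE between genotypes. A minimal sketch of that ratio (all counts invented; real Ribo-Seq pipelines first normalize for library size and CDS length):

```python
from math import log2

def translation_efficiency(footprint_count: float, mrna_count: float) -> float:
    """TE = ribosome occupancy / mRNA abundance (normalized counts assumed)."""
    return footprint_count / mrna_count

# Hypothetical normalized counts for one DAP5 target:
# the mRNA is *more* abundant in the null cells, yet footprints drop, so TE falls.
te_wt   = translation_efficiency(footprint_count=900, mrna_count=300)  # 3.0
te_null = translation_efficiency(footprint_count=450, mrna_count=450)  # 1.0
delta_log2_te = log2(te_null) - log2(te_wt)
print(round(delta_log2_te, 2))  # -1.58 (reduced TE despite higher mRNA levels)
```

This illustrates why ribosome occupancy alone is not informative: only the occupancy-to-abundance ratio separates translational regulation from changes in transcript levels.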
In addition to translatome-only changes, we also observed pronounced differences in transcript abundance in the null cells (n=3537; Figure S1D and Table S1). These differences may result from effects on transcription and/or mRNA turnover following DAP5 depletion.
Notably, DAP5 targets included mRNAs encoding proteins involved in cell signalling, or cellular response to stimuli, such as the serine/threonine-protein kinases WNK1 [With-No-Lysine (K) 1] and ROCK1 (Rho-associated protein kinase 1), the RAC-alpha serine/threonine-protein kinase AKT1 or the phosphatidylinositol 3,4,5-trisphosphate 5-phosphatase 2 (INPPL1, a.k.a. SHIP2), among others (Figure 1C, Table S1). Accordingly, WNK1, ROCK1 and INPPL1 protein levels assessed by immunoblotting were diminished in the absence of DAP5 (Figure 1D). Importantly, decreased protein synthesis in the null cells was not caused by deficiency in the expression of eIF4E, eIF4G, eIF4A and PABP (Figure S1C), or changes in global translation (Figure S1G). With the exception of a reproducible increase in the free 40S subunit peak, polysome profiles of DAP5-null cells after sucrose density gradient separation were similar to those of wild type cells (Figure S1G-I). However, the association of WNK1 and ROCK1 mRNAs with polysomes, but not GAPDH, shifted from the heavy to the light polysome fractions (Figures S1J-L, lanes 16-18 vs. 12-15) in the absence of DAP5. These results indicate that the translational efficiency of a specific subset of transcripts is regulated by DAP5.
DAP5 targets 5′ leaders have unique features
In addition to the differences in TE, we also observed qualitative changes in the pattern of ribosomal occupancies (footprints) in DAP5 target mRNAs. Ribosome occupancy at the annotated (main) CDSes was markedly decreased in the absence of DAP5. Moreover, ribosome footprints were skewed towards the 5′ leaders of these transcripts (Figures 1E and S2). Estimation of footprint density (RFP) in all 306 target mRNAs revealed that despite the reduction of footprints in the CDSes, translation was increased on the 5′ leaders in cells lacking DAP5 (Figure 1F), as measured by the ratio of footprints within the 5′ leader relative to the footprints at the annotated downstream CDS start codon. Increased translation in the 5′ leaders of the DAP5 target mRNAs occurred at uORFs as reflected by experimentally determined initiation-site profiling in cells treated with harringtonine (HRT) and lactimidomycin (LTM) (Lee et al., 2012) (Figures 1E, G). In the presence of these drugs, ribosomes accumulate at the start codons but are allowed to complete elongation over the rest of the CDS (Ingolia et al., 2011; Lee et al., 2012). The majority of the DAP5 targets had multiple uORFs in the 5′ leader, with a median length of 26 codons, that frequently initiate at near-cognate start codons (CUG, GUG, UUG and AUC) in addition to the conventional AUG (Figures 1H-J). For instance, WNK1, a regulator of development and WNT signalling (Rodan and Jenny, 2017), exhibited increased ribosome occupancy in two GUG (one of which is in frame with the main CDS), two CUG, two UUG, one AUG and one ACG uORF (Figure 1E).
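Near-cognate uORF starts like those in the WNK1 leader can be located with a simple codon scan over the 5′ leader. The sketch below is a toy illustration, not the actual WNK1 sequence, and the start-codon set is taken from the codons named in the text (AUG plus CUG, GUG, UUG, AUC and ACG).

```python
# Canonical AUG plus the near-cognate start codons mentioned in the text
NEAR_COGNATE_STARTS = {"AUG", "CUG", "GUG", "UUG", "AUC", "ACG"}

def find_uorf_starts(leader: str) -> list[tuple[int, str]]:
    """Return (position, codon) for every canonical or near-cognate start
    codon in a 5' leader, scanning every position (all three frames)."""
    leader = leader.upper().replace("T", "U")  # accept DNA or RNA input
    return [(i, leader[i:i + 3])
            for i in range(len(leader) - 2)
            if leader[i:i + 3] in NEAR_COGNATE_STARTS]

# Toy 5' leader containing one CUG and one AUG start
hits = find_uorf_starts("GGCACUGCCAUGGG")
print(hits)  # [(4, 'CUG'), (9, 'AUG')]
```

In a real analysis, each candidate start would additionally be checked against the HRT/LTM initiation-site footprints rather than called from sequence alone.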
These observations suggest that DAP5 mediates CDS, but not uORF, translation. Close inspection of the RFP profiles revealed that cap-proximal uORF translation is DAP5-independent (increased RFPs in null cells), whereas downstream uORFs and the CDS are translated in a DAP5-dependent manner (decreased RFPs in null cells) (Figures 1E, F and S2).
Further analysis of the 5′ leader sequences of DAP5 target mRNAs also showed increased length, high GC-content and decreased minimum free energy (Figure S3A-C). In addition, DAP5-independent uORFs tend to concentrate in the regions of the 5′ leaders adjacent to high predicted propensity for structure (Figures 1E and S2). The increased complexity of the 5′ leader sequences of DAP5 target mRNAs was also associated with decreased TE of the main CDS (Figure S3D). These findings indicate that the 5′ leaders of DAP5 targets likely form structured elements that might define positional information for DAP5-dependent translation.
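Of the leader features named above, GC content is trivial to compute directly from sequence (minimum free energy, by contrast, requires an RNA-folding tool such as ViennaRNA, so only GC content is sketched here, on invented toy leaders):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C nucleotides in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy leaders: a GC-rich, structure-prone leader vs. an AU-rich one
structured = "GCGGCCGCGGGCCGCG"
unstructured = "AUUAAUAUAUUAAUUA"
print(round(gc_content(structured), 2), round(gc_content(unstructured), 2))  # 1.0 0.0
```

High GC content correlates with lower (more negative) folding free energy because G:C pairs are more stable than A:U pairs, which is why GC-rich leaders are more likely to form the structured elements discussed here.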
DAP5-dependent translation requires the MIF4G and the W2 domains of the protein
To elucidate the molecular details of DAP5-dependent translation, we first tested if translation of the R-LUC reporters was influenced by changes in DAP5 protein sequence. We then asked if overexpression of full length (FL) or N-terminally truncated eIF4G (lacking the PABP and eIF4E binding sites; eIF4G ΔN) (Figure 2D) would suffice to translate the R-LUC reporters in the absence of DAP5. Curiously, neither protein was able to re-establish R-LUC activity (Figures 2A-D), indicating that WNK1, ROCK1 and AKT1 5′ leaders drive translation of the main CDS in a DAP5-specific manner.
Lastly, we also used DAP5 chimeric proteins where the MIF4G, MA3 or W2 domains were swapped with the respective eIF4G domains ( Figure 2D). Relative to the re-expression of DAP5 (FL), the chimeras were unable to fully restore R-LUC luminescence in the null cells, with the MIF4G and W2 chimeras showing the strongest defects on R-LUC translation (Figures 2A-D). These findings reinforce the notion that all domains of DAP5, and their specific interactions, are necessary for efficient translation of the R-LUC reporters with DAP5 target 5′ leaders. All DAP5 protein constructs were expressed at similar levels and reporter mRNA levels were not altered between the conditions ( Figure S3E-M).
Only eIF4A-bound DAP5 can interact with mRNA
To investigate the recruitment of DAP5 to target mRNAs, we performed RNA-pulldown assays and RT-qPCR. In contrast to V5-SBP-MBP, V5-SBP-DAP5 efficiently associated with WNK1 and ROCK1 mRNAs, but not GAPDH (Figure 2E-H). The association of DAP5 with the targets was abolished by mutations on the eIF4A-binding region (eIF4A*; Figure 2E, F), suggesting that eIF4A mediates mRNA binding. Consistent with this idea, the MIF4G domain of DAP5 was sufficient to pull down WNK1 and ROCK1 mRNAs, either alone (MIF4G) or when present in other DAP5 constructs (MA3 chimera and W2 chimera; Figure 2E-G). The DAP5 MIF4G was also specifically required for mRNA binding, as substitution by the respective domain in eIF4G, which is 39% sequence identical (Virgili et al., 2013), prevented DAP5 recruitment (Figure 2E-G). Consistent with a role of eIF4A in mRNA binding, we observed that one third of DAP5 targets (n=102; Figure S4C; Table S2) showed experimentally determined Rocaglamide A (RocA) sensitivity (Iwasaki et al., 2016).
RocA is a translation inhibitor that clamps eIF4A onto polypurine sequences on the mRNA (Iwasaki et al., 2016). RocA-sensitive mRNAs, such as WNK1, show decreased RFP density at the CDS and premature uORF translation in the presence of the drug ( Figure S4D, E) (Iwasaki et al., 2016). These findings indicate that DAP5 specifically interacts with transcripts when in complex with eIF4A. In the absence of an interaction with the RNA helicase, binding of DAP5 to the target, and thus translation, is compromised (Figures 2A-C).
The interaction of DAP5 ΔW2 with WNK1 and ROCK1 transcripts was comparable to the wild type protein ( Figures 2E, F). This result suggests that impaired translation of the R-LUC reporters in null cells upon DAP5 ΔW2 expression (Figures 2A-C) is unrelated to target binding and is most likely associated with the function of the W2 domain in the initiation of translation.
All proteins were expressed at equivalent levels and did not alter mRNA input levels (Figures 2E-G, input panels, H).
Altogether, our findings show that both eIF4G and DAP5 bind to WNK1 and ROCK1 mRNAs. Whereas the interaction of DAP5 with the mRNA is specific and reliant on eIF4A, eIF4G binds to all capped mRNAs as part of the eIF4F complex. Once on the mRNA, DAP5 mediates the synthesis of WNK1 and ROCK1 proteins (main CDS) but is dispensable for the translation of cap-proximal uORFs (Figures 1E, F). Translation of the latter is most likely eIF4G- and eIF4F-dependent. Thus, initiation of translation along the structured 5′ leaders of DAP5 targets switches from a DAP5-independent, eIF4F-dependent mechanism to a DAP5- and eIF4A-dependent mechanism.
DAP5-mediated translation is cap-dependent
Given that the 5′ leaders of DAP5 targets contain structured elements that may represent IRESes promoting DAP5 recruitment, we blocked cap/eIF4F-dependent translation via the overexpression of an engineered eIF4E-binding protein (4E-BP) (Peter et al., 2015a) and tested binding of DAP5 to WNK1- and ROCK1-R-LUC mRNAs. As shown in cap-based pulldowns, overexpressed 4E-BP bound to eIF4E and abolished its interaction with eIF4G (Figure 3A), thus suppressing cap/eIF4F-dependent translation. Notably, binding of V5-SBP-DAP5 to the transcripts was suppressed in the presence of the 4E-BP (Figures 3B, C). The overexpressed proteins were pulled down at comparable levels in the different experimental conditions (Figure 3D). These findings suggest that binding of DAP5 to mRNA is cap/eIF4F-dependent and not mediated by an IRES-dependent mechanism.
We then generated two cap-proximal truncations in WNK1-R-LUC mRNA (Δ1 and Δ2; Figure 3E). These truncations partially (Δ1) or completely (Δ2) removed the sequence in the 5′ leader containing the cap-proximal uORFs translated independently of DAP5. Both truncations reduced the abundance of the reporter, and consequently R-LUC activity ( Figure S4F-H), suggesting they might affect mRNA stability and/or transcription. To assess changes only in translation, we determined the protein/mRNA ratios (TE) for the WNK1-R-LUC Δ1 and Δ2 reporters. We observed that despite low mRNA levels, the Δ1 reporter was still translated with the same efficiency, even in the absence DAP5 ( Figure 3F and S4I). WNK1-R-LUC Δ2 mRNA was not translated. These findings show that the cap-proximal region of the 5′ leader is critical for DAP5-mediated translation. This result also indicates that the downstream structured region is not sufficient to determine the translation of the main CDS.
Since cap-proximal uORFs are translated in the null cells ( Figure 1D), we conclude that DAP5 acts downstream of cap-dependent translation initiation, i.e., DAP5 drives translation of the main CDS in mRNAs where eIF4F-loaded ribosomes translate uORFs.
DAP5 enables re-initiation after uORF translation
Our data suggest that the uORFs located in the 5′ leaders of DAP5 targets serve an important role in the translation of the main CDS by DAP5. To characterize the functionality of uORFs in the regulation of DAP5-dependent translation, we transfected wild-type and null cells with versions of the WNK1-R-LUC reporter containing altered uORF features.
The WNK1 transcript contains at least eight uORFs, one of which (uORF2) is in frame with the AUG of the main CDS. uORF2 is translated in the absence of DAP5, initiates from a GUG start codon (uGUG) and is 22 codons in length ( Figures 1E and 4A). We optimized the initiation sequence context of uORF2 by mutating the uGUG to conventional AUG (uORF2+; Figure 4A). Based on the proximity to the 5′ end, the uAUG should play a dominant role in start codon recognition and in the initiation of translation (Kozak, 2002).
Interestingly, WNK1-R-LUC-uORF2+ mimicked the reporter with the natural 5′ leader of WNK1. To confirm that the uAUG was used in the initiation of translation, we removed all STOP codons, and consequently all uORFs, in frame with the AUG of R-LUC (3 in total) (NO STOP; Figure 4A). In the WNK1-R-LUC-NO STOP reporter, the uAUG was the only initiating codon and originated an R-LUC protein with an extended N-terminal region (70 kDa instead of 35 kDa; Figure 4C, lanes 13-15 vs 1-3). Although the protein levels of the two forms of the luciferase were similar (Figure 4C), the N-terminal extension reduced R-LUC activity (Figure 4B). Moreover, the activity of the long R-LUC was similar in wild-type and null cells, indicating that its translation was not mediated by DAP5 (Figure 4B).
We also generated reporters where only one (ΔSTOP1) or two (ΔSTOP1+2) of the three STOP codons were removed. Single and double deletions of these STOPs increase the size of uORF2 to 188 or 229 codons, respectively (Figure 4A). In these settings, R-LUC translation was abolished (Figures 4B, C), indicating that DAP5-dependent translation of the main CDS is influenced by uORF length, as only the short uORF2 (22 codons in length) permitted R-LUC translation. To test the maximum uORF2 length supporting DAP5-mediated translation of R-LUC, we extended the position of its STOP to 29, 39 or 49 codons downstream of uAUG (Figure S5A). Although all tested reporters sustained DAP5-dependent translation of R-LUC, the expression and activity of the luciferase was inversely correlated with uORF2 length (Figures S5B, C). None of the observed differences could be explained by varying mRNA levels (Figures 4D, E and S5D, E). The fact that R-LUC synthesis is only observed in the presence of short uORFs shows that DAP5 drives re-initiation of translation.
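The uORF features manipulated in these reporters (start codon identity, the position of the in-frame STOP, and uORF length in codons) can be annotated with a simple scanning routine. A minimal sketch; the leader sequence is hypothetical, not the actual WNK1 5′ leader:

```python
# Sketch: scan a 5' leader for AUG and near-cognate start codons and report,
# for each uORF that terminates within the leader, its start position, start
# codon and length in codons (STOP excluded). Illustrative sequence only.
STARTS = {"ATG", "GTG", "CTG", "TTG", "ACG"}  # start codons named in Figure 1E
STOPS = {"TAA", "TAG", "TGA"}

def find_uorfs(leader):
    leader = leader.upper().replace("U", "T")
    uorfs = []
    for i in range(len(leader) - 2):
        if leader[i:i + 3] in STARTS:
            # walk in frame until the first STOP codon
            for j in range(i + 3, len(leader) - 2, 3):
                if leader[j:j + 3] in STOPS:
                    uorfs.append((i, leader[i:i + 3], (j - i) // 3))
                    break
    return uorfs

print(find_uorfs("AAGTGAAATAAccATGAAATGA"))  # -> [(2, 'GTG', 2), (13, 'ATG', 2)]
```

A start codon without an in-frame STOP inside the leader (as in the ΔSTOP reporters, where translation runs into the main CDS) simply yields no uORF entry in this scheme.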
We also examined the ability of DAP5 to prime re-initiation of translation without altering uORF initiation context. In the WNK1 5′ leader, uORF6 initiates with a conventional AUG (uAUG), is translated in a DAP5-dependent manner and is in frame with a UGA STOP codon.

Lastly, to exclude the possibility that DAP5 promotes translation initiation on downstream CDSes using PICs that scan past the uORFs, we determined the changes in R-LUC translation if the natural 5′ leader lacks all STOP codons in frame with the main CDS (ΔSTOP IF; Figure 5A). When transfected into cells, this reporter originated R-LUC proteins with different sizes (with and without a distinct N-terminus), as observed by immunoblotting (Figure 5D, lane 3). The synthesis of several N-terminally extended versions of R-LUC indicates that the PICs scanning the 5′ leader initiate translation at different upstream start codons, as expected from leaky scanning at near-cognate start codons. In the null cells, though, expression of short R-LUC (35 kDa), but not of the majority of the long R-LUCs with N-terminal extensions, was diminished (Figure 5D, lane 4). We also removed all STOPs from the 5′ leader of WNK1 (ΔSTOP all F; Figure 5A). In this reporter devoid of uORFs, translation can initiate at multiple start codons, but does not terminate before the main CDS encoding R-LUC. In cells, long R-LUC proteins were expressed independently of DAP5 (Figure 5D, lanes 5 and 6). Even though engineering of the WNK1 5′ leader sequence was performed without introducing new start codons or altering the initiation contexts of the different uORFs (see methods section for details), removal of STOP codons changed start codon recognition, as shown by the presence of long R-LUC proteins with distinct sizes (Figure 5D, lanes 5, 6 vs 3, 4). Notably, short R-LUC was not synthesized in wild-type or null cells.
These results have several implications. First, translation of the main CDS (R-LUC) in the context of WNK1 5′ leader is DAP5-dependent. Second, initiation at the main AUG only occurs after uORF translation. Third, DAP5 is critical for re-initiation of translation at the main CDS. Lastly, in the WNK1 5′ leader the PICs are unable to scan until the main AUG, signifying that DAP5 reutilizes the ribosomal complexes involved in uORF translation.
Simultaneous uORF and main CDS translation in the DAP5 targets
The luciferase-based reporters used in the previous experiments suggest that uORF translation is pervasive and necessary for DAP5-dependent translation of the main CDS.
However, in these experiments we are unable to detect the synthesis of uORF-derived peptides, and therefore confirm uORF translation. To simultaneously detect and quantify uORF and main CDS translation, we adopted a split-fluorescent protein approach using mNeonGreen2 (mNG2) that expresses the yellow-green-coloured protein in two fragments: mNG2 1-10 and mNG2 11 . mNG2 1-10 originates a non-fluorescent mNG2 due to the lack of 11 th β-strand; however, upon co-expression with mNG2 11 (16-aa peptide), the two fragments assemble a functional mNG2 molecule (Chen et al., 2020;Feng et al., 2017;Leonetti et al., 2016). The uORF2 (22 aa) in the WNK1 5′ leader was replaced with the mNG2 11 CDS initiating with an uAUG. In addition, the main CDS encoded the EBFP (enhanced blue fluorescent protein) ( Figure 6A). The split-fluorescent reporters were transfected into wild type and DAP5-null cells together with a transfection control expressing mCherry.
The non-overlapping excitation and emission spectra of the three fluorophores allowed their simultaneous detection by flow cytometry (Figure S6A-D). Expression of the mNG2 plasmids in trans did not generate a yellow-green signal (Figure S6A). Co-expression of the two mNG2 plasmids generated the fluorescent signal in up to 9% of the cells (Figure S6B). The complementation efficiency of the split-mNG2 system was low compared to the transfection efficiency in HEK293T cells (~50% in wild-type cells and ~36% in DAP5-null cells, as assessed by the number of mCherry-positive cells; Figure S6D). Consistent with a block in re-initiation following translation of long uORFs, EBFP fluorescence was reduced and DAP5-independent in cells expressing the WNK1 ΔSTOP1+mNG2 11 +EBFP reporter, which encodes a mNG2 11 peptide fused to 188 amino acids (Figures 6E-G, I, J). mNG2 expression (uORF translation) did not require DAP5 and was not disturbed by uORF length (Figure 6K). These observations confirm that short uORF translation in the 5′ leader of DAP5 targets promotes main CDS translation. Another implication of our results using different reporter systems is that uORF2 or main CDS sequences and peptides are not relevant for the re-initiation of translation by DAP5, excluding the possibility that uORF-translated peptides influence CDS expression in cis.
These experiments do not exclude, however, that WNK1 uORF-derived peptides are functional in cells.
DAP5 utilizes post-termination translation complexes
To further test translation re-initiation by DAP5, we interfered with termination by exploiting a dominant negative mutant of the release factor 1, eRF1 AAQ (Brown et al., 2015;Shao et al., 2016), to cause local translation arrest at STOP codons. eRF1 AAQ is unable to hydrolyse the peptidyl-tRNA after STOP codon recognition (Frolova et al., 1999). Cells were transfected with the WNK1-R-LUC and GFP-F-LUC in the absence or presence of increasing amounts of eRF1 AAQ and luciferase activities and expression were measured. As expected upon termination inhibition, eRF1 AAQ expression decreased R-LUC and GFP-F-LUC protein levels in a concentration-dependent manner ( Figure 7B, lanes 1-4). However, the R-LUC: F-LUC activity ratio varied if R-LUC translation was primed or not by DAP5. In the context of WNK1 5′ leader (WNK1-R-LUC and WNK1-R-LUC-uORF2+), increasing levels of eRF1 AAQ proportionally decreased R-LUC activity ( Figures 7A-C). In contrast, DAP5-independent translation of R-LUC using a reporter containing a short 5′ leader (R-LUC) or an engineered WNK1 5′ leader without STOP codons that leads to the synthesis of the N-terminally extended R-LUC (WNK1-NO STOP-R-LUC, Figure 7A), was less affected by the eRF1 AAQ mutant. In these cases, R-LUC: F-LUC ratios were constant or even increased in the presence of rising levels of the mutant release factor ( Figures 7B, C). In all the conditions, R-LUC mRNA levels remained unchanged ( Figures S6E, F). These observations suggest that inhibition of termination after uORF translation impairs DAP5-dependent re-initiation at the main CDS.
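The readout in these experiments is the R-LUC:F-LUC activity ratio across increasing eRF1 AAQ doses, expressed relative to the no-inhibitor condition. A minimal sketch of that normalization, with invented activity values:

```python
# Sketch of the dual-luciferase normalization: R-LUC activity is divided by
# the co-expressed F-LUC control, then each ratio is expressed relative to
# the first (no eRF1-AAQ) condition. All numbers are illustrative.
def normalized_rluc(rluc, fluc):
    ratios = [r / f for r, f in zip(rluc, fluc)]
    return [round(x / ratios[0], 2) for x in ratios]

# increasing eRF1-AAQ dose, left to right (hypothetical measurements)
print(normalized_rluc([100, 60, 30, 12], [100, 90, 75, 60]))
# -> [1.0, 0.67, 0.4, 0.2]
```

A DAP5-dependent reporter would show a falling profile like this one, whereas a DAP5-independent reporter would stay flat or rise, as described in the text.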
In agreement with the re-initiation model, similar findings were obtained when 60S recycling was impaired in cells expressing the WNK1-R-LUC reporters. As expected, shRNA-mediated depletion of the ATP binding cassette sub-family E member 1 (ABCE1 KD; Figure 7D) decreased the levels of free 60S subunits in cells, as judged in polysome profiles of control (scramble) or ABCE1 shRNA-treated cells after sucrose density gradient separation ( Figure 7E). In cells with low levels of ABCE1, DAP5-dependent re-initiation of R-LUC translation (WNK1-R-LUC and WNK1-uORF2+-R-LUC reporters) was pronouncedly decreased compared to DAP5-independent translation of R-LUC (R-LUC and WNK1-NO STOP-R-LUC reporters) ( Figures 7F, G). Depletion of ABCE1 did not affect the levels of the different R-LUC transcripts ( Figures S6G, H). These findings indicate that DAP5 acts on post-termination translation complexes following 40S and 60S subunits dissociation at the STOP codons of uORFs.
To test whether altered function of these factors affects re-initiation of translation by DAP5, we analysed published ribosome profiling data from DENR-depleted cells (Bohlen et al., 2020). Approximately 20% of the DAP5 targets (excluding WNK1) were dependent on DENR for efficient translation (Figure S7E, Table S3).
These results suggest that re-initiation of translation by DAP5 requires DENR activity following uORF translation in a specific group of mRNAs. As DENR/MCTS-1 and eIF2D are either non-canonical Met-tRNA i Met delivery factors or deacylated tRNA eviction factors (Bohlen et al., 2020;Skabkin et al., 2010;Vasudevan et al., 2020), this data also implies that re-initiation of translation by DAP5 utilizes deacylated tRNA-free 40S subunits bound to the mRNA.
Altogether, our work shows that DAP5 mediates re-initiation following uORF translation. The data support a model in which binding of the DAP5-eIF4A complex to 40S subunits following termination of uORF translation promotes re-initiation at the downstream CDS. The structured nature and the presence of multiple uORFs in the 5′ leaders of DAP5 targets may favour dissociation of the eIF4F complex. In this context, translation of cap-distal CDSes relies on DAP5. In the absence of this non-canonical eIF4G protein, post-termination 40S complexes are most likely prone to dissociate from intricate 5′ leaders.
Discussion
Here we reveal that DAP5 is a non-canonical factor that mediates re-initiation without affecting general cap-dependent translation. As one of the few re-initiation-specific factors described to date, DAP5 emerges as an important protein in translational control with multiple biological implications. Re-initiation-dependent transcripts are enriched for regulatory proteins such as kinases and phosphatases, implicating DAP5 in the control of cell signalling cascades that support cell proliferation and differentiation. Our data also expand the list of mRNAs in which re-initiation of translation is essential for protein synthesis.
DAP5 enhances re-initiation on mRNAs with burdened 5′ leaders
The cues for DAP5-mediated translational control are driven by information present in the 5′ leaders of its target mRNAs. Transcripts with structure-prone 5′ leaders and multiple uORFs selectively require DAP5 for proper translation of the main CDS. The long and burdened 5′ leaders restrain scanning of cap-loaded pre-initiation complexes, facilitate translation at uORFs that otherwise would be skipped, and limit main CDS translation (Kozak, 1990). Re-utilization of post-termination complexes following uORF translation is therefore instrumental to trigger the synthesis of the proteins encoded by the main CDS. In this scenario, DAP5 plays a unique role: together with eIF4A, it binds to the mRNA and enables translation re-initiation. We propose that repeated uORF translation and scanning cycles fuelled by DAP5 at the impenetrable 5′ leaders move the ribosome towards the main CDS ( Figure S7F model).
Mechanistically, DAP5 most likely replaces the function of the eIF4F complex. The intricate nature of scanning coupled with slow translation of sequence-biased (GC-rich) uORFs might gradually dissociate or reduce the activity of eIF4F along the long 5′ leaders and favour binding of DAP5 to 40S subunits. As DAP5 interacts with eIF4A, eIF3 and eIF2β (Imataka et al., 1997; Lee and McCormick, 2006; Liberman et al., 2015), its presence on the mRNA may stabilise post-termination 40S subunits, promote the recruitment of an initiator tRNA and/or stimulate a new cycle of scanning and translation. Indeed, DAP5 mutant proteins unable to associate with these initiation factors exhibited reduced ability to promote re-initiation, and eIF4G or its N-terminally truncated protein did not substitute DAP5 in null cells. Future studies will enable the detailed characterization of DAP5 functions as a re-initiation factor.
We present additional evidence that supports the proposed model for the role of DAP5 in re-initiation. Even though the poor initiation context at the uORFs led to frequent leaky scanning in the 5′ leaders of DAP5 targets, our mutational analysis of start and STOP codons showed that translation of short uORFs was mandatory for expression of the main CDS. Consistent with a role in re-initiation, DAP5 function was also sensitive to the inhibition of termination and ribosome recycling. In addition, a subset of the DAP5 targets was less translated in cells deficient for DENR, an initiation/recycling factor previously implicated in the re-initiation of translation in animal cells (Ahmed et al., 2018; Bohlen et al., 2020; Castelo-Szekely et al., 2019; Schleich et al., 2014; Vasudevan et al., 2020).
Altogether, our work reports a previously unrecognized role for DAP5 in the control of translation in human cells.
DAP5 regulates the synthesis of signalling proteins
Synthesis of developmental, regulatory and disease-relevant proteins often occurs on mRNAs with GC-and uORF-rich 5′ leaders (Fujii et al., 2017;Kozak, 1991;Renz et al., 2020;Wethmar et al., 2016). These intricate 5′ leaders are thought to limit the production of proteins that are detrimental to cells if overproduced or deregulated by imposing stringent translational control. Although the regulatory potential of these 5′ leaders has long been recognized, the molecular mechanisms enforcing translational control are largely unknown.
We find that DAP5-dependent re-initiation is required for translation of signalling-related mRNAs (Nousch et al., 2007; Sugiyama et al., 2017; Takahashi et al., 2020; Yamanaka et al., 2000; Yoffe et al., 2016; Yoshikane et al., 2007). As several DAP5 targets are known oncogenes and disease-associated genes, future investigations are required to unveil the biological and functional implications of re-initiation in pathological settings. Together with the growing evidence that defective uORF function, polymorphisms and translational reprogramming at 5′ leaders contribute to various human diseases (Barbosa et al., 2013; Sendoel et al., 2017; Wethmar et al., 2016), our work opens new directions into whether uORF translation, re-initiation and DAP5 can be exploited for future therapeutic interventions.
Our results also highlight the functional importance of 5′ leaders, uORFs and re-initiation in the regulation of gene expression. A mechanistic understanding of the influence of alternative 5′ leaders, structured elements and the increased coding capacity of the genome as a consequence of re-initiation will provide exciting findings on how cells precisely tune protein expression levels.
Resource availability
Further information and requests for resources and reagents should be directed to and will be fulfilled by Catia Igreja (catia.igreja@tuebingen.mpg.de).
Materials availability
All unique/stable reagents generated in this study are available with a completed Materials Transfer Agreement.
DNA constructs
DNA constructs used in this study are listed in the Key Resources Table and in Table S4.
All the mutants used in this study were generated by site-directed mutagenesis using the QuickChange Site-Directed Mutagenesis kit (Stratagene).
Generation of the DAP5-null cell line
Two sgRNAs targeting DAP5 were designed and cloned into the pSpCas9 vector. Serial dilutions in 96-well plates were used to isolate single-cell clones. Genomic DNA was extracted from the different clones using the Wizard SV Genomic DNA Purification System (Promega). The DAP5 locus was PCR amplified, and Sanger sequencing of the targeted genomic regions indicated two frameshift mutations in exon 10 (172 bp deletion in exon/intron 10, and a 1 bp insertion) targeted by sgDAP5-a. These mutations caused defective splicing and intron retention, as evidenced by subsequent RNA sequencing (Figure S1B). Two mutations were detected in exon 12 (1 bp insertion and 12 bp deletion) targeted by sgDAP5-b. The lack of DAP5 protein was further confirmed by western blotting (Figures 1D, S1C). RNA sequencing revealed that DAP5 transcript levels were severely reduced in the null cells compared to wild-type cells (Figure S1A), most likely as a result of nonsense-mediated decay. The following guide sequences were used: sgDAP5-a: 5'-CACGTACCTTGGCTCGTTCA-3'; sgDAP5-b: 5'-ACACCATTGGGTTCCTCGCA-3'.
Ribosome profiling and RNA sequencing
For ribosome profiling and RNA sequencing HEK293T wild type and DAP5-null cells were plated on 10 cm dishes 24 hours before harvesting (3.2 x 10 6 WT cells and 3.5 x 10 6 null cells per plate). Cells were harvested as described in (Calviello et al., 2016).
Importantly, cells were not incubated with cycloheximide before harvesting. Cycloheximide (100 μg/ml, Serva Electrophoresis) was only present in the washing and lysis buffer, as described in (Calviello et al., 2016). For total RNA sequencing, RNA was extracted using the RNeasy Mini Kit (50) (Qiagen) and processed according to the Illumina TruSeq RNA Sample Prep Kit. For ribosome profiling the original protocol (Ingolia et al., 2012) was used in a modified version also described in (Calviello et al., 2016). The ribosome profiling and total RNA sequencing pools were sequenced on an Illumina Hiseq3000 instrument. Reads originating from ribosomal RNA were removed using Bowtie2 (Langmead and Salzberg, 2012). Remaining reads of the RNA sequencing library were mapped onto the human genome using Tophat2 (Kim et al., 2013), which resulted in 15.7-20.5 million mapped reads with an overall read mapping rate >94% for the RNA sequencing experiment. Ribosome profiling reads were subjected to statistical analysis using RiboTaper, which aims at identifying actively translating ribosomes based on the characteristic three-nucleotide periodicity (Calviello et al., 2016). Reads of 29 and 30 nucleotides in length showed the best three-nucleotide periodicity and were therefore used for subsequent mapping onto the human genome. This resulted in 2.8-3.8 million mapped reads with an overall read mapping rate >95% for the ribosome profiling experiment. Read count analysis was performed using QuasR (Gaidatzis et al., 2015). Differential expression analysis was conducted using edgeR (McCarthy et al., 2012; Robinson et al., 2010). Translation efficiency (TE) was calculated using RiboDiff (Zhong et al., 2017).
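Conceptually, the translation efficiency computed here is the ratio of ribosome footprint density to mRNA abundance for each gene (RiboDiff adds a statistical model on top of this). A toy sketch with hypothetical library-size-normalized counts:

```python
# Sketch: translation efficiency (TE) as the footprint/mRNA read-count ratio
# after library-size normalization. Gene counts below are invented and do
# not correspond to the measured data in the paper.
def translation_efficiency(rfp_counts, rna_counts):
    rfp_total = sum(rfp_counts.values())
    rna_total = sum(rna_counts.values())
    return {g: round((rfp_counts[g] / rfp_total) / (rna_counts[g] / rna_total), 2)
            for g in rfp_counts}

rfp = {"WNK1": 50, "GAPDH": 450}   # ribosome footprint reads (hypothetical)
rna = {"WNK1": 100, "GAPDH": 300}  # total RNA reads (hypothetical)
print(translation_efficiency(rfp, rna))  # -> {'WNK1': 0.4, 'GAPDH': 1.2}
```

A TE below 1 indicates a transcript translated less efficiently than its abundance would suggest, matching the decreased CDS TE reported for DAP5 targets in null cells.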
Harringtonine and LTM datasets from human HEK293 cells were downloaded from the Sequence Read Archive database (accession: SRA056377). RocA and DENR datasets were retrieved from the GEO database accession numbers GSE70211 and GSE134020, respectively. Ribosomal RNA reads were filtered using Bowtie 2 (Langmead and Salzberg, 2012). The remaining reads were mapped on the hg19 (UCSC) human genome or the mm9 (UCSC) mouse genome with TopHat2 (Kim et al., 2013). No specific filters for read length were applied.
Analysis of GO terms and nucleotide compositions
Upregulated and downregulated gene groups were defined as being significantly deregulated (FDR<0.005) with a logFC>0 and logFC<0, respectively. No cut-off of the logFC value was applied so that genes with little but significant changes could also be detected. GO analysis was performed with the R based package goseq (Young et al., 2010).
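The classification rule described above (FDR < 0.005 with the sign of logFC, and no fold-change cut-off) can be sketched in a few lines; gene names and values here are illustrative, not the study's results:

```python
# Sketch: classify genes as up- or downregulated by logFC sign at FDR < 0.005,
# with no magnitude cut-off, as described in the text. Rows are hypothetical
# (gene, logFC, FDR) tuples.
def classify(genes, fdr_cutoff=0.005):
    up = [g for g, lfc, fdr in genes if fdr < fdr_cutoff and lfc > 0]
    down = [g for g, lfc, fdr in genes if fdr < fdr_cutoff and lfc < 0]
    return up, down

rows = [("WNK1", -1.2, 1e-4), ("ROCK1", -0.3, 1e-3),
        ("HSPA5", 0.8, 2e-3), ("GAPDH", 0.1, 0.4)]
print(classify(rows))  # -> (['HSPA5'], ['WNK1', 'ROCK1'])
```

Omitting the fold-change cut-off, as the authors note, keeps genes with small but statistically significant changes in the analysis.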
For analysis of 5′ leader nucleotide composition, the respective mRNA sequences were fetched using biomaRt (Durinck et al., 2005;Durinck et al., 2009). Analysis of GC content and length of 5′ leader was performed with R based scripts.
RNA structures were calculated using the ViennaRNA package 2.0 (Lorenz et al., 2011). Metagene analysis was performed using the Deeptools suite of functions (Ramirez et al., 2016). For uORF number, size and start codon analysis the accumulation of ribosome footprint on start codons was assessed using the ribosome profiling dataset in HEK293 cells treated with harringtonine (Lee et al., 2012). Identity of the start codon and the corresponding STOP codon was manually assigned.
Transfections, northern and western blotting
In the rescue assays described in Figures 2, 3B, 4, 5 and 6, 0.64 x 10 6 WT cells or 0.7 x 10 6 null cells were seeded in 6-well plates and transfected using Lipofectamine 2000. Cells were harvested two days after transfection and firefly and Renilla luciferase activities were measured using the Dual-Luciferase reporter assay system (Promega). Primers used for qPCR are listed in Table S4. Normalized transcript expression ratios from three independent experiments were determined using the Livak method (Livak and Schmittgen, 2001).
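The Livak method referenced here computes relative expression as 2^-ΔΔCt: Ct values are first normalized to a reference gene, then to the control condition. A minimal sketch with invented Ct values:

```python
# Sketch of the Livak (2^-ddCt) relative-quantification method:
# dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control);
# fold change = 2^-ddCt. The Ct values below are illustrative only.
def fold_change(ct_target_ctrl, ct_ref_ctrl, ct_target_treat, ct_ref_treat):
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dct_treat = ct_target_treat - ct_ref_treat
    return round(2 ** -(dct_treat - dct_ctrl), 2)

# target appears 2 cycles later in the treated sample -> 4-fold decrease
print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 0.25
```
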
Polysome fractions were collected using the Teledyne Isco Density Gradient Fractionation System.
To isolate RNA from sucrose fractions, samples were first digested with proteinase K (Sigma Aldrich; 1% of the sample volume; 100 mg/ml in 50 mM Tris-HCl, pH 8.1). Fractions were then reverse transcribed and analysed by RT-qPCR.
RNA pulldown
For the RNA pulldown, 3 x 10 6 WT HEK293T cells were plated in 10 cm plates. cDNA was prepared and analysed using qPCR (5% of the cDNA), as described above. The list of primers used for the qPCR experiments can be found in Table S4.
Pulldown assays
Pulldown assays were performed in the presence of RNase A as described previously (Peter et al., 2015). HEK293T cells were grown in 10 cm dishes and transfected as described above.
Quantification and statistical analyses
Figures 1B, S1D. Upregulated and downregulated genes were identified using log2 fold change (FC) between null and control cells > 0 or < 0, respectively, and False Discovery Rates (FDR) < 0.005. Figure 1C. The quantitative value represented in the graphs corresponds to -log10(q-value) determined by the GOseq analysis tool (Young et al., 2010). (E) Ribosome footprint and total mRNA read distributions along WNK1 exon 1, including the 5′ leader and the most 5′-proximal coding sequence, in WT and DAP5-null cells. Also shown are the ribosome footprint profiles (RFPs) in HEK293 cells treated with harringtonine (HRT) and lactimidomycin (LTM) obtained by Lee and co-workers (Lee et al., 2012). The predicted propensity for secondary structure across the WNK1 5′ leader, determined using the ViennaRNA package 2.0 (Lorenz et al., 2011), is illustrated in orange in the mRNA panel.
uORF positions in the 5′ leader are indicated with the corresponding start codons (GUG, CUG, UUG, ACG or AUG). Start codons highlighted in green are in frame with the AUG at the main annotated coding sequence of WNK1. Gene annotation is depicted below the profiles.
DAP5-independent and -dependent translation is indicated with a black dashed line. CDS: coding sequence. See also Figure S2.
Evaluation of the Absorption of Methionine Carried by Mineral Clays and Zeolites in Porcine Ex Vivo Permeability Models
Supplemental dietary amino acids (AAs) need to be provided in a form that prevents their degradation along the gastrointestinal tract to guarantee their high bioavailability and bioactivity. In this study, methionine (Met) protected via organo-clay intercalation (natural carriers) has been developed as a sustainable alternative to polymeric coating. Specifically, two different bentonite-zeolite-based mineral clays were tested, Adsorbene (ADS) and BioKi (BIO). Briefly, 1 g of the carrier (ADS or BIO) was contacted with 50 mL of an aqueous solution at a pH of 3.0, 5.8, and 8.9. Solid-liquid separation was conducted. The released Met in the liquid phase was analysed by Chemical Oxygen Demand, while residual Met in the solid phase was analysed by Fourier Transform Infra-Red (FT-IR) spectroscopy. The effect of the Met-ADS complex on cell viability was tested on IPEC-J2 cells incubated for 3 h with 2.5 mM Met-ADS. Jejunum segments obtained from entire male pigs (Swiss Large White, body weight 100 ± 5 kg) were used as ex vivo models to compare the absorption of 2.5 mM Met released by ADS with 2.5 mM free Met and its influence on epithelial integrity in perfusion Ussing chambers. The carriers released a very low amount of Met, and the Met-BIO interaction was stronger than Met-ADS. The maximum release of Met was at pH 3, with 3% and 6% of Met released from Met-BIO and Met-ADS, respectively. Cell viability experiments revealed that Met-ADS did not alter cell metabolic activity. No differences in Met absorption and intestinal epithelial integrity were observed ex vivo between free Met and Met-ADS. This study provided new insights into the release of Met from natural clays such as ADS and BIO, the safety of its use in the porcine intestine, and the ability of ADS-released Met to be absorbed to the same extent as free Met in porcine jejunum. Results were obtained from a minimum of a technical duplicate of four independent experiments.
Introduction
In recent years, the application of low protein diets has been recognised as a tool in pig production to reduce feeding costs, nitrogen excretion and effectively improve gut health in an era of increasing antibiotic use limitations [1][2][3]. The swine industry has witnessed the great advantage of reducing dietary crude protein with free amino acids supplementation for sustainable production, saving protein ingredients, reducing nitrogen excretion, feed costs and the risk of gut disorders without impairing growth performance, compared with traditional diets [4].
Appl. Sci. 2021, 11, 6384

Among essential amino acids, methionine (Met) has key importance in protein synthesis, methyl donation, and in antioxidant and immune defense processes in swine [5]. Free Met delivery may recover growth performance in pigs receiving a Met-deficient diet [6,7] due to nitrogen retention and muscle protein accretion by increasing protein biosynthesis [8] and decreasing protein degradation [9].
However, thermal processing has negative effects on sulfur amino acids such as Met and cystine. Therefore, supplemental dietary AAs need to be provided in a form preventing their degradation during technological treatments to guarantee high bioavailability in the organism.
Although the protection of amino acids finds its natural application in dairy cow feeding, where protection from ruminal degradation is essential [10,11], delivery systems may also offer an efficient solution for reducing peptide-food matrix reactions and protecting nutrients during food processing.
An important prerequisite of a good delivery system, as applied to the encapsulation of food bioactive, is that it must not modify the physicochemical and organoleptic properties of the product [12] nor have reactivity with the nutritional compound. A broad spectrum of colloidal systems has been proposed as carriers for oral peptides delivery, such as liposomes, microemulsions, emulsions, biopolymer microgels, and solid lipid nanoparticles [13,14].
Our group developed a protocol for the protection of Met via organoclay intercalation (natural carriers) as more sustainable alternatives to polymeric coating [15,16]. In particular, two different bentonite-zeolite-based mineral clays were tested, Adsorbene (ADS) and BioKi (BIO). In the present study, the release of Met from ADS (Met-ADS) and BIO (Met-BIO) was tested. Furthermore, given the higher Met release from Met-ADS, its effect on in vitro culture of porcine intestinal epithelial cells (IPEC-J2) and Met absorption capacity in jejunum using an ex vivo porcine permeability model were tested.
Materials and Methods
The samples considered for this study were prepared according to [15,16], a procedure that allows for the intercalation of neutral organic molecules without ion exchange [11,17].
Samples (Met-ADS and Met-BIO) containing 0.25 g of Met per gram of carrier were chosen for the experimental trials.
Sample Preparation and Release Tests
Release tests were performed in water solutions at different pH levels to evaluate the interaction strength between the carrier and Met. The release test procedure is illustrated in Figure 1 and described in detail below. In a typical experiment, 1 g of the carrier (ADS or BIO) was contacted with 50 mL of an aqueous solution at different pH levels, namely 3.0, 5.8, 8.9, by stirring at 600 rpm for 90 min at room temperature. Solid-liquid separation was performed by centrifugation at 3000 rpm for 30 min. The liquid phase, where released Met is expected to be found, was analysed by Chemical Oxygen Demand (COD) [15,16].
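The mass balance behind these release tests can be sketched as follows; the COD reading and the COD-to-methionine conversion factor below are hypothetical illustration values, not data from the paper (in practice the factor would be calibrated against Met standards).

```python
# Illustrative mass balance for the release test (hypothetical values, not from
# the paper). Assumes a COD-to-methionine conversion factor from Met standards.

def met_released_percent(cod_mg_per_l, volume_l, cod_per_mg_met, met_loading_g):
    """Percentage of the loaded Met recovered in the liquid phase."""
    met_mg = cod_mg_per_l * volume_l / cod_per_mg_met  # mg Met in the liquid phase
    return 100.0 * (met_mg / 1000.0) / met_loading_g

# 1 g of carrier loaded with 0.25 g Met, contacted with 50 mL of solution:
pct = met_released_percent(cod_mg_per_l=420.0, volume_l=0.050,
                           cod_per_mg_met=1.4, met_loading_g=0.25)  # ~6 %
```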
Figure 1. Schematic of the release test procedure: contacting (VH2O = 50 mL, msol = 1 g, 90 min, T = 25 °C, stirring at 600 rpm, pH 3.0, 5.8, or 8.9); separation by centrifugation (3000 rpm, 30 min) or high-pressure filtration (6 bar, 60 min); analysis of the liquid phase by COD and of the solid phase by FT-IR.
COD analyses were performed using a Spectrodirect Lovibond Instrument (Tintometer GmbH, www.lovibond.com, accessed on 9 July 2021, Germany) and operated according to standard procedures [ASTM D1252-06]. Aliquots of the liquid phase were stored at −20 °C for the cell viability and Ussing chamber experiments described below.
The residual solid phase was analysed by Fourier Transform Infra-Red (FT-IR) spectroscopy.
FT-IR spectra were recorded in the 4000-400 cm −1 range by a FT-IR Thermo Nicolet Nexus spectrometer (Thermo Electron Corporation, Madison, WI, USA) equipped with a DTGS KBr detector. FT-IR skeletal analyses of inorganic matrix, and other related modified materials were conducted on KBr pressed disks and prepared by carefully mixing the solid sample with KBr powder in an agate mortar (about 1% w/w concentration of powders). Measurements were repeated twice with 100 scans each, background air, and in the mid-IR range, i.e., 4000-400 cm −1 . FT-IR spectra analysis was performed considering the pH conditions with the lowest recorded release rate.
Effects of MET-ADS on IPEC-J2 Cell Viability
IPEC-J2 cells are intestinal porcine enterocytes isolated from the jejunum of a neonatal unsuckled piglet (ACC 701, DSMZ, Braunschweig, Germany). The IPEC-J2 cell line is unique as it is derived from the small intestine and is neither transformed nor tumorigenic in nature [18]. The IPEC-J2 cells were cultured in DMEM/F-12 mix (Dulbecco's Modified Eagle Medium, Ham's F-12 mixture) supplemented with HEPES, fetal bovine serum (FBS) and penicillin/streptomycin, and cultivated in a humid chamber at 37 °C with 5% CO2. All experiments were performed using IPEC-J2 cells within six cell passages (passages 16 to 22) to ensure reproducibility.
IPEC-J2 cells were seeded at a density of 1.5-2 × 10^5 cells/mL in 96-well plates and cultured for 24 h. In addition, Met-ADS (liquid phase obtained during the release tests at pH 3, see above) was tested on IPEC-J2 cell viability. Cell viability was determined after three hours of treatment by a colorimetric proliferation assay (MTT test) in accordance with the manufacturer's instructions (Sigma Aldrich, St. Louis, Missouri, USA).
Animals and Tissue Collection
The Swiss cantonal Committee for Animal Care and Use approved all procedures involving animals (authorization number: 27428). In total, tissues from n = 4 entire male (EM) pigs of the Swiss Large White breed were used for the ex vivo trial. Only EMs were used to avoid variability related to sex. Pigs were fed a conventional finisher diet and had ad libitum access to drinking water. They were slaughtered at 171 ± 2.8 d of age at the research station abattoir after being fasted for approximately 15 h. Intestinal segments were removed within 15 min after exsanguination. Intestinal contents were removed with a cold saline solution (4 °C). Tissues were stored in serosal buffer solution (see below) and used for Ussing chamber experiments, which started within 30 min. The jejunum tissues used in the present study were collected from an abattoir for animals entering the food chain; therefore, no additional ethical issues arose. The treatments were performed on tissues mounted in the perfusion chambers and not directly on the animals.
Ussing Chamber Experiments
For each pig, n = 6 jejunal samples starting from the third meter distal to the pylorus were stripped of outer muscle layers and immediately mounted between the two halves of an Ussing chamber (Physiologic Instruments, San Diego, CA, USA). Each segment was bathed on its mucosal and serosal surfaces (exposed area 1.0 cm2) with the same corresponding KRB buffer. Specifically, each chamber was filled with 4 mL KRB buffer (115 mM NaCl, 2.4 mM K2HPO4, 0.4 mM KH2PO4, 1.2 mM CaCl2, 1.2 mM MgCl2, 25 mM NaHCO3). The serosal buffer (pH 7.4) also contained 10 mM glucose as an energy source, osmotically balanced by 10 mM mannitol in the mucosal buffer (pH 7.4). Indomethacin was added to both mucosal and serosal buffers (0.01 mM). Buffers were continuously gassed with a 95% O2 and 5% CO2 mixture. The temperature was maintained at 37 °C by a circulating water bath.
After a stabilization period of 30 min, three different treatments were randomly assigned to each chamber in duplicate: KRB (as control), 2.5 mM pure Met, and 2.5 mM Met-ADS obtained after applying the release test protocol at pH 3. The effect of the KRB buffer at pH 3 used for the Met release from the carrier was tested in a separate experiment by adding the equivalent volume of the KRB buffer (pH 3) without Met-ADS as a control. Each addition was kept in an equilibrated osmotic condition by the addition of equimolar mannitol on the serosal side. Forskolin (10 µM) was added to the serosal compartment at the end of the experiment to test tissue viability. During the experiments, aliquots (500 µL) from both apical and basolateral sides were collected immediately before (T0) and 3 h after (T1) adding Met. Aliquots were immediately frozen (−20 °C) until further analysis. Transepithelial electrical resistance (TEER) and the short-circuit current (Isc) were continuously monitored using a computer-controlled device. Tissues were voltage-clamped at 0 mV by an external current after correction for solution resistance.
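The treatment-induced change in short-circuit current (ΔIsc) reported later is the difference between post-addition and baseline Isc; a minimal sketch with a hypothetical trace (the exposed area of 1.0 cm² matches the chamber setup above):

```python
# Sketch of deriving a treatment-induced ΔIsc from an Isc trace
# (hypothetical values; exposed tissue area of 1.0 cm² as in the setup above).

def delta_isc(trace_uA, baseline, response, area_cm2=1.0):
    """ΔIsc (µA/cm²): mean Isc over the response window minus the baseline mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(trace_uA[response[0]:response[1]])
            - mean(trace_uA[baseline[0]:baseline[1]])) / area_cm2

trace = [10.0, 10.2, 9.8, 10.0, 18.0, 18.4, 17.6, 18.0]  # µA, e.g. around an addition
d = delta_isc(trace, baseline=(0, 4), response=(4, 8))
```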
Quantification of Methionine Release in Luminal and Basolateral Samples
Transepithelial absorption of Met across pig small intestinal mucosa was assessed in apical (mucosal) and basolateral (serosal) aliquots obtained during the Ussing chamber experiments (described above) at the beginning of the experiment (0 h) and after the incubation span of 3 h. Samples were thawed rapidly at 20 °C. Quantification of Met was performed by Q-Exactive Orbitrap high-resolution mass spectrometry (HPIEC-HRMS-Orbitrap; Thermo Scientific, San Jose, CA, USA) according to the method described by Panseri et al. [19]. Briefly, after membrane filtration (0.45 µm, PVDF), 3,4,5-trimethoxycinnamic acid (100 mg/L in methanol) was added to an aliquot of 50 µM, and the samples were injected. The temperatures of the capillary and vaporizer were set at 330 °C and 280 °C, respectively. The electrospray voltage, operating in negative mode, was adjusted to 3.50 kV.
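Internal-standard quantification of this kind typically converts the analyte/internal-standard peak-area ratio into a concentration via a linear calibration; a hedged sketch in which the calibration slope and peak areas are hypothetical (the paper spikes 3,4,5-trimethoxycinnamic acid as the internal standard):

```python
# Hedged sketch of internal-standard (IS) quantification; the calibration slope
# and peak areas below are hypothetical, not values from the paper.

def met_conc_mM(area_met, area_is, slope, intercept=0.0):
    """Concentration from the Met/IS peak-area ratio via a linear calibration."""
    return (area_met / area_is - intercept) / slope

c = met_conc_mM(area_met=5.0e5, area_is=1.0e6, slope=0.2)  # ~2.5 mM
```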
Statistical Analysis
Data were analysed using IBM SPSS Statistics version 24 (SPSS, Chicago, IL, USA) and are presented as mean ± standard error. Results were obtained from the minimum of a technical duplicate of four independent experiments. The data were tested for normality using the Shapiro-Wilk test. Results were tested with one-way analysis of variance (ANOVA) as the data were normally distributed. Pairwise comparisons were evaluated using Tukey's HSD test. Differences between control and treatment groups were considered significant when p < 0.05.
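The one-way ANOVA step can be sketched as follows; the study ran the tests in SPSS, and the group values here are illustrative (e.g. MTT viability as % of control).

```python
# Sketch of the one-way ANOVA F statistic used in the analysis above
# (illustrative data: four independent experiments per group).

def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    mean = lambda g: sum(g) / len(g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f = one_way_anova_f([100, 98, 101, 99], [97, 99, 100, 98], [96, 98, 99, 97])
```

In practice the p-value and Tukey's HSD pairwise comparisons would follow, e.g. via `scipy.stats.f_oneway` and `scipy.stats.tukey_hsd`.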
Release Tests
An exceptionally low amount of Met was found in the liquid phase obtained during the release test. The Met-BIO interaction is apparently stronger than that of Met-ADS. The maximum release of Met was 6% and 3% for Met-ADS and Met-BIO, respectively, at pH 3. Little pH effect can also be inferred considering that, for both solids, the minimum Met release was observed at near-neutral pH, i.e., 5.8 (Table 1).
Table 1. Released methionine at different pH from two different carriers (Adsorbene and BioKi). Tests were performed on 2 g of Met-carrier, which corresponds to 0.5 g of total initial methionine in the analysed sample.
Solids after release at pH 5.8 were also analysed by FT-IR spectroscopy (Figure 2). In the IR spectrum of both samples before the release test, several weak IR bands characteristic of the organic moiety near 2920-2950 cm −1 (CH stretching modes) and in the region 1620-1300 cm −1 (CO and CN stretching modes, CH deformation modes) are detectable. A detailed spectroscopic investigation of the interaction between methionine and the clay matrix has been recently reported by Cristiani et al. (2021, accepted on 14 June). After the release tests, these bands are barely detected in the high-frequency region of the spectrum, namely in the spectrum of the Met-BIO sample (Figure 2B), suggesting the presence of residual Met strongly bound at the surface of the carrier.
Figure 2. Spectral regions of infrared spectra obtained by Fourier-transform infrared spectroscopy (FTIR) of released methionine at different pH conditions; (A) Met-ADS and (B) Met-BIO before and after release at pH 5.8 (dashed areas: bands of interest). Dashed rectangles: weak IR bands characteristic of the organic moiety near 2920-2950 cm −1 (CH stretching modes) and in the region 1620-1300 cm −1 (CO and CN stretching modes, CH deformation modes).
Effects of Met-ADS on Intestinal IPEC-J2 Cell Viability
Cell viability experiments revealed that treatment of IPEC-J2 cells with 2.5 mM Met-ADS, after pre-treatment with KRB, did not significantly alter cell metabolic activity (MTT test) after 3 h of contact (Figure 3).
Transepithelial Absorption-Ussing Chamber Experiments
The KRB buffer at pH 3 had no effect on tissue viability compared to the standard KRB buffer used to carry out the experiments (data not shown).
No differences between Met-ADS and free Met-induced ΔIsc were observed. The high forskolin-induced ΔIsc confirmed the viability of the tissues at the end of the experiment. No detrimental effects on the trans-epithelial resistance were observed in the Met-ADS compared with the controls after 3 h of incubation. Data are reported in Table 2.
Quantification of Methionine in Luminal and Basolateral Samples
The transport of Met occurred across the pig small intestinal mucosa. Results are reported in Table 3. In some cases, Met was not detectable on either the luminal or serosal sides. Table 3. Methionine percentage investigated in the Ussing chamber with porcine mucosa. Results are presented as a percentage of the luminally applied dose recovered (%) at the beginning of the experiment (0 h) and after 3 h of incubation.
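The "% of the luminally applied dose recovered" reported in Table 3 can be computed as below; the serosal concentration used here is hypothetical (2.5 mM applied in a 4 mL chamber, as described in the methods), and the chamber volumes cancel because both half-chambers hold 4 mL.

```python
# Sketch of the "% of the luminally applied dose recovered" calculation behind
# Table 3 (hypothetical serosal concentration; 2.5 mM applied in 4 mL chambers).

def percent_recovered(serosal_mM, applied_mM=2.5, volume_ml=4.0):
    """Serosal Met as a percentage of the dose applied on the mucosal side."""
    return 100.0 * (serosal_mM * volume_ml) / (applied_mM * volume_ml)

pct = percent_recovered(serosal_mM=0.05)  # ~2 % of the applied dose
```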
Discussion
This study aims to demonstrate the suitability of Met-ADS and Met-BIO as natural AA carriers for animal nutrition applications. Given that supplemental dietary AAs need to be provided in a form that prevents their degradation along the gastrointestinal system to guarantee their high bioavailability, bioactivity and absorption, two natural clays are proposed in this study as AA carriers for feed application. The target of this study is to evaluate possible advantages of using clay minerals as carriers of nutrients in feed applications. Accordingly, AA release is a fundamental step that should occur in the intestine of the animal, avoiding competition for AAs with micro-organisms in the stomach, where absorption by the host is still limited.
The release of Met could result from an external stimulus, such as a change in environmental conditions during the transit of Met through the digestive system. Such a stimulus could be a change in pH, which is considered the most important physico-chemical parameter varying along the different parts of the gastrointestinal tract. The ability of pH to affect the release rate of methionine can be exerted on both methionine and the carrier: in the case of methionine, by charging the molecule; in the case of the clay, by modifying the interlayer charge. Both these effects may favour methionine release, possibly by charge repulsion.
Carriers and Release of Methionine
The synthesis procedure applied to prepare the Met-ADS and Met-BIO samples has proven effective in adsorbing the organic molecule onto the carriers. However, considering the different phase composition and structure of the clays, different kinds of interaction are present in ADS and BIO. In the case of BIO, a pure zeolitic material, Met is trapped inside the zeolitic cages; while in the case of ADS, a mixture of montmorillonite and zeolite-type materials, Met interacts with the carrier via different mechanisms. Indeed, in ADS, the expandable montmorillonitic component can capture Met via both interlayer intercalation, without ion exchange, and surface adsorption, while the zeolitic component behaves as BIO, i.e., via a trapping mechanism (Cristiani et al. 2021, accepted on 14 June).
From the results in Table 1, it is evident that Met is strongly bonded to the carrier. This behavior can be explained by considering the nature of the carrier. ADS is a complex mixture of different mineral clays and zeolites, such as Chabazite (PDF card 01-085-1046), Phillipsite (PDF card 01-072-4634), Bentonite (PDF card 00-003-0015), and Illite (PDF card 01-075-0948). BIO is a fully zeolite-based material (i.e., Chabazite (PDF card 01-085-1046), Phillipsite (PDF card 01-072-4634), Illite (PDF card 01-075-0948), Hydrosodalite (PDF card 01-073-5304), and Albite (PDF card 19-1184)), as found by XRD analysis of the powders. A strong Met trapping by the carrier is present; this is essentially due to the zeolite structural cages, which are able to trap and strongly interact with organic molecules in general. Indeed, the same amount of Met was released in near-neutral or harsher conditions (i.e., strongly acid or basic conditions), where a larger release was expected. Therefore, it can be concluded that only Met weakly bound at the surface of the carrier is released, while water and high or low pH exert no effect on trapped Met.
Met-ADS and IPEC-J2 Cell Line Viability
Due to the more efficient results obtained by the release tests performed on Met associated with ADS, compared with Bio, only Met-ADS was chosen for the in vitro and ex vivo trials.
Prior studies demonstrated morphological and functional similarities between the IPEC-J2 cell line and intestinal epithelial cells [20]. IPEC-J2 cells mimic the intestinal physiology more closely than any other cell line of tumor origin. Therefore, they are ideal tools to study epithelial transport and the effect of nutrients on a variety of parameters reflecting epithelial functionality [18,21,22].
In this study, IPEC-J2 cells were used as an in vitro model to study the effects of Met-ADS in order to exclude any cytotoxicity issue at the intestinal epithelium level before ex vivo trials. Results from IPEC-J2 experiments did not reveal statistically significant toxicity in IPEC-J2 cells treated with Met-ADS, suggesting that Adsorbene did not cause detrimental effects on the intestinal epithelial tissue after 3 h of contact [23]. Once the safety of the Met-ADS products on intestinal epithelial cells was assumed, the absorption of the Met released by the ADS was tested in porcine explants ex vivo.
Methionine Absorption in Pig Jejunum
In the present study, we tested the absorption of Met released from the Met-ADS complex in porcine jejunum ex vivo. Exogenous Met is commonly added to diets that do not contain a sufficient amount of Met to meet the pig's requirement. To be transported from the intestinal lumen into the blood, Met first enters the intestinal absorptive epithelial cells (a.k.a. enterocytes) through the corresponding transporters present in the brush border membrane at the apical side and is then exported from the epithelial cells to the bloodstream through the transporters present in the basolateral membrane [24]. Met is a neutral AA, and its uptake across the apical membrane of intestinal epithelial cells is almost entirely Na+-dependent [25,26]. The major apical neutral AA transporter in the intestine is the B(0)AT1 (SLC6A19) system, which cotransports one Na+ per AA [27]. In pigs, Met is absorbed along the entire small intestinal tract.
The Ussing system offers ex vivo electrophysiological measurements such as TEER (as a measurement of gut integrity) and the short-circuit current (Isc). Isc is a summation of all ionic currents across the epithelium, and it represents an estimate of actively transported ions across the epithelial membrane [17]. However, once absorbed, Met can be metabolised by the epithelial cells or released on the serosal side of the epithelium. In order to estimate the quantity of Met released on the serosal side, Ussing-derived data must be integrated with other techniques such as mass spectrometry.
In this study, we used jejunum segments because previous studies in pigs demonstrated that the rate of Met absorption in this region is comparable with or exceeds those of other regions [28]. During the 3 h exposure time of the jejunum segments, a small percentage of both free Met and Met-ADS was absorbed. The low percentage of both Met and Met-ADS absorbed during the trial could be explained by the saturable absorption of Met. The concentrations tested in our study were chosen to ensure the saturation of the intestinal carriers. The maximum rates of saturable absorption of Met have been estimated in the range of nmol min −1 mg tissue −1 for the middle intestine [28]. However, the lack of a statistically significant difference between the Met- and Met-ADS-induced ∆Isc demonstrated that ADS did not affect the potential of the Met to be absorbed by the intestinal active transporters once released by the carrier. In agreement with the in vitro cytotoxicity test, no detrimental effects on the intestinal epithelial integrity were observed. Specifically, no differences in the TEER between CTR, Met and Met-ADS were found. In all the groups, the TEER remained nearly constant during the whole experiment, demonstrating that the integrity lasted for the entire trial. The forskolin-induced ∆Isc values also confirmed the viability of the tissues at the end of the experiment, confirming the absence of detrimental effects of Met-ADS on jejunum integrity.
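Saturable, carrier-mediated uptake of this kind is commonly described by Michaelis-Menten kinetics; a sketch with hypothetical parameters (vmax and Km are not estimates from the paper), illustrating why a dose well above Km yields only small proportional increases in absorbed fraction:

```python
# Michaelis-Menten sketch of saturable transporter-mediated uptake
# (hypothetical vmax and Km, not parameters estimated in the study).

def uptake_rate(conc_mM, vmax=1.0, km=0.5):
    """Uptake rate in units of vmax; approaches vmax as the transporter saturates."""
    return vmax * conc_mM / (km + conc_mM)

half = uptake_rate(0.5)   # at a concentration equal to Km, the rate is vmax / 2
dose = uptake_rate(2.5)   # a dose well above Km sits close to saturation
```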
Conclusions
This study offers new insights into the release of Met from ADS and BIO, the safety of its use in the porcine intestine and the ability of ADS-released Met to be absorbed to the same extent as the free Met in porcine jejunum. Although the application of delivery systems may represent an efficient solution for reducing peptide-food matrix reactions and improving amino acid bioavailability in swine feed, further studies need to focus on optimising delivery systems to maximise amino acid release at relevant gastrointestinal conditions.

Author Contributions: Conceptualization, C.G., M.T., C.C. and L.R.; methodology, C.G., M.T., C.C. and E.F.; formal analysis, C.G., M.T., C.C., E.F. and S.P.; investigation, C.G., M.T., C.C., E.F., P.S. and S.P.; resources, C.C. and L.R.; data curation, C.G., M.T. and C.C.; writing-original draft preparation, C.G., M.T., C.C. and E.F.; writing-review and editing, C.G., M.T., C.C., E.F., P.S., S.P. and M.D.; visualization, C.G., C.C. and E.F.; supervision, A.B.; project administration, L.R.; funding acquisition, L.R. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Lombardy Region and European Regional Development Fund (ERDF) under grant: FoodTech Project (ID: 203370).
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki.
Data Availability Statement:
The data presented in this study are available within the article.
Technology-Facilitated Sexual Violence and Abuse in Low and Middle-Income Countries: A Scoping Review
Technology-facilitated sexual violence and abuse (TFSVA) is a pervasive phenomenon and a global problem. TFSVA refers to any form of sexual violence, exploitation, or harassment enacted through the misuse of digital technologies. This includes, but is not limited to, image-based sexual abuse, online sexual exploitation and harassment, sextortion, and the non-consensual sharing of sexual images. It has significant and long-lasting psychological, social, financial, and health impacts. TFSVA is on the rise, particularly in low and middle-income countries (LMICs), where there has been an explosion in digital technology overall. This scoping review aimed to identify studies on TFSVA in LMICs to examine its types, impacts, victim-survivor coping strategies, and help-seeking. To identify peer-reviewed literature, six databases were searched: Applied Social Sciences Index & Abstracts, ProQuest, PubMed, Scopus, Star Plus-University of Sheffield library search, and Web of Science. The review included empirical studies published in English between 1996 and 2022, focusing on TFSVA among adults (aged 18+) in LMICs. A total of 14 peer-reviewed studies were included, highlighting that scant empirical research is available on TFSVA in LMICs. This review found several types of TFSVA and their wide-ranging impacts; traditional patriarchal societal norms and values largely shape TFSVA for women in LMICs. It also found more social impacts linked to sociocultural factors. Survivors adopted various coping mechanisms and help-seeking behaviors primarily through informal family support. Studies highlighted the need for effective legislation; pro-victim-survivor policing; strong family support; increasing victim-survivors’ knowledge about reporting; and more research.
Introduction
Technology-facilitated sexual violence and abuse (TFSVA) is a pervasive phenomenon and a global public health and social concern, found not only in the developed world but also in the Global South (Bailey et al., 2021; Zagloul et al., 2022). The rapid proliferation of technology, including the internet and mobile devices, and increased dependence on technology during and following the coronavirus disease 2019 (COVID-19) global pandemic (Huiskes et al., 2022; Jatmiko et al., 2020) have led to increased opportunities for and prevalence of online sexual violence, abuse, and harassment worldwide (Bailey et al., 2021; Makinde et al., 2021; Powell et al., 2020; Salerno-Ferraro et al., 2022; Snaychuk & O'Neill, 2020). This is particularly significant in low and middle-income countries (LMICs), where access to digital technologies is growing faster than in other parts of the world (ITU, 2019; Pew Research Centre, 2019). Indeed, internet use in the least developed countries is growing at an annual rate of 22 per cent, much faster than the global growth rate of 7.2 per cent (ITU, 2023). This rapid expansion of the internet and digital technologies offers more opportunities to use these as a tool or platform for perpetrating sexual violence, abuse, and harassment in countries where gender inequality and sexual violence against women are already high due to long-standing patriarchal practices, inadequate social support, and a lack of economic opportunities (Carrington et al., 2015; DeKeseredy & Hall-Sanchez, 2018; Jakobsen, 2018). For this review, the World Bank's (2023) definition of LMICs has been adopted, meaning countries with a Gross National Income (GNI) per capita of US$13,205 or less are defined as LMICs. The World Bank further divides LMICs into two middle-income categories: lower-middle-income countries, with a GNI per capita between US$1,086 and US$4,255, and upper-middle-income countries, with a GNI per capita between US$4,256 and US$13,205. There are 136 countries in these categories.
Digital technologies refer to devices such as personal computers, tablets, mobile telephones, and cameras; systems such as software, apps, and virtual reality; and less tangible forms of technology such as the internet and social media. The misuse of digital technologies takes diverse forms, including through social media, artificial intelligence (AI) (Flynn, 2019; Henry et al., 2018), GPS tracking (Wong, 2019), mobile phones, and smart home technology (Buil-Gil et al., 2023). There are numerous examples of TFSVA, including: sending unwanted sexually explicit photos, emails, or texts; harassment through repeated requests for dating or physical relationships; taking, sending, sharing, or threatening to share intimate images or videos; monitoring and tracking an individual's location, social media, and internet activities; restricting access to or ownership of digital devices through coercive control; and threatening to disclose, or non-consensually disclosing, personal information (known as doxing, or doxxing).
Many of these behaviors are grouped together and commonly referred to as TFSVA; this is the umbrella term we adopt in this paper. However, there is no agreed terminology or definition among scholars who study TFSVA; instead, a diverse range of concepts and definitions is found across the literature (Backe et al., 2018; Henry et al., 2020; Henry & Powell, 2018; Patel & Roesch, 2022; Powell et al., 2020). This lack of uniformity makes it harder to assess the prevalence and effects of these behaviors (Snaychuk & O'Neill, 2020). It also complicates the development of appropriate legislation and interventions to prevent or combat their harm (Patel & Roesch, 2022).
Understanding and investigating TFSVA has become increasingly important because of its rapid expansion. According to a study conducted by the Economist Intelligence Unit (2021), the overall global prevalence of online violence against women is 85%. This study also found regional differences in the prevalence of online violence against women: 90% in Africa; 88% in Asia Pacific; 98% in the Middle East; 74% in Europe; 76% in North America; and 91% in Latin America and the Caribbean. Regional and country-based studies also reveal the prevalent nature of this problem. For example, UN Women's (2021) multi-country study on Arab states (22 countries) showed that 60% of women experienced online violence in the past year, and 16% of women experienced online violence at least once in their lifetime. Another multi-country study, based on five countries in the Asia region and conducted by UN Women (2020), illustrates that online violence against women is widely prevalent with different manifestations. A study conducted by Internet Without Borders (2019) across seventeen Central and West African countries found that 45.5% of adult women (18-45 years) experienced online gender-based violence (GBV) while using social media. A meta-analysis and systematic review on the prevalence of TFSV (based on a narrow definition of TFSV as the creation, distribution, and threats to share intimate images or videos) conducted by Patel and Roesch (2022) revealed a pooled prevalence of TFSV victimization: 8.8% of people (both male and female) reported having had their image- or video-based sexts distributed without permission, 7.2% had been threatened with the sharing of their sexts, and 17.6% had had their image taken without consent.
The prevalence of TFSVA is also reflected in the growth of scholarship on the topic. However, most global research and evidence synthesis includes studies only in developed or high-income countries (see, e.g., Afrouz, 2021; Rogers et al., 2022; Filice et al., 2022; Patel & Roesch, 2022). Even as the digital landscape expands rapidly in LMICs, in some cases faster than in high-income countries, research lags behind the Western world. A recently published meta-analysis and systematic review on sexual violence in LMICs indicated that women experience a higher prevalence than men (Ranganathan et al., 2021). This review highlighted the lack of a robust evidence base, as prevalence estimates ranged widely from 14.5% to 98.8%, and it noted that definitional ambiguity makes prevalence difficult to measure across studies (Ranganathan et al., 2021). Research has also revealed that the consequences of sexual violence are serious and complex in LMICs (Hardt et al., 2022). Importantly, Hardt et al.'s (2022) study found that impacts vary across LMIC settings and are influenced by contextual and socio-cultural factors. Clearly, a more sophisticated and nuanced understanding of the types, impacts, and victim-survivors' responses to TFSVA in LMICs is needed.
A recently published scoping review on technology-facilitated gender-based violence (GBV) in LMICs across the Asia Pacific region found various manifestations of technology-facilitated GBV, while noting that an accurate picture of the prevalence of this victimization remains unclear due to widespread under-reporting (Bansal et al., 2023). Our scoping review builds upon that work: Bansal et al. (2023) only included low- and middle-income countries in the Asia Pacific region, whereas we have included all LMICs from any continent. In addition, Bansal et al. (2023) only summarized the forms, tactics, and prevalence of technology-facilitated GBV. In contrast, this review examines the types of TFSVA, its impacts, and victim-survivors' help-seeking responses, alongside a consideration of the impact of different cultural contexts in LMICs.
Aims and Research Questions
The primary aim of the review is to identify and synthesize studies regarding TFSVA perpetrated against women aged 18+, as they are the most frequent victims (Powell & Henry, 2019; Henry et al., 2020; EU/FRA, 2014), in LMICs. The scoping review questions are:

1. What is known about victimization and perpetration of TFSVA against adult women in LMICs?
2. What are the impacts of TFSVA on adult women victim-survivors in LMICs?
3. What are the help-seeking practices of adult women victim-survivors of TFSVA in LMICs?
Method
This study employed a scoping review methodology, an increasingly used and transparent approach that is valuable for addressing broad research questions and identifying evidence gaps (Arksey & O'Malley, 2005). We adopted the Preferred Reporting Items for Systematic Reviews extension for Scoping Reviews (PRISMA-ScR) guidelines for this review (Tricco et al., 2018). The review protocol was registered at the Open Science Framework on 06/05/2022 [https://osf.io/58x43/].
Search Strategy and Selection Criteria
In November 2022, the search took place in the following databases: Scopus, Web of Science, ProQuest, PubMed, Applied Social Sciences Index & Abstracts, and the University of Sheffield library (Star Plus). Two search strings were used to capture the wide range of TFSVA: string one ("technology-facilitated sexual violence" OR "technology-facilitated abuse" OR "technology-facilitated intimate partner violence" OR "technology-facilitated domestic violence" OR "technology-facilitated domestic abuse" OR "technology-facilitated gender-based violence" OR "technology-facilitated coercive control") and string two ("online gender-based violence" OR "online harassment" OR "digital harassment" OR "cyber violence" OR "image-based abuse" OR "non-consensual intimate image sharing" OR "ICT-based harassment"). These were combined with a third string comprising the names of all LMICs as defined by the World Bank's (2023) classification. This method has been used in evidence reviews with a contextual focus on LMICs (see, e.g., Hardt et al., 2022).
To capture additional research on this topic, Google Scholar was searched using the terms "technology-facilitated violence and abuse" OR "cyber harassment and abuse" OR "online gender-based violence." The first 500 items returned were checked for additional relevant articles not found by the main searches. Some websites were searched manually to capture further studies, such as those of UN Women, the Cybercrime Foundation, the Worldwide Foundation, ANROWS (Australia's National Research Organisation for Women's Safety), and GenderIT. We only included studies available in written English, published between January 1996 and November 2022, and based in LMICs. Any empirical studies that included respondents below 18 years old, or that did not clearly indicate the respondents' minimum age, were not considered for the review. Details of inclusion and exclusion criteria are shown in Table 1.
Study Selection and Data Extraction
Our initial search identified 1,617 studies across the six databases and other sources, all of which were imported into EndNote X9. After removing duplicates, 1,040 papers remained for screening. All were screened, with 946 removed on the basis of title, abstract, and context (LMICs or non-LMICs). The reference lists of several review articles were searched manually, and we completed backward and forward citation chaining as a means of finding related documents. The 94 remaining studies (see Figure 1) were then subjected to full-text reading. Of these, 81 studies were excluded: 33 were not primary research; 24 did not contain relevant information; 12 were excluded based on the age group of the population (they included those below 18 years); six contained only hacking-related content; and six were rejected based on context (not based in LMICs). At this stage, one further study that met the inclusion criteria was published and added. A total of 14 studies were included in the review. Study details were collated in an extraction table (see Table 2 for a summary of studies) using Microsoft Excel, with the following categories: author(s); year of publication; country of origin; aims/purpose; population and sample size within the source of evidence (if applicable); methodology/methods; and key findings relating to the scoping review questions. The findings of the included studies were entered into NVivo (qualitative data analysis software produced by Lumivero, version 12) for analysis and synthesis. Initially, two team members independently screened each article based on its title and abstract; a third team member (not an author, but whose support is acknowledged in the acknowledgements section) who did not participate in this initial screening was available to resolve all screening conflicts. The full texts of peer-reviewed articles that met inclusion criteria during the title-abstract screening stage were obtained for review, and any conflicts that arose during the full-text screening stage were resolved collectively by the two researchers. If a study included both men and women and the team was able to extract data on women only, we included the study.
Description of Included Studies
Of the 14 included studies, six were from South Asia, two from Southeast Asia, three from Middle Eastern countries and North-West Africa, one from South America, one from Sub-Saharan Africa, and one from Turkey. Eight studies adopted qualitative methods, three quantitative methods, and three mixed methods.
Definitional Ambiguity Relating to TFSVA in LMICS
While all included studies acknowledged that acts of TFSVA are pervasive and harmful, replicating the wider global literature, there was a lack of agreed definition and clarity regarding specific behaviors, context, and nature. We found ten different terms or concepts used in the selected peer-reviewed studies, including: online harassment (OH) (n = 4); online sexual harassment (n = 2); online GBV (n = 1); digital or online abuse (n = 1); cyber violence (CV) (n = 2); cyber aggression (n = 1); technology-facilitated sexual violence (n = 1); technology-facilitated violence and abuse (n = 1); non-consensual sharing of intimate images (n = 1); and electronic dating violence (EDV) or digital dating violence (n = 1). Only six studies gave a clear definition of the term they used. Most studies investigated TFSVA in its broadest sense, and only two studies focused on specific behaviors: EDV (Alsawalqa, 2021) and the non-consensual sharing of intimate images (França & Quevedo, 2020).
Definitions of TFSVA ranged from broad statements grouping harmful behaviors as a form of violence (e.g., cyber abuse) to explicit behaviors (e.g., sextortion). Some studies used different terms to indicate the same behavior; nearly half of the studies used online/cyber/digital abuse/violence, or referred to TFSVA (Demir & Ayhan, 2022; Hassan et al., 2020; Jatmiko et al., 2020; Koirala, 2020; Sultana et al., 2021; Tandoc et al., 2021). One study used the term non-consensual sharing of intimate images as an alternative to "revenge porn" to refer to any non-consensual sharing of intimate images by strangers or intimate partners (França & Quevedo, 2020). Sarkar and Rajan (2021) found that women who voiced their opinions regarding socio-cultural structures and the objectification of women in the media experienced online sexual harassment as punishment.
Types and Patterns of TFSVA in LMICs
Image-Based Sexual Abuse. The non-consensual taking or sharing of, or threats to share, personal, intimate images or videos were reported in most of the included studies, highlighting how image-based sexual abuse (IBSA) is used to blackmail, humiliate, or perpetrate emotional abuse (França & Quevedo, 2020; Hassan et al., 2020; Jatmiko et al., 2020; Makinde et al., 2021; Sarkar & Rajan, 2021; Zagloul et al., 2022). A study conducted by Zagloul et al. (2022) of Egyptian women (n = 563) found that more than half (51.9%) of study participants experienced non-consensual pornography. Women survivors shared their experiences of facing threats or blackmail from current or former partners to share their intimate images or videos online (França & Quevedo, 2020; Sarkar & Rajan, 2021). In many cases, these intimate images or videos were taken with permission during the relationship, though often coercively as "proof of love" or "proof of trust" (França & Quevedo, 2020), and perpetrators later used them to blackmail victim-survivors into sending further photos or videos or into continuing the relationship. Intimate material was shared publicly on the internet or sometimes in a closed group. This type of harassment and abuse was also conducted by known persons, office colleagues, friends, and even family members. A study conducted by Jatmiko et al. (2020) of Indonesian women found that fictitious agencies during COVID-19 were engaged in the illegal activity of spreading intimate photos online without consent. Similarly, women in South Asian countries faced deepfake harassment, as reported in a study by Sambasivan et al. (2019). Deepfakes are synthetic media in which a person in an existing image or video (often sexual in nature) is replaced with someone else's likeness. Sambasivan et al. (2019) found that 6% of the study participants (n = 199) reported that their photos had been superimposed or manipulated in this way and shared through social media.
Gendered Hate and Sexualized Harassment. Several studies described misogynist online harassment of women, including the circulation of sexualized memes (Tandoc et al., 2021; Zagloul et al., 2022). A meme is an image, video, or piece of text, often intended to be sexually humorous, that is copied and spread rapidly by internet users, frequently with slight variations. This form of harassment can be organized or initiated by people in power (e.g., politicians and government officials) (Tandoc et al., 2021). For example, in Turkey, female sports journalists reported experiencing derogatory comments about their physical appearance on social media orchestrated by their seniors (Demir & Ayhan, 2022). Kundu and Bhuiyan's (2021) study found that around half of the comments (49%) directed toward female journalists in Bangladesh were misogynist, with 17% categorized as a religious "tag," for example, calling a woman "Satan": a figure commonly associated with evil or malevolence in many religions and belief systems, particularly Christianity, Islam, and Judaism, and seen in Islam as the embodiment of a hostile, destructive force. Sultana et al. (2021) found that 19 women (n = 91) in their study reported that religious tags or sentiments were used to harass them online.
Impersonation, Doxing, and Defamation. A common finding reported in the studies was the use of digital technologies to create malicious profiles or fake identities on social media, using or stealing victim-survivors' information or photos without permission (Hassan et al., 2020; Sambasivan et al., 2019; Tandoc et al., 2021). Abusers copied victim-survivors' identities or personal information to create false, malicious, humiliating, or sexually revealing profiles on social media, and in most cases, victim-survivors were unaware of these profiles.
Another abuse reported in the studies involved the disclosure of private information (e.g., address or phone number), or chat or telephone records, on social networking sites, a behavior called doxing (or doxxing) (Sambasivan et al., 2019). A South Asian study by Sambasivan et al. (2019) found that 14% of study participants experienced defamation through the creation of fake identities on social media, and Hassan et al.'s study (2020) found that 10% of women (n = 356) reported being exposed to defamatory information on social media, with 6% reporting that their private data were accessed and disseminated online without consent.
Physical and/or Rape Threats. Several studies included in the review found that women in LMICs face threats of physical harm and rape through digital technologies, including via phone calls, messaging apps, and social networking sites (Demir & Ayhan, 2022; Koirala, 2020; Kundu & Bhuiyan, 2021; Sambasivan et al., 2019; Sarkar & Rajan, 2021; Tandoc et al., 2021; Zagloul et al., 2022). More than 4% of female respondents (total n = 148) in the study conducted by Hassan et al. (2020) reported that they faced threats of physical or sexual violence online. These threats sometimes included threats of kidnap or killing (Sambasivan et al., 2019).
Physical threats were mostly found among women in the public eye; for example, those working as journalists or bloggers, or women representing women's rights on social media (Demir & Ayhan, 2022; Koirala, 2020; Kundu & Bhuiyan, 2021; Sarkar & Rajan, 2021; Tandoc et al., 2021). Kundu and Bhuiyan's study (2021) found that 5% of comments on women journalists' social media accounts related to threats of rape or sexual humiliation, and 1.5% related to threats of killing. A study conducted by Demir and Ayhan (2022) revealed that this type of physical threat is on the rise, particularly among female journalists working in Turkey.
Coercive Control. Another finding was the use of digital tools and technologies in coercive controlling behavior. Coercive control, typically perpetrated by abusive intimate or romantic partners, is an act or pattern of acts of assault, threats, humiliation, intimidation, or other abuse used to harm, punish, or frighten the victim (Stark, 2007). It is designed to make a person dependent by isolating them from others. Online coercive control was enacted through: online stalking; monitoring movements or activities; secretly recording intimate moments; accessing social media or online accounts without permission; threatening to leak personal information, photos, or videos; and threatening to share intimate images or videos. A qualitative study in India (Sarkar & Rajan, 2021) described how intimate partners used technology as a means of threat for coercive sex and for taking intimate photos.
Monitoring Through Digital Technologies. Some studies found that women were being monitored or their activities tracked through digital tools and technologies (Hassan et al., 2020; Makinde et al., 2021). An online survey of Egyptian women found that 1 in 10 (11.5%, total n = 148) reported that their online and offline activities were monitored or tracked, 6% said that their private data and/or photos were accessed and spread without permission, and 5% reported that their movements were tracked (Hassan et al., 2020). Makinde et al.'s (2021) study of Sub-Saharan African countries found that one in five women said that someone had gained access to their online or email accounts without permission.
Mode and Platforms of Perpetration
Most of the studies in the review revealed several platforms and means of perpetrating TFSVA, including the misuse of social networking sites, dating sites, blogs, mobile calls, SMS, personal online platforms, websites, GPS, emails, and other communication technologies. However, no study found evidence of the use of more sophisticated technology, such as AI or drone technology. Among the diverse range of social networking sites and social media, Facebook, the Messenger app, WhatsApp, Instagram, and Twitter were the most cited platforms where TFSVA took place (Demir & Ayhan, 2022; França & Quevedo, 2020; Hassan et al., 2020; Jatmiko et al., 2020; Koirala, 2020; Kundu & Bhuiyan, 2021; Makinde et al., 2021; Sultana et al., 2021; Zagloul et al., 2022). In some studies, perpetrators used porn websites to share intimate images (França & Quevedo, 2020), while most harassment was private in nature, as a large number of victims experienced it via personal messaging apps such as WhatsApp (Koirala, 2020).
Identity and Motivation of Perpetrators
Studies described a diverse range of perpetrator identities, including victim-survivors' current romantic partners, ex-partners, friends, friends of friends, virtual friends, classmates, relatives, stepfathers, neighbors, managers, and co-workers. A study by França and Quevedo (2020) found that 81% of Brazilian women (total n = 141) who had experienced the non-consensual sharing of intimate images knew the perpetrator's identity. In another study, female journalists identified their harasser as a manager or senior officer (Koirala, 2020). However, most studies found that the perpetrator was unknown or unidentified to survivors (Hassan et al., 2020; Koirala, 2020; Sultana et al., 2021; Zagloul et al., 2022).
Impacts of TFSVA
Studies reported a wide range of impacts that victim-survivors experienced during and after TFSVA on a personal, family, and professional basis. Some were minor, while others were described as more serious and, in some cases, long-lasting. There were five main types of impact: psychological and emotional; social; behavioral; financial; and physical harm.
Psychological and Emotional Impacts. A study in Brazil found that one in three victim-survivors of the non-consensual sharing of intimate images experienced post-traumatic stress disorder (França & Quevedo, 2020). Victim-survivors of TFSVA also showed a sense of helplessness, emotional burnout, and self-guilt (Alsawalqa, 2021; Sambasivan et al., 2019). In one study, women, particularly female journalists, tended to think that TFSVA is an unavoidable part of their job; they reported feeling helpless (Kundu & Bhuiyan, 2021).

Social Impacts. A number of studies described how women in LMICs experience the social impacts of TFSVA, including: withdrawal from virtual social life and the virtual public sphere; low academic performance; social isolation; reduced social media activity and use of digital technologies; leaving educational institutions; moving home; harassment in public places; humiliation; and reputational damage to the victim-survivor (Alsawalqa, 2021; Demir & Ayhan, 2022; França & Quevedo, 2020; Hassan et al., 2020; Koirala, 2020; Sarkar & Rajan, 2021).
Jatmiko et al.'s (2020) study described how online sexual harassment imposed "self-censorship" on victim-survivors, with a "new self" emerging that changed the way they interacted online and in everyday life. A South Asian study conducted by Sambasivan et al. (2019) found reputational damage, including adverse social gossip resulting in the loss of arranged marriage opportunities. Such reputational damage was rooted in suspicion of a woman's complicity (due to presumed sexual and premarital relations, considered taboo for most women). In this study, women from minority communities and those with disabilities were most vulnerable to threats of coercive sexual or romantic relationships (Sambasivan et al., 2019).
Financial Impacts. Three studies included in the review found that victim-survivors had experienced financial consequences of TFSVA (França & Quevedo, 2020; Hassan et al., 2020; Kundu & Bhuiyan, 2021). These included loss of employment, difficulty in getting a new job, hampered or restricted professional activities, and spending money on treatment for psychological harm or illness. França and Quevedo (2020) found in their sample of 141 that 6% of victim-survivors of the non-consensual sharing of intimate images had lost their job, and 5% said they faced difficulties finding a new job after victimization. The same study found that 16% (total n = 141) of victim-survivors spent money on counseling or psychiatric treatment to address the impact of their experiences of TFSVA (França & Quevedo, 2020).
Behavioral Impacts. Several behavioral impacts were found in the studies included in the review, including aggressive behavior, eating disorders, heavy smoking, and violence. For example, Alsawalqa (2021) found that behavioral consequences associated with victim-survivors' trauma included eating disorders. The same study found that victim-survivors of online dating violence experienced changed behavioral patterns, such as new violent or aggressive behaviors toward people around them.
Physical Harm. Three studies explored the physical impacts of TFSVA victimization, including sleeping problems, diet changes, unexplained weight loss, and transient tachycardia (Alsawalqa, 2021; Hassan et al., 2020; Sambasivan et al., 2019). Hassan et al.'s (2020) study found that 4.1% of women (n = 148) were exposed to physical consequences due to CV. In Alsawalqa's (2021) study, victim-survivors of EDV suffered various stress-related physical ailments, including shortness of breath, fainting, and sleeping problems.
Coping Strategies and Help-seeking Behavior
Women across the studies demonstrated different coping strategies following TFSVA, including: staying silent; taking no action; adopting risk-reducing strategies; limiting or disconnecting themselves from virtual life; seeking help through informal routes; or reporting through professional and formal channels. These actions, or non-actions, are directly or indirectly related to different factors, including cultural context, availability of family or professional support, fear of physical harm through repercussions, and awareness of TFSVA and reporting.
Individual Responses to TFSVA. A wide range of individual actions was identified, as victim-survivors disconnected or disengaged themselves from virtual life by disabling, deleting, or changing social media profiles or phone numbers, or by ceasing to use the internet (Alsawalqa, 2021; Hassan et al., 2020; Makinde et al., 2021). They also imposed "self-censoring" strategies to minimize the risk of being harassed, by limiting their use of social media and the internet (e.g., posting fewer photos, videos, and comments), removing personal photos, or using non-face photos on social media (Koirala, 2020; Kundu & Bhuiyan, 2021; Sambasivan et al., 2019; Tandoc et al., 2021). Victim-survivors also took more direct action: for example, blocking offenders from their contact lists, devices, or apps, or even confronting the offender and requesting that they stop (Alsawalqa, 2021; Hassan et al., 2020; Makinde et al., 2021; Sambasivan et al., 2019; Sultana et al., 2021). More than 72% of respondents (n = 148) in a study by Hassan et al. (2020) reported blocking the offender as a response to TFSVA.
Several studies found that, in addition to the above actions, victim-survivors also adopted a strategy of "ignoring the harasser or harassment," or developed a "thick skin" that treats this type of harassment as a part of women's everyday lives (França & Quevedo, 2020; Koirala, 2020; Kundu & Bhuiyan, 2021; Sultana et al., 2021; Tandoc et al., 2021). Sometimes this strategy of ignoring the abuse was imposed by higher authorities or family members (Hassan et al., 2020; Koirala, 2020). Alsawalqa (2021) found that some victim-survivors used self-distraction strategies to cope with victimization, for example, exercising, shopping, sleeping more than usual, even taking sedative pills, and consulting doctors. Makinde et al.'s (2021) study found that 5% of respondents (n = 389) attended a counseling session with close associates (family members, friends, classmates, or clergy). The same study also found that some respondents adopted a strategy of apologizing to their social media friends, stating that their accounts had been hacked.
Informal Channels of Help-seeking. Seeking support from family members, friends, and other trusted people was the most commonly reported route to support (Makinde et al., 2021; Sambasivan et al., 2019). Sambasivan et al.'s (2019) study found that 47% of victim-survivors (n = 199) disclosed incidents to family members or friends. Studies by Alsawalqa (2021) and Sambasivan et al. (2019) found that survivors used the perpetrator's friends, or someone else with mutual trust, to convince the offender to remove disseminated photos from social media.
Professional and/or Formal Channels of Help-seeking. Studies show that women survivors of TFSVA in LMICs are reluctant to report incidents through formal channels. Several studies found that victim-survivors did not report their experiences to the police (Koirala, 2020; Makinde et al., 2021; Sambasivan et al., 2019). In contrast, a study by Zagloul et al. (2022) found that a small number, 3.9% of respondents (n = 283), reported the incident to the police. However, survivors who reported the incident also noted that they preferred to keep it secret (Alsawalqa, 2021). The reasons given for the low frequency of reporting to the police included family pressure, fear of re-victimization, negative attitudes toward law enforcement agencies, fear of not obtaining proper justice, and difficulty in proving the authenticity of evidence (e.g., screenshots) (Koirala, 2020; Sultana et al., 2021).
Some women professionals who experienced TFSVA sought support or reported the abuse to their manager or employer (Koirala, 2020; Sultana et al., 2021; Tandoc et al., 2021). Studies revealed that even when higher authorities considered online sexual harassment a serious issue, they were often not well equipped to deal with it and frequently closed complaints with minimal sanctions (Koirala, 2020; Kundu & Bhuiyan, 2021; Sultana et al., 2021; Tandoc et al., 2021). One study reported that victim-survivors preferred to seek help from non-government organizations, as this provided hassle-free support and maintained confidentiality (Sambasivan et al., 2019).
Barriers to Help-seeking
A number of help-seeking barriers were identified in the studies. Many victim-survivors preferred not to disclose TFSVA due to: fear of the social stigmatization of being identified as a victim-survivor; a culture of shame or fear of damaging their reputation; pressure from employers; concern for family honor; and pressure from family to keep silent (Alsawalqa, 2021; Demir & Ayhan, 2022; Hassan et al., 2020; Jatmiko et al., 2020; Koirala, 2020; Sultana et al., 2021; Tandoc et al., 2021; Zagloul et al., 2022). Two studies identified distrust of, or negative attitudes toward or experiences with, the police as a barrier to reporting (Sarkar & Rajan, 2021; Sultana et al., 2021).
Sometimes victim-survivors reported incidents to the respective online platforms (Hassan et al., 2020; Sambasivan et al., 2019; Sultana et al., 2021). However, due to "community standards" problems and platforms' unwillingness to consider the victim-survivor's experience, they were discouraged from filing complaints. This is problematic for women in LMICs: for example, fully clothed photos of a woman might not violate a platform's policy, yet can still be used to harass women in a South Asian context (Sambasivan et al., 2019).
Facilitators to Help-seeking
Several studies identified facilitators or motivational factors for help-seeking and disclosure of TFSVA, including: an "exposing the harasser" culture; whistleblowing to warn others; releasing the burden of proof; resisting victim-blaming cultures by exposing offenders; professional and online support groups; media-literacy training among female journalists; and government support (Demir & Ayhan, 2022; Sultana et al., 2021).
Some studies described how the everyday normalization of sexism and misogynistic behavior toward women is replicated online: it is not different from physical environments but an extension of behaviors that are considered acceptable there (Koirala, 2020; Sarkar & Rajan, 2021). Alsawalqa (2021) described how, in conservative Islamic societies, women alone are blamed for their online victimization because online dating (i.e., dating someone before marriage) is deemed unacceptable behavior. The fear of victim-blaming or slut-shaming culture, the patriarchal power structure of the family, and the normalization of GBV not only create a dominant culture online but also hinder women from disclosing and reporting TFSVA to the police and relevant organizations (Alsawalqa, 2021; Demir & Ayhan, 2022; Hassan et al., 2020; Sultana et al., 2021; Tandoc et al., 2021; Zagloul et al., 2022).
Discussion
In this scoping review, we screened 1,617 papers and found 14 primary studies, published between 1996 and 2022, reporting the characteristics, impacts, and women's responses to TFSVA. Our analysis uncovered a number of essential findings for understanding TFSVA in LMICs. First, echoing the wider literature on TFSVA, the conceptualization of TFSVA varied widely. Second, socio-cultural context matters: patriarchal societal norms and gender power dynamics were the key underlying factors, along with anonymity and cultural taboos, fueling inappropriate sexual behavior online. Third, gender norms and dynamics silenced those who experienced online sexual abuse and shaped how women should respond to TFSVA. Finally, the key to responding to TFSVA in LMICs is to address hegemonic and patriarchal cultures, promote the de-normalization of GBV, and implement robust structural measures.
Our review identified that more than half of the studies (n = 8) explored the nature and types of violence, abuse, or harassment against women perpetrated through digital technologies without giving precise and clear definitions of the terms used. Even where authors did define the terms they adopted, there were many variations among studies in terminology, conceptualization, and definition of technology-facilitated violence and abuse. This is no different from the wider global literature on TFSVA. However, it is essential to consider the culturally contextual and gendered nature of women's position in LMIC societies, as contexts and variables can differ significantly, making comparisons difficult across low- or middle-income countries, let alone with high-income countries. For example, what might be considered a risk factor in one country might be less so in another. Analysis of TFSVA in LMICs also requires centering and understanding the sensitivities surrounding the gender norms and sexuality of women in these settings. For example, in many LMICs, premarital relationships and sexual activities are considered a sin, are prohibited, or are a matter of taboo, such that distributing women's fully clothed photos or calling them names in the wrong context can lead to significant negative consequences, whereas such acts are not, or only mildly, considered sensitive or sexual in nature in Western societies (Sambasivan et al., 2019; Sultana et al., 2021).
Overall, the types of TFSVA reported illustrate that women across LMICs experienced a variety of abuse types in personal and professional settings. However, working professionals and minority women faced more misogynistic comments, gender-based offensive comments, trolling, rape threats, and religious or ethnic hate comments. Zagloul et al. (2022) found that working women were more likely to experience TFSVA than non-working women. Though the comparison between working and non-working women's experiences of TFSVA is not straightforward, it was evident that professionals faced OH disproportionately. Moreover, four studies (Demir & Ayhan, 2022; Koirala, 2020; Kundu & Bhuiyan, 2021; Tandoc et al., 2021) revealed how women working in the media faced OH primarily orchestrated by those in positions of authority. Without a comparative study with high-income countries or research examining this in more detail, it can only be presumed that gender norms, power imbalances, and the influence of patriarchy on the role of women play a part in this finding (Alsawalqa, 2021; Koirala, 2020).
While current scholarship on TFSVA in LMICs reflects the intersections of gender, culture, religion and, sometimes, age, overall there is a distinct lack of attention paid to additional aspects of social location such as sexuality, non-normative gender identities (such as trans and non-binary identities), ethnicity, socio-economic class (except for research on women's experiences in particular professional fields, mostly journalism), or (dis)ability. This likely reflects the nascent state of the evidence base for LMICs; despite there being 136 countries classed as LMICs, only 29 were represented in the included studies.
There was a consensus that the consequences of TFSVA are harmful, severe, and long-lasting. Research from higher-income countries has documented the impact of TFSVA on women's mental health (e.g., depression, anxiety, stress), emotional behavior (e.g., eating disorders, aggressiveness), and internet behavior (e.g., withdrawing from social media, self-censorship) (Afrouz, 2021; Author 2; Backe et al., 2018; Henry & Powell, 2018). In our review, there is strong evidence that TFSVA has similar short-term and long-term impacts (Alsawalqa, 2021; França & Quevedo, 2020; Hassan et al., 2020; Koirala, 2020; Sambasivan et al., 2019). However, for women in LMICs, the reported impacts are individual, family, community, and social in nature. The most distinctive impact is that women faced unprecedented social backlash or repercussions, including reputational damage, cancellation of an arranged marriage, slut-shaming or victim-blaming within or outside the family, and further coercive romantic engagement (Sambasivan et al., 2019). The review found that patriarchal societal norms and gender power dynamics were key underlying factors, along with anonymity, cultural taboos, and religious tagging, fueling TFSVA against women in LMICs.
The main help-seeking and coping mechanisms reported in the studies were self-controlled or individual responses, including the "let it go" approach and changing one's identity on social media. These strategies gave victim-survivors temporary relief from what was happening and hid their identity, helping them develop a "thick skin" and a "distraction strategy" to manage the victimization (Alsawalqa, 2021; Hassan et al., 2020; Koirala, 2020). However, such strategies do little to bring the problem of TFSVA to light at a community or societal level.
In terms of coping mechanisms and help-seeking, family and friends were considered the most important sources with whom survivors felt the confidence or trust to share their experiences and seek help (Makinde et al., 2021; Sambasivan et al., 2019). However, it was common for victim-survivors to be reluctant to report the incident to the police in particular, fearing further victim-blaming, family pressure, and future repercussions. There is a strong body of work (e.g., Loke et al., 2012; Kaur & Garg, 2008) illustrating the reluctance to report interpersonal violence in its various forms (e.g., intimate partner violence, family abuse, and hate crime), and it is therefore not surprising to find that victim-survivors of TFSVA are similarly reticent. There are both similar and distinct reasons, according to the type of violence, and many victim-survivors will anticipate shame, embarrassment, blame, and not being taken seriously by professionals. There can also be a fear of repercussions for making disclosures. Moreover, as TFSVA is not well understood or widely recognized in LMICs, victim-survivors face an additional barrier in making disclosures: they may fear a lack of support, or may not recognize their experiences as abuse.
While the studies emphasize the types and impacts of TFSVA along with help-seeking practices, they did not detail how to prevent TFSVA. Four studies examining OH among women journalists recommended training for journalists and more organizational support or solidarity efforts to tackle OH against female journalists (Demir & Ayhan, 2022; Koirala, 2020; Kundu & Bhuiyan, 2021; Tandoc et al., 2021). However, training is only effective where there is a culture of openness and transparency and a willingness to change. It is the wider culture of patriarchy and hegemonic masculinity that sustains and enables the behaviors described in these working environments. Thus, without wider cultural change, the effects of such training may be limited to raising awareness among participants who are generally the recipients of TFSVA (i.e., women), enabling them to name behaviors in the future, while not reaching those individuals who would benefit from attitude change (i.e., men and the perpetrators of TFSVA).
Overall, there is a dearth of empirical evidence on TFSVA in LMICs. All included studies were peer-reviewed articles. While other relevant literature was found, it was rejected on grounds of quality (e.g., no reporting of methodology), did not meet the inclusion criteria, or lacked sufficient information for an informed decision on inclusion or exclusion. The included studies reported characteristics that reflect scholarship on TFSVA in Western countries, but, as stated above, the specific socio-cultural contexts in LMICs are highly relevant to women's experiences and to the relative acceptance of TFSVA perpetration, meaning that more research is needed to provide a solid evidence base from which to argue for robust policy and practice responses. Finally, as noted earlier, only 29 of the 136 countries meeting the definition of a low- or middle-income country are represented in this review, meaning that research and knowledge about TFSVA in the vast majority of LMICs is lacking, unpublished, or not widely available; more scholarship is therefore needed.
Limitations
To the best of our knowledge, this is the first scoping review on TFSVA in LMICs, providing important insights into its characteristics, impacts, and associated help-seeking behavior. The challenge of conducting a scoping review of studies based in LMICs lies in the heterogeneity of the research units of analysis, designs and, importantly, settings. LMICs encompass a wide range of countries with varying levels of economic, social, and cultural development; research in LMICs is therefore so diverse that ensuring a consistent and comprehensive approach to reviewing and synthesizing literature across different contexts is difficult. Some studies were published in their authors' native languages, and a limitation of this review is that we did not consider articles published in languages other than English, as we lacked the resources for translation services. Another limitation is the lack of consensus on the definition of TFSVA, along with differing sampling procedures and data collection methods, making it difficult to compare and contrast study findings. Moreover, it is highly likely that victim-survivors in LMICs have not shared their experiences of online violence and abuse in detail, owing to disclosure barriers linked to social and cultural factors such as shame, stigma, and reputational damage.
Implications for Future Research, Practice and Policies
This scoping review has important implications for future research, policy, and practice (see Table 3). Overall, there is a need for more quantitative research to measure prevalence, identify which behaviors have the most drastic impacts on victim-survivors, identify factors of victimization and perpetration, standardize comparisons, and determine attitudes, perceptions, and beliefs. Quantitative research could also be used to develop outcome measures and inform the development of interventions; both would need to be developed in a culturally sensitive way. There is also a need for qualitative research on TFSVA, including building a consensus on definitions, terminology, and conceptualizations that reflect the incidences and experiences reported by victim-survivors, and increasing consistency in the understanding and theorization of TFSVA (see Table 4). Such research should also take an intersectional approach to examine the influence of different social characteristics and locations (e.g., age, gender identity, sexual orientation, (dis)ability, and socio-economic position) (Crenshaw, 1991). Further gendered analyses would be appropriate, as it is critical to ensure equal and safe access for women in digital environments, promoting gender equality and building positive forms of masculinity to challenge the long-standing hegemonic masculinity, patriarchal culture, and societal norms in LMICs. Research should also be undertaken in all countries defined as LMICs.
There is some evidence that professional solidarity and group efforts can make a difference in reducing or addressing online sexual harassment and abuse in professional settings (Kundu & Bhuiyan, 2021; Koirala, 2020). Research is needed to explore whether bystander interventions increase harm minimization or prevention in the context of technology-facilitated sexual violence and abuse (TFSVA). This has implications for the development of professional and community responses reflected in policy and practice environments, for example, in the form of codes of conduct and trauma-informed policies for the workplace. However, as noted above, policy and practice change can be difficult to achieve without wider cultural shifts away from the patriarchal norms and entrenched gender inequalities found in LMICs.
Evaluation of existing legal frameworks is needed in LMICs to determine their strengths and weaknesses in addressing TFSVA. This review of policy and law should incorporate an assessment of how current policies respond to the diverse forms of TFSVA, and of whether new laws should be adopted. Research is also needed to determine and develop effective trauma-informed policies and practices in workplace and educational settings addressing prevention and harm reduction linked to TFSVA. Such research should explore understandings of TFSVA as well as policy implementation and knowledge of relevant law, and should apply to all professional settings whose practitioners might encounter victim-survivors (and/or perpetrators) of TFSVA, including criminal justice, social care, health, and charity-sector specialists.
Conclusion
Our scoping review has uncovered that, in line with scholarship in Western countries, the conceptualization of TFSVA in LMICs varies across studies. Yet TFSVA is prevalent, with diverse characteristics and many impacts on women's lives. An important distinction is that this review has drawn into sharp focus the influence of socio-cultural factors, including strong patriarchal norms and values, hegemonic masculinity, and the pervasive victim-blaming attitudes and perceptions found in LMICs. As TFSVA is influenced and shaped by these socio-cultural factors, it is essential to gain a more sophisticated understanding of TFSVA in its cultural context, which differs in LMICs from Western cultures, in order to determine the underlying motivations of perpetrators and survivors' tendency to bear the burden of victimization (by not disclosing or help-seeking), and to develop robust responses to TFSVA.
Table 1. Inclusion and Exclusion Criteria.
Table 2. Summary of the Included Studies. TFVA = technology-facilitated violence and abuse; NGO = non-government organization.
Table 3. Critical Findings.
• There is a notable lack of consensus and clarity in defining the terms used to refer to violence, harassment, and abuse facilitated by digital tools and technologies.
• In examining the types of technology-facilitated sexual violence and abuse (TFSVA), findings indicated that women in low- and middle-income countries (LMICs) experience a diverse range of behaviors, including online sexual harassment, image-based abuse, impersonation or doxing, gender-based hate speech/memes/trolling, and coercive control.
• The consequences of TFSVA broadly fall into categories of mental health, social, and financial harms, though the short-term and long-term impacts are not routinely reported in the studies.
• Consequences are influenced by socio-cultural factors such as reputational damage, cancellation of an arranged marriage, shame, and rejection.
• Coping strategies and help-seeking mechanisms are infrequently reported in the studies. However, the identified responses can be categorized into personal-level responses, informal responses, and formal or professional/legal-level responses.
• Disclosure to formal authorities is uncommon and problematic, with structural barriers and drawbacks anticipated by victim-survivors.
• Diverse socio-cultural factors, including hegemonic masculinity, patriarchal societal norms and values, concern for individual and family dignity and honor, and fear of victim-blaming, slut-shaming, or gender trolling, were identified as key in shaping coping mechanisms, help-seeking behavior, disclosure of victimization, and motivation of perpetration in the context of TFSVA.
Scalar scattering from black holes with tidal charge
The cross sections of black holes with tidal charge, predicted in the context of the Randall-Sundrum brane-world scenario, are computed for the massless scalar field. Results obtained for black holes with different tidal-charge intensities are compared in order to study how this charge modifies the black hole cross sections. Such results are also compared with those for Schwarzschild and extreme Reissner-Nordström black holes. Increasing the tidal-charge intensity makes the black hole absorb more; the increase can also be measured through the narrowing of the interference fringes of the differential scattering cross section. These results indicate that the effects of the tidal charge are very important in phenomena which take place near the black hole, but can be neglected in the far region. Analytical results are obtained in the high-frequency limit and are shown to agree excellently with the numeric results obtained via the partial-wave method. It is shown numerically that black holes with tidal charge obey the universality of the low-frequency absorption cross section of stationary black holes for the massless scalar field.
1 Introduction
Twenty years ago, Arkani-Hamed, Dimopoulos, and Dvali proposed that the fundamental scale of gravity could be as low as the weak scale if spacetime had two or more extra compact dimensions [1]. In the same year, these authors, together with Antoniadis, showed that their proposal could be realized within the context of string theory [2]. One year later, alternatives to this model were proposed by Randall and Sundrum, considering a spacetime with only one warped extra dimension [3,4]. Despite their differences, these scenarios are based on the fact that our observed 4-dimensional Universe is a subspace (the 3-brane, or simply brane) of a higher-dimensional spacetime (the bulk); Standard-Model fields are restricted to the 3-brane while gravity is free to penetrate the bulk.
One of the most important consequences of these so-called brane-world models is that microscopic black holes could be produced in particle collisions of a few TeV at the LHC [5]. Since these black holes would rapidly evaporate via Hawking radiation [11], some effort has been put into determining their greybody factors [12-26], since their evaporation rates depend directly on these factors. However, these models also have important consequences at the astrophysical and cosmological levels [27], which motivated the appearance of solutions describing how black holes interact with particles restricted to the 3-brane (see Refs. [5,28] and references therein).
In this work we present the study of the cross sections of a black hole with tidal charge [41] predicted as a solution considering the Randall-Sundrum scenario. This system is described by the following metric on the 3-brane: 2 1 See Refs. [6-10] for some of the most recent constraints on quantum black hole production at the LHC. 2 Here we work with c = G = 1. where with M being the black hole mass and β = q M 2 . If 0 < q ≤ 1, then the metric (1) coincides with the one of Reissner-Nordström black holes [42]; q = 0 describes Schwarzschild black holes; negative values for q are not allowed in the context of General Relativity, but are permitted in the context of the Randall-Sundrum brane-world scenario where β represents the effect of the 5th dimension on the 3-brane [41] and is related to as "tidal charge". The event horizon of such systems is given by We see that the tidal charge acts in the same sense of the black hole mass, increasing the size of the event horizon, while the electric charge on the Reissner-Nordström solution acts in the opposite sense, once r h tends to decrease with the increase of q. Other consequences of the tidal charge have been studied in detail. For example, in Ref. [32] it has been shown that the tidal charge tends to decrease the oscillation frequency while increase the damping rate of scalar, electromagnetic, and gravitational quasinormal modes. The authors also showed that the absorption cross section and the emission rate of such black holes increase with the increase of the tidal-charge intensity. In Refs. [33][34][35][36] the influence of the tidal charge in the shadow cast by a black hole was studied with the conclusion that the shadow increases if the tidal-charge intensity increases.
Here our interest lies in the roles the tidal charge plays in the cross sections of black holes described by Eq. (1). We use the massless scalar field to model a plane wave impinging upon the black hole and then compute its absorption and differential scattering cross sections numerically using the partial-wave method. We compare results for black holes with different tidal charges and also for Schwarzschild and extreme Reissner-Nordström black holes in order to better understand the effects of the tidal charge on the scattering properties of these black holes.
The present work is organized as follows: in Sect. 2 we study the high-frequency limits of the absorption and scattering cross sections; in Sect. 3 we describe the behavior of the massless scalar field in the considered spacetime in terms of partial waves, presenting asymptotic solutions to the Klein-Gordon equation as well as the general expressions for the cross sections; Sect. 4 shows a selection of cross sections obtained numerically and also their comparisons with the respective analytical results; our final remarks are presented in Sect. 5.
2 Classical and semi-classical scattering
Geodesic limit
The spacetime described by Eq. (1) is spherically symmetric. Therefore, particle motions have two conserved quantities,

E = f(r) ṫ    (4)

and

L = r² φ̇,    (5)

which are related respectively to the energy and the angular momentum of the particle. The dot denotes differentiation with respect to an affine parameter. These constants define the impact parameter of scattered particles, b = L/E. The deflection of null geodesics is governed by

(du/dφ)² = 1/b² − u² f(1/u) ≡ h_b(u),    (6)

where u ≡ 1/r. Differentiating the equation above, we obtain a second-order equation:

d²u/dφ² = −u + 3Mu² − 2qM²u³.    (7)

By setting d²u/dφ² = 0 we obtain the critical-orbit radius, r_c = (3M/2)[1 + (1 − 8q/9)^{1/2}]. From h_b(1/r_c) = 0, we find the critical impact parameter b_c = r_c/f(r_c)^{1/2}, which is the radius of the capture cross section of the black hole, σ^cl_abs = π b_c². The deflection angle of scattered rays is obtained by integrating Eq. (6) from u = 0 to u = 1/r_0, with r_0 the radius of the turning point, i.e., Θ(b) = 2 ∫₀^{1/r_0} h_b(u)^{−1/2} du − π. For β < 0 this integral can be evaluated in closed form (Eq. (8)) in terms of K(k) and F(z, k), the complete and incomplete elliptic integrals of the first kind [43], whose arguments (Eq. (9)) involve the roots u_i (i = 1…4) of h_b(u) for b ≥ b_c, with u_2 ≥ u_1 > 0 and u_4 < u_3 < 0; u_1 = 1/r_0. Expression (9) takes the same form as in the Reissner-Nordström case [44].
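The light-ring quantities can be evaluated directly. A minimal sketch (plain Python, assuming f(r) = 1 − 2M/r + qM²/r² and the light-ring radius r_c = (3M/2)[1 + (1 − 8q/9)^{1/2}]) that reproduces the familiar checks r_c = 3M, b_c = 3√3 M for Schwarzschild (q = 0) and b_c = 4M for the extreme Reissner-Nordström case (q = 1):

```python
import math

def critical_orbit(q, M=1.0):
    """Light-ring radius r_c and critical impact parameter b_c for
    f(r) = 1 - 2M/r + q M^2/r^2 (q < 0 corresponds to a tidal charge)."""
    r_c = 1.5 * M * (1.0 + math.sqrt(1.0 - 8.0 * q / 9.0))
    f_c = 1.0 - 2.0 * M / r_c + q * M * M / (r_c * r_c)
    b_c = r_c / math.sqrt(f_c)             # b_c = r_c / f(r_c)^{1/2}
    return r_c, b_c
```

The capture cross section π b_c² grows with the tidal-charge intensity, consistent with the enhanced absorption discussed in the text.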
The classical differential scattering cross section is given by

dσ/dΩ|_cl = Σ_n (b_n/sin θ) |db/dΘ|_{b=b_n},    (10)

where the sum accounts for the fact that particles can circle the black hole multiple times, so that the same scattering angle θ = |2nπ − Θ| (n = 0, 1, 2, …) is observed for particles with different impact parameters b_n. Figure 1 shows the deflection angle and the differential scattering cross section for massless particles with q = −2, −1, 0, 1. The differential scattering cross section was computed keeping up to the second-largest term of the sum in Eq. (10), dropping that term only when its contribution was smaller than ∼ 0.1%; the contributions of further terms are even smaller. We see that particles can approach closer to black holes with higher q and that the charge has less influence on particles scattered with larger impact parameters. As a consequence, it becomes hard to distinguish the differential scattering cross sections of black holes with different charges when considering particles scattered into small-angle directions (θ ≲ 20°). As we show in Sect. 4, these results agree very well with the wave differential scattering cross section in the limit in which the two are expected to coincide.
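The deflection angle can also be obtained by direct numerical quadrature rather than through the elliptic integrals, which is convenient for reproducing curves like those in Fig. 1. A sketch (plain Python; valid for b > b_c, assuming f(r) = 1 − 2M/r + qM²/r²) that locates the turning point by bisection and removes the square-root singularity with the substitution u = u_1(1 − s²):

```python
import math

def deflection(b, q, M=1.0, n=20000):
    """Deflection angle Theta(b) = 2 * int_0^{u1} du / sqrt(h_b(u)) - pi
    for a null geodesic with impact parameter b > b_c, where
    h_b(u) = 1/b^2 - u^2 (1 - 2Mu + q M^2 u^2)."""
    h_b = lambda u: 1.0 / (b * b) - u * u * (1.0 - 2.0 * M * u + q * M * M * u * u)
    # The turning point u1 is the unique root of h_b in (0, 1/r_c).
    r_c = 1.5 * M * (1.0 + math.sqrt(1.0 - 8.0 * q / 9.0))
    lo, hi = 0.0, 1.0 / r_c
    for _ in range(100):                       # bisection: h_b > 0 below the root
        mid = 0.5 * (lo + hi)
        if h_b(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    u1 = lo                                    # slightly below the root, so h_b >= 0
    # Substitute u = u1 (1 - s^2): the integrand becomes regular at s = 0.
    total, ds = 0.0, 1.0 / n
    for i in range(n):                         # midpoint rule on s in (0, 1)
        s = (i + 0.5) * ds
        u = u1 * (1.0 - s * s)
        total += 2.0 * u1 * s / math.sqrt(h_b(u)) * ds
    return 2.0 * total - math.pi
```

For Schwarzschild (q = 0) and b = 100M this returns the weak-deflection value Θ ≈ 4M/b, while Θ grows without bound as b approaches b_c.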
Glory approximation
Near the backward direction (θ = 180°), the scalar scattering cross section can be described analytically by the glory approximation [45,46]:

dσ/dΩ|_glory = 2πω b_g² |db/dθ|_{θ=π} [J_0(ω b_g sin θ)]²,    (11)

where J_0(·) is the Bessel function of the first kind [47] and b_g is the impact parameter of backscattered rays. Equation (11) is a semi-classical formula, as we can infer from the fact that it involves both ray (b) and wave (ω) properties. It is therefore expected to be valid for Mω ≫ 1. Despite this, the glory approximation shows very good agreement with the numeric results even at intermediate frequencies, for instance Mω = 5.0, as we will see below.
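Equation (11) is easy to evaluate once b_g and |db/dθ|_{θ=π} are known; in the sketch below these are illustrative placeholder numbers, not the values extracted from the geodesic analysis, and J_0 is computed from its integral representation so that only the standard library is needed:

```python
import math

def j0(x, n=2000):
    """Bessel J0 from its integral representation,
    J0(x) = (1/pi) * int_0^pi cos(x sin t) dt, via the midpoint rule."""
    dt = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * dt)) for i in range(n)) * dt / math.pi

def glory(theta, omega, b_g, dbdtheta):
    """Scalar glory approximation, Eq. (11):
    dsigma/dOmega = 2 pi omega b_g^2 |db/dtheta|_{theta=pi} J0(omega b_g sin theta)^2."""
    return (2.0 * math.pi * omega * b_g * b_g * abs(dbdtheta)
            * j0(omega * b_g * math.sin(theta)) ** 2)
```

Since the fringes follow the zeros of J_0(ω b_g sin θ), their angular width scales as 1/(ω b_g): a larger b_g (stronger tidal charge) narrows the fringes, as stated above.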
In Fig. 2 we plot the classical parameters which define the intensity, b_g² |db/dθ|_{θ=π}, and the widths, set by b_g, of the glory rings. As we can see, the glory intensity increases with the tidal-charge intensity, i.e., as |q| becomes higher in the q < 0 region. This is not observed in the case of Reissner-Nordström black holes, where the glory intensity decreases with increasing black hole electric charge up to q ≈ 0.8, when the intensity starts increasing [44]. Formula (11) predicts that the interference-fringe widths vary inversely with b_g. Therefore, from Fig. 2 we conclude that the fringe widths must decrease with increasing tidal-charge intensity. In the Reissner-Nordström case, increasing the charge intensity results in an increase of the interference-fringe widths [44]. As we show in Sect. 4, this last prediction based on the glory approximation is confirmed by the numeric results.
In Sect. 4 we compare the glory approximation with the numeric results obtained via the partial-wave method. This comparison is important because it helps us estimate the accuracy of our results; in some cases, a lack of agreement between the glory approximation and the partial-wave results may indicate the occurrence of extraordinary phenomena in the scattering process. This is the case for electromagnetic and gravitational scattering from Reissner-Nordström black holes, where both a helicity-reversing process and the interconversion of spin-1 and spin-2 waves take place [48,49].
3 Wave scattering
Here we consider the massless scalar field as a model for the scattered wave. This field is governed by the Klein-Gordon equation, which reads

(1/√−g) ∂_μ (√−g g^{μν} ∂_ν Φ) = 0.    (12)

The metric in this equation is implicitly given in Eq. (1), with g denoting its determinant. The spherical symmetry of the spacetime allows us to define stationary modes proportional to the scalar spherical harmonics, Φ_ωlm = [ψ_ωl(r)/r] Y_l^m(θ, φ) e^{−iωt}. Once the angular and temporal parts of the solution are known, we are left with the radial equation

f d/dr [f dψ_ωl/dr] + [ω² − V_l(r)] ψ_ωl = 0,    (13)

where the effective potential is given by

V_l(r) = f(r) [l(l + 1)/r² + f′(r)/r].    (14)

The radial equation (13) takes a Schrödinger-like form in terms of the tortoise coordinate, defined by d/dr_* = f d/dr:

d²ψ_ωl/dr_*² + [ω² − V_l] ψ_ωl = 0.    (15)

Some plots of the effective potential (14) are shown in Fig. 3. There we can see that the effective potential vanishes in the limits r_* → ±∞ independently of the values of q and l. The main consequence of changing q is observed in the maximum of the effective potential, which decreases as the tidal-charge intensity |q| increases. Therefore, we can already predict that partial waves are more absorbed by black holes with a more intense tidal charge. This agrees with the behavior of the absorption cross section previously observed in Ref. [32], with the black hole shadows of Refs. [33-36], and also with the high-frequency analysis presented in Sect. 2, where we observed that massless particles can come closer to the black hole without being absorbed the higher the value of q (see Fig. 1, top panel). The asymptotic analysis of V_l(r_*) allows us to describe the behavior of ψ_ωl in regions near the horizon and far from the black hole; such behavior is necessary to provide the mathematical expressions of the cross sections.
For r_* → −∞, V_l(r_*) → 0 and therefore, for the scattering problem,

ψ_ωl ≈ A^tr_ωl e^{−iωr_*}.    (16)

For r_* → ∞, V_l(r_*) → 0 and therefore

ψ_ωl ≈ A^in_ωl e^{−iωr_*} + A^ref_ωl e^{iωr_*}.    (17)

In the region r ≫ r_h, a more precise form of the radial function can be obtained by considering that V_l(r_*) ≈ l(l + 1)/r_*² (r_* ∼ r). In this case, the radial solution can be expressed as

ψ_ωl ≈ ωr [i^{l+1} A^ref_ωl h^{(1)}_l(ωr) + (−i)^{l+1} A^in_ωl h^{(2)}_l(ωr)],    (18)

where h^{(1,2)}_l(·) are the spherical Hankel functions of the first and second kinds [43], respectively. Since h^{(1)}_l(x) ≈ (−i)^{l+1} e^{ix}/x and h^{(2)}_l(x) ≈ i^{l+1} e^{−ix}/x in the region x ≫ l(l + 1)/2, we recover (17) from (18) at infinity, as expected.
We can define the reflection and transmission coefficients in terms of A^in_ωl, A^ref_ωl, and A^tr_ωl respectively as

R_ωl = |A^ref_ωl / A^in_ωl|²    (19)

and

T_ωl = |A^tr_ωl / A^in_ωl|².    (20)

Flux conservation implies that R_ωl + T_ωl = 1. These coefficients also enter the description of the absorption cross section, which for massless scalar monochromatic plane waves scattered in spherically symmetric spacetimes can be shown to be [31,50-54]

σ_abs = Σ_{l=0}^{∞} σ^(l)_abs,    (21)

where

σ^(l)_abs = (π/ω²)(2l + 1) T_ωl    (22)

is the absorption cross section of each partial wave, usually referred to as the partial absorption cross section. We can also define the phase shifts from the coefficients given in Eq. (17):

e^{2iδ_l(ω)} = (−1)^{l+1} A^ref_ωl / A^in_ωl.    (23)

The differential scattering cross section for massless scalar monochromatic plane waves in spherically symmetric spacetimes is given by [44,55-57]

dσ_el/dΩ = |f_ω(θ)|²,    (24)

where f_ω(θ) is the scattering amplitude, which in terms of partial waves reads

f_ω(θ) = (1/2iω) Σ_{l=0}^{∞} (2l + 1) [e^{2iδ_l(ω)} − 1] P_l(cos θ),    (25)

with P_l(·) the Legendre polynomials. The sum of the total absorption cross section σ_abs and the scattering cross section σ_el defines the total cross section, σ_tot. The total cross section is known to diverge for waves scattered by potentials which asymptotically fall off like the Coulomb potential, V ∼ 1/r. This is the case for the spacetime studied here, since the dominant term in the metric at infinity is the Schwarzschild term, −2M/r. However, it is possible to obtain a finite total cross section for small black holes on the brane in the ADD model if the bulk has 6 or more dimensions [29].
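To make the procedure concrete, the sketch below (plain Python with a classic RK4 march; illustrative parameters, not the production code behind the figures) integrates the Schrödinger-like radial equation in the tortoise coordinate from a purely ingoing wave near the horizon and extracts the transmission coefficient by matching onto the far-field form A_in e^{−iωr_*} + A_ref e^{iωr_*}. Only the moduli of the coefficients are needed, so the additive constant in r_* never has to be fixed:

```python
import math

def transmission(q, omega, l=0, M=1.0, rstar_max=600.0, h=0.05):
    """Transmission (T) and reflection (R) coefficients for one partial wave,
    for the tidal-charge metric f(r) = 1 - 2M/r + q M^2/r^2, obtained by
    integrating the Schrodinger-like radial equation in the tortoise
    coordinate, starting from a purely ingoing wave near the horizon."""
    f  = lambda r: 1.0 - 2.0 * M / r + q * M * M / (r * r)
    df = lambda r: 2.0 * M / (r * r) - 2.0 * q * M * M / r**3
    V  = lambda r: f(r) * (l * (l + 1) / (r * r) + df(r) / r)  # effective potential
    r_h = M * (1.0 + math.sqrt(1.0 - q))

    def rhs(y):  # y = (r, psi, dpsi/dr*); dr/dr* = f(r)
        r, psi, phi = y
        return (f(r), phi, (V(r) - omega * omega) * psi)

    y = (r_h * (1.0 + 1e-4), 1.0 + 0.0j, -1j * omega)   # A^tr = 1 at the horizon
    for _ in range(int(rstar_max / h)):                 # classic RK4 march in r*
        k1 = rhs(y)
        k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + (h / 6.0) * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(y, k1, k2, k3, k4))

    # Far field: psi = A_in e^{-i w r*} + A_ref e^{+i w r*}.  Only the moduli
    # are needed, so the unknown offset in r* drops out of the matching.
    _, psi, phi = y
    A_in  = 0.5 * abs(psi + 1j * phi / omega)
    A_ref = 0.5 * abs(psi - 1j * phi / omega)
    return 1.0 / A_in**2, (A_ref / A_in)**2             # T, R

T0, R0 = transmission(q=0.0,  omega=0.15)   # Schwarzschild
Tn, Rn = transmission(q=-2.0, omega=0.15)   # tidal charge
sigma0 = (math.pi / 0.15**2) * (2 * 0 + 1) * Tn   # partial absorption cross section
```

Flux conservation, R_ωl + T_ωl = 1, serves as a built-in accuracy check, and at fixed Mω the transmission grows with the tidal-charge intensity, in line with the lower potential barrier seen in Fig. 3.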
Numeric results
Here we present numeric results for the absorption (21) and differential scattering (24) cross sections. These results are obtained by matching numeric solutions of the radial equation (13) with the corresponding asymptotic solutions, Eq. (17). In order to improve precision, we may use the solution in terms of spherical Hankel functions, Eq. (18), or improve the asymptotic solutions (16) and (17) with a power series expansion (cf., for example, Eqs. (14)-(16) of Ref. [57]); here we use the first approach. Once the transmission coefficients are found, the computation of the absorption cross sections is straightforward. The situation is not so direct in the case of the differential scattering cross section. The scattering amplitude sum (25) is known to be divergent in the forward direction and poorly convergent in other cases. Therefore, we apply a convergence method introduced in Ref. [58] in order to obtain precise results for the differential scattering cross section computing a relatively small number of phase shifts. Figure 4 shows the partial absorption cross sections for black holes with tidal charge q = −1, −2 and also for the Schwarzschild and extreme Reissner-Nordström black holes. We compare the partial cross section for l = 0 with both the black hole mass squared (top graph) and the black hole area, A = 4πr 2 h (bottom graph). For fixed mass, the partial absorption cross section increases with the decrease of q. For black holes with fixed area, however, the partial absorption cross section rapidly increases with the increase of q. This is so because the black hole tends to shrink as q becomes higher. Therefore, in the bottom panel of Fig. 4, we are actually comparing black holes with different masses which are higher for higher q in order to keep the event horizon size unaltered. Also from the bottom panel of Fig. 4 we can infer that σ abs → A when Mω → 0. 
This is in agreement with analytical results which predict that stationary black holes have the absorption cross section for lowfrequency massless scalar field equal to their area [59,60]. In the top graph of Fig. 4 we also show the partial absorption cross sections for l = 1 in units of the black hole mass squared. Again the cross section increases with the decrease of q, as expected from the effective potential behavior (see Fig. 3). Similar results have been observed for higher values of l.
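The matching scheme described above (a purely ingoing wave near the horizon, decomposed into ingoing and outgoing waves far away) can be sketched for a generic short-ranged potential. This is a toy illustration only: a Pöschl-Teller barrier stands in for the true $V_l(r_*)$, and plane-wave asymptotics are used instead of the more precise Hankel-function form.

```python
import numpy as np
from scipy.integrate import solve_ivp

def transmission(omega, V, r_min=-30.0, r_max=30.0):
    """Integrate psi'' + (omega^2 - V(r_*)) psi = 0 from a purely ingoing
    wave at r_* = r_min (A_tr = 1) and match at r_* = r_max to
    A_in e^{-i omega r_*} + A_ref e^{i omega r_*}."""
    def rhs(r, y):
        psi, dpsi = y
        return [dpsi, (V(r) - omega**2) * psi]

    y0 = [np.exp(-1j * omega * r_min), -1j * omega * np.exp(-1j * omega * r_min)]
    sol = solve_ivp(rhs, (r_min, r_max), y0, rtol=1e-10, atol=1e-10)
    psi, dpsi = sol.y[0, -1], sol.y[1, -1]

    # solve psi = A_in e^{-i w R} + A_ref e^{i w R} and its derivative for A_in, A_ref
    R = r_max
    A_in = 0.5 * (psi - dpsi / (1j * omega)) * np.exp(1j * omega * R)
    A_ref = 0.5 * (psi + dpsi / (1j * omega)) * np.exp(-1j * omega * R)

    T = 1.0 / abs(A_in) ** 2          # |A_tr / A_in|^2 with A_tr = 1
    Refl = abs(A_ref / A_in) ** 2     # |A_ref / A_in|^2
    return T, Refl

# toy Poschl-Teller barrier standing in for V_l(r_*)
T, Refl = transmission(0.5, lambda r: 1.0 / np.cosh(r) ** 2)
```

Flux conservation, $R_{\omega l} + T_{\omega l} = 1$, serves as a useful numerical sanity check on the matching.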
In Fig. 5 we present the results for the total absorption cross sections. The top panel shows the comparison of the total absorption cross sections for black holes with tidal charge q = −2, −1 as well as for Schwarzschild and extreme Reissner-Nordström black holes, considering the massless scalar field. In the bottom panel we compare the numeric results with the "sinc approximation" [61] for q = −0.5, −1.5, which were obtained and originally presented in Ref. [32] (see Fig. 9 therein) and are reproduced here thanks to their authors. We see that the absorption cross section rapidly decreases with the increase of q, as has already been observed for the partial absorption in the top graph of Fig. 4. Also, the absorption cross section oscillations become wider with the increase of q. The wave absorption cross section oscillates around the geometrical-optics limit value (straight lines on the top panel) and agrees excellently with the "sinc approximation" already for relatively low values of the frequency, Mω ∼ 0.5. Figure 6 shows the comparison of the differential scattering cross sections of black holes with q = −2, −1 and Mω = 5.0 obtained numerically, via the geodesic approach, and via the glory approximation for massless scalar particles. In all cases we see that the glory approximation fits the numeric results well near the backward direction. A comparison of the differential scattering cross sections of black holes with tidal charge with those of Schwarzschild (q = 0) and extreme Reissner-Nordström (q = 1) black holes for the massless scalar field with Mω = 2.0 is presented in Fig. 7. We see that the tidal charge does not play an important role in the average of the scattered-flux intensities, but its change modifies the width of the interference fringes of the differential scattering cross section. The decrease of q implies a decrease of the interference fringe widths. Near the forward direction, θ ≲ 20°, the results tend to depend weakly on the value of q.
This is expected since the metric term of the charge q varies as r^{-2} and tends to be small compared with the mass term in the far region through which particles that suffer small deflections pass.
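The glory approximation referred to above has, for scalar waves, the standard backward-glory form dσ/dΩ ≈ 2πω b_g² |db/dθ|₍θ=π₎ [J₀(ωb_g sin θ)]². The sketch below evaluates this form with purely illustrative values of the glory impact parameter b_g and of |db/dθ| at θ = π; these are stand-in numbers, not the actual values for the black holes studied here.

```python
import numpy as np
from scipy.special import j0

def glory_dcs(omega, b_g, dbdtheta, theta):
    """Backward-glory approximation for the scalar-wave differential
    scattering cross section near theta = pi (spin 0 -> Bessel J_0)."""
    return 2 * np.pi * omega * b_g**2 * abs(dbdtheta) * j0(omega * b_g * np.sin(theta)) ** 2

# hypothetical glory parameters (b_g and |db/dtheta| at theta = pi), for illustration only
theta = np.linspace(0.75 * np.pi, np.pi, 200)
dcs = glory_dcs(2.0, 5.35, 0.15, theta)
```

The oscillatory Bessel factor reproduces the interference fringes near the backward direction, with fringe width set by 1/(ωb_g); this is why a change in the effective glory impact parameter with q changes the fringe widths.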
Final remarks
We have computed the absorption and scattering cross sections of black holes with tidal charge for the massless scalar field. Since the metric form of these black holes coincides with the metric form of Reissner-Nordström black holes, we have compared the main results with similar results for both Schwarzschild and extreme Reissner-Nordström black holes to have a clearer understanding of the effect of the tidal charge in the cross sections.
The tidal charge acts mainly in phenomena which take place in the region near the black hole. Black holes with more intense tidal charges, i.e. higher −q, tend to absorb more, as noticed by the increase of the partial and total absorption cross sections when presented in units of the black hole mass squared (cf. top graphs of Figs. 4 and 5). The same can be concluded by analyzing the comparison of the differential scattering cross sections for different values of q (cf. Fig. 7). In this case, we have shown that a change in q has direct consequences on the interference fringe widths, which are more intense for high values of the scattering angle. In the near-forward direction, small θ, the interference fringes wane, and the differential scattering cross sections tend to be the same, not depending on the value of q. Similar consequences of the change of q were noticed in the case of Reissner-Nordström black holes [31,44].
All approximations regarding the cross sections of the massless scalar field apply well in the case of black holes with tidal charge. We have shown that such black holes obey the universality of low-frequency absorption - which says that the absorption cross section for the massless scalar field tends to the black hole area in the low-energy limit if the black hole is stationary [59] - by expressing numeric results in terms of the black hole area (cf. bottom graph of Fig. 4). We also showed that the total absorption cross sections tend to oscillate around the corresponding capture cross sections in the geometrical-optics limit (cf. top graph of Fig. 5) and agree excellently with the "sinc approximation" [32] (see bottom panel of Fig. 5), which is valid in the high-frequency limit [61]. In the case of the differential scattering cross sections, the analytical results, geodesic and glory approximations, were shown to be in good agreement with the numeric results in their respective regimes of validity - low scattering angles for geodesics and large angles for the glory approximation - even though not very high frequencies have been considered.
|
2018-05-28T21:06:20.000Z
|
2018-05-14T00:00:00.000
|
{
"year": 2018,
"sha1": "16184fd942817321d096bd1fc91be9582d6d78c4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-6316-9.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3159246b7d8bcbe28f8aad4bf16d1994ba088c7e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
264344136
|
pes2o/s2orc
|
v3-fos-license
|
IMPOSTORS AMONG FAMILY DOCTORS
INTRODUCTION
Introduced in Ukraine in recent decades, the competency-based approach has become a catalyst for changes in both quantitative and qualitative dimensions of educational outcomes. Due to its evident advantages, such as bridging gaps between different global education systems, creating prerequisites for integration into the European space, and enhancing the connection between theoretical knowledge and practical needs, this reform direction undoubtedly strengthens the national educational environment and allows learners to quickly acquire the necessary level of professional competence [1]. However, the continuous increase in quality criteria contributes to elevated stress levels among young professionals who are just embarking on their career paths. It is precisely at the outset of their journey to success and upon assuming a new social role that the impostor syndrome (or «impostor phenomenon») exerts its influence - a psychological phenomenon where individuals suffer from pervasive self-doubt about their abilities, achievements, and professional success [2]. In other words, they believe that their success depends solely on their ability to confidently demonstrate their pseudo-competence, hindering their successful integration into a new environment, adaptation, and professional growth [3]. The frequency of this syndrome is higher among professionals in the intellectual sphere and is influenced by the absence of clear quality standards for work and a lack of feedback mechanisms (medical personnel, chief executives, academic staff, etc.) [4]. Persistent doubts about their achievements lead to emotional emptiness and can quickly transform into burnout syndrome [5].
The aim: to analyze the prevalence of the impostor syndrome among young doctors specializing in «General Practice -Family Medicine» and explore possible correlations with burnout syndrome.
MATERIALS AND METHODS
A cross-sectional study was conducted using a questionnaire without randomization. Anonymous surveying took place immediately after the completion of the state certification for the title of specialist doctor in June 2023 and included two questionnaires. The first one was the «Clance Impostor Phenomenon Scale» (CIPS), consisting of 20 statements rated on a 5-point scale (Likert scale), where 5 indicates complete agreement with the statement and 1 indicates complete disagreement. If the total score ranges from 0 to 40, there are no impostor syndrome manifestations; 41-60 indicates a mild degree; 61-80 indicates a moderate degree; and over 81 points indicates a severe degree. It should be noted that in Ukraine this test is still undergoing adaptation; currently, it is the most acceptable tool in practice for assessing the expression of the impostor syndrome. We assessed the reliability of the questionnaire using Cronbach's alpha method (α=0.94), indicating high internal consistency of the Ukrainian version of the questionnaire used in the present study.
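The CIPS scoring rule and the reliability check described above can be sketched as follows. The severity bands follow the cut-offs given in the text (the boundary between "moderate" and "severe" is taken as ≤80 versus >80, an assumption of this sketch), and `cronbach_alpha` implements the standard formula for internal consistency.

```python
import numpy as np

def cips_score(responses):
    """Total CIPS score (20 Likert items, 1-5) and its severity band."""
    if len(responses) != 20 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 20 responses on a 1-5 scale")
    total = sum(responses)
    if total <= 40:
        band = "none"
    elif total <= 60:
        band = "mild"
    elif total <= 80:
        band = "moderate"
    else:
        band = "severe"
    return total, band

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

For example, a respondent answering 3 to every item scores 60, the upper edge of the mild band.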
To assess burnout syndrome, we used the Maslach Burnout Inventory Human Services Survey for Medical Personnel (MBI HSS (MP)), which consists of 22 statements. The evaluation of results was conducted using standard methodology [6]. The sample comprised 27 doctors, the complete graduating class of intern doctors in 2023 from the Department of Family Medicine at the Faculty of Postgraduate Education and Propaedeutics of Internal Medicine.
Following the recommendations of the MBI HSS (MP) questionnaire authors, all participants (n=27) were categorized into five profiles: «Burned-Out» (individuals with high scores on the emotional exhaustion and depersonalization scales according to the MBI), «Engaged» (individuals with low scores on the emotional exhaustion and depersonalization scales, and high scores on the personal accomplishment scale), «Overloaded» (characterized by high levels of emotional exhaustion only), «Detached» (this profile is formed due to the presence of cynicism), and «Ineffective» (low level of professional accomplishment). According to the assessment standards of the MBI HSS (MP), individuals with the «Engaged» profile do not show any signs of burnout syndrome. The «Overloaded», «Ineffective» and «Detached» profiles are intermediate with respect to burnout syndrome risks and are amenable to correction. Respondents falling under the «Burned-Out» profile are more likely to have developed burnout syndrome.
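A minimal sketch of the profile mapping described above: the boolean flags stand in for the standard MBI subscale cut-offs (which are not reproduced in the text), and the precedence of the rules is an assumption of this sketch, not the authors' published algorithm.

```python
def mbi_profile(ee_high, dp_high, pa_low):
    """Map MBI subscale flags (high emotional exhaustion, high
    depersonalization, low personal accomplishment) to the five profiles."""
    if ee_high and dp_high:
        return "Burned-Out"
    if ee_high:
        return "Overloaded"
    if dp_high:
        return "Detached"
    if pa_low:
        return "Ineffective"
    return "Engaged"
```

Under this mapping, a respondent with no elevated subscales and adequate personal accomplishment falls into the «Engaged» profile.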
The procedure of our research fully adhered to widely accepted moral norms and to the requirements for the observance of the rights, interests, and personal dignity of research participants, in accordance with the principles of bioethics outlined in the Helsinki Declaration «Ethical Principles for Medical Research Involving Human Subjects» and the «Universal Declaration on Bioethics and Human Rights» (UNESCO). All participants provided informed consent to participate in the survey.
Statistical data analysis and presentation of results were carried out using Microsoft Excel and SPSS v29 (trial version). The distribution of the results obtained in the present study was assessed using the Shapiro-Wilk test. The majority of the data (90%) showed a normal distribution, allowing us to subsequently use parametric statistical criteria. The critical level of statistical significance was set at p<0.05.
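The normality screening step can be reproduced with SciPy's implementation of the Shapiro-Wilk test. The synthetic sample below is illustrative only: it is drawn to match the reported age mean and SD, not taken from the authors' data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic stand-in for a variable with M = 24.9, SD = 0.57, n = 27
sample = rng.normal(loc=24.9, scale=0.57, size=27)

stat, p = stats.shapiro(sample)
is_normal = p >= 0.05  # fail to reject normality at the 5% level
```

When p ≥ 0.05, normality is not rejected and parametric criteria (as used in the study) are justified for that variable.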
RESULTS
Our study involved 27 young doctors specializing in «General Practice - Family Medicine», who completed their internship in June 2023 and received their specialist medical practitioner certificates. The respondents had an average age of M=24.9 (SD=0.57) years; 25 were female and 2 were male.
According to our data, the lowest score obtained on the CIPS questionnaire was 42, indicating that all respondents exhibited manifestations of the impostor syndrome ranging from mild to severe (Fig. 1). This underscores the presence of this psychological phenomenon among all intern doctors who transition from the student role to that of independent practitioner when assuming the new social role of a family doctor.
The impostor syndrome is a genuine form of intellectual self-doubt, characterized by a combination of tendencies towards perfectionism, anxiety, and low self-esteem. According to our data, nearly half of the young family doctors exhibited scores corresponding to moderate and severe levels of expression for these components in their personality structure. This primarily indicated an inability to acknowledge their real achievements, knowledge, successes, competencies, and skills. It is worth noting that manifestations of the impostor syndrome become more pronounced in competitive work environments, during transitions to new job positions, or when acquiring higher-ranking positions. This aligns with the design of the present study, as the survey was conducted during the transition from intern to independent practitioner, a phase where the competitive aspects of young professionals' development increase, along with their responsibilities and the demands for evaluating professional abilities. In accordance with the objectives of the present study, we also analyzed the results of the MBI HSS (MP) questionnaire to determine the prevalence of burnout syndrome components among young specialists in «General Practice - Family Medicine» (Table 1). Based on the data we obtained, it was revealed that 2 doctors exhibit an unfavorable profile characterized by high levels of emotional exhaustion, depersonalization, and reduced professional accomplishment. The distribution of intern doctors according to burnout syndrome profiles is presented in Fig. 2. According to our data, the majority of young family medicine professionals (44% of observations) fall into the «Engaged» profile and therefore do not exhibit any signs of burnout syndrome.
Our data also indicate that 26% of young family medicine specialists had an «Ineffective» profile, 19% had an «Overloaded» profile, and 4% of observations showed a «Detached» profile. These profiles, due to the risks of developing burnout syndrome, are considered intermediate and amenable to correction. Unfortunately, 7% of respondents fell under the criteria for the «Burned-Out» profile, and they are more likely to have already developed burnout syndrome.
The next stage of the present study involved searching for potential correlational relationships between the impostor syndrome phenomenon and the components of burnout syndrome. This exploration took into account the similarities and intersections of these components and the psychological predictors that shape these phenomena (Table 2).
DISCUSSION
The assessment of the obtained correlational relationships revealed that the higher the score on the CIPS questionnaire, the more pronounced were the components of burnout syndrome such as emotional exhaustion (p=0.002) and depersonalization (p=0.000214). In our opinion, this is a consistent process of psychological exhaustion in individuals due to overstrain of adaptive capacities under the pressure of high demands in the job market and self-doubt (the impostor syndrome, which was present in all respondents). It is important to note that the impostor syndrome is an internal experience characterized by a set of beliefs and thoughts that hinder an individual from accepting their achievements, praise, and success, despite evidence to the contrary. Individuals with the impostor syndrome tend to downplay and negate their own successes. Among young medical professionals, this could be triggered by factors such as lack of time, information deficiency, comorbidities in patients, and the inability to track cause-and-effect relationships. In routine practice, this might manifest as physical and emotional fatigue, a lack of resources to fulfill everyday professional tasks due to improper resource allocation, leading to a cynical and detached attitude towards colleagues, one's own work, and even deformation of relationships within the team. It could also be a manifestation of strong dependency of self-esteem on society's opinion.
It is noteworthy that we did not find a correlation between the impostor syndrome and the level of reduced professional abilities, as the present study was conducted at the early stage of a young professional's career development and formation as a family doctor. The burnout phenomenon, on the other hand, represents a later stage and a more profound disruption in the destructive burnout model.
The observed interplay between the impostor syndrome and burnout syndrome in young medical professionals provides additional opportunities for preventing burnout and requires further research.
CONCLUSIONS
1. 44% of young family doctors exhibited normal indicators across all dimensions of the professional burnout syndrome, while an additional 48% were classified under intermediate profiles (Overloaded, Ineffective, Detached), which can be addressed through timely identification and effective management.
2. All respondents displayed manifestations of the impostor phenomenon at varying degrees of intensity.
Figure 1. Distribution of respondents according to the degree of severity of impostor syndrome
Figure 2. The distribution of intern doctors according to burnout syndrome profiles
|
2023-10-20T15:28:00.637Z
|
2023-09-30T00:00:00.000
|
{
"year": 2023,
"sha1": "4652328587507cf53a656001453a59287517b7e1",
"oa_license": "CCBYNC",
"oa_url": "https://cp-medical.com/index.php/journal/article/download/300/263",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5da896620dde3e37df5c1e05280438036a7384bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
222147907
|
pes2o/s2orc
|
v3-fos-license
|
Co-Produced Care in Veterinary Services: A Qualitative Study of UK Stakeholders’ Perspectives
Changes in client behaviour and expectations, and a dynamic business landscape, amplify the already complex nature of veterinary and animal health service provision. Drawing on prior experiences, veterinary clients increasingly pursue enhanced involvement in services and have expectations of relationship-centred care. Co-production as a conceptualisation of reciprocity in service provision is a fundamental offering in the services sector, including human medicine, yet the role of co-production in veterinary services has been minimally explored. Utilising a service satisfaction framework, semi-structured interviews (n = 13) were completed with three veterinary stakeholder groups, veterinarians, allied animal health practitioners, and veterinary clients. Interview transcript data were subject to the qualitative data analysis techniques, thematic analysis and grounded theory, to explore relationship-centred care and subsequently conceptualise co-production service for the sector. Six latent dimensions of service were emergent, defined as: empathy, bespoke care, professional integrity, value for money, confident relationships, and accessibility. The dimensions strongly advocate wider sector adoption of a co-produced service, and a contextualised co-production framework is presented. Pragmatic challenges associated with integration of active veterinary clients in a practitioner–client partnership are evident. However, adopting a people-centric approach to veterinary services and partnerships with clients can confer the advantages of improved client satisfaction, enhanced treatment adherence and outcomes, and business sustainability.
Introduction
Undeniably, the veterinary business landscape is experiencing a period of unprecedented change. Corporate consolidation and a growth in practice size [1], digitisation and telemedicine [2,3], the rise of pet care services [4], attrition rate of veterinarians, job dissatisfaction and burnout [5], and the feminisation of the profession [6][7][8] all contribute to the transformation of the sector. Across the allied animal health sector (paraprofessional practitioners), the specialist services offered continue to grow and develop in all areas of animal health [1,8].
Concomitant changes are evident in client behaviour and client expectations of veterinary services. Animal owners and keepers are more discerning and sophisticated than ever before. They are arguably more knowledgeable, and have constant access to readily available information, data, and knowledge through on-line sources and search engines [9,10]. Through social media platforms, clients connect with other like-minded animal owners and share experiences and opinions rapidly through positive
Study Design
A qualitative research approach was adopted to explore veterinary stakeholder opinion and experience of service received and given. The methodology was selected as a valuable technique to appreciate complex issues of attitudes, perceptions, and opinions [39], and to explore and generate knowledge based on human experience. It did not seek to measure or quantify co-produced service for the sector. Data were collected using semi-structured interviews with the three stakeholder groups categorised as veterinarians, allied animal health practitioners (herein referred to as allied practitioners), and clients. Identification and selection of participants was performed using homogeneous purposive sampling, ensuring a well-informed and germane contribution [40]. Participants were selected according to the following criteria: professional role (veterinarian or allied practitioner), or client, age, and species of animal kept or treated. Participants were geographically dispersed over the following countries of the United Kingdom: England, Scotland, and Wales. Three veterinarians, five allied practitioners and five clients were interviewed.
A summary of participating stakeholders is provided in Table 1. Recruitment of participants was achieved through professional, academic, and industry contacts. All subjects gave informed written consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Harper Adams University, UK (code 4295-201412STAFF). Qualitative reporting follows the Consolidated Criteria for Reporting Qualitative Research (COREQ) process and checklist [41]. Techniques to standardise interview behaviour are beneficial to the process and validity of qualitative techniques. Accordingly, an interview sheet and prompt questions were devised using the extant literature, industry data, and professional knowledge of the researcher, and implemented (Supplementary Materials). Interview questions were developed from scoping of industry data, and veterinary and services literature. Questions were mapped to the study aims and, for each question, prompts and probing questions were determined. Similar questions were posed to professionals and clients alike, but the interview language was contextualised. Clients were questioned on their experiences with the veterinarian and the allied practitioner and were encouraged to openly discuss their expectations of value and service satisfaction. Professionals discussed the service they provide and their perceptions of service quality and value. Pilot interviews were completed with a veterinarian, allied practitioner, and a client to ensure that the questions and terminology used were appropriate for each group. Data collected from the pilot study were rich, detailed, and relevant to the study, and were therefore analysed and included in the presented results. All participants spoke freely and were willing to share positive and negative personal experiences of service provided or received.
Data Collection
Over a 6-month period (July-December 2017), semi-structured face-to-face interviews (n = 13) were completed at either the participants place of work, home, or the primary researcher's workspace. All interviews were completed by one researcher (A.Z.P.). Participants were attentive to the study aims and recruitment was uncomplicated. All interviews were audio-recorded using a SONY IC Recorder and stored as digital audio files (MP3) in accordance with UK general data protection regulations (GDPR). Interviewees were asked to share their experiences, beliefs, and opinions around a range of topics on animal health service provision, and the critical incident technique was used to encourage respondents to draw on past personal experiences and to aid recollection. At the culmination of each interview, all participants were given the opportunity to clarify any comments or to make further comment. Interview duration was between 45 and 92 min.
Thematic Analysis
Theoretical thematic analysis, constructed in grounded theory methodology, with an iterative constant comparison technique, was used to identify common themes and patterns within the transcribed interview dataset. Thematic analysis and principles of grounded theory were selected for data analysis as a highly flexible approach, capable of yielding rich and detailed data on attitudes and experiences. The six-stage process, as defined by Braun and Clarke [42], was followed. Interviews were transcribed verbatim and re-read several times by the lead researcher (A.Z.P.) prior to the commencement of coding. Supplementary hand-written notes taken during each interview were used to further inform the research process. Thematic data analysis was primarily completed by A.Z.P. and was completed concurrent with data collection and transcription. Analysis was performed using the Qualitative Data Analysis software package QRS NVivo (v.11). The research team were core contributors to the initial code (theme) development or codebook through shared coding of the first pilot interview. Reliability was maximized through the use of memo-writing within the coding process, permitting transparency in the decision-making process of A.Z.P. and indicating to the research team how the data were interpreted. Research team validation was completed post-coding for each interview transcript. Embedded in grounded theory methodology [43], data saturation is reached when no new additional themes or theoretical codes are emergent [44]. In this study, data saturation was identified at interview number 12 and confirmed through the completion of an additional interview.
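The saturation rule described above (no new themes emerging, confirmed by one additional interview) can be expressed procedurally. This is a hypothetical sketch of such a stopping rule, not the authors' coding workflow; the function name and its confirmation logic are assumptions of the sketch.

```python
def saturation_point(themes_per_interview, confirm=1):
    """1-based index of the first interview that adds no new themes,
    confirmed by `confirm` further interviews that also add nothing new.
    Returns None if saturation is never reached in the given sequence."""
    seen = set()
    run = 0  # consecutive interviews contributing no new themes
    for i, themes in enumerate(themes_per_interview, start=1):
        new = set(themes) - seen
        seen |= set(themes)
        run = run + 1 if not new else 0
        if run == 1 + confirm:
            return i - confirm
    return None
```

In the study's terms, saturation identified at interview 12 and confirmed by interview 13 corresponds to `confirm=1`, with interviews 12 and 13 both adding no new theoretical codes.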
Results
A sample of sector stakeholders reflecting age, gender, occupation, and species of animals kept or treated was achieved, as shown in Table 1. Participants were considered to reflect the study population, although the findings are not generalisable.
Thematic Analysis
The interview results disclosed six latent themes or dimensions within the data. The dimensions were defined as: Empathy, Bespoke care, Professional integrity, Value for money, Confident relationships, and Accessibility. These dimensions are described in Table 2.

Table 2. Definitions of the dimensions of service identified from the interviews.
Empathy
Compassion and thoughtfulness through a clearly communicated service. Caring provision, with due regard for clients' needs and animal health and welfare.
Bespoke Care
Custom tailored, dependable service which is accurate and a results-focused provision.
Professional Integrity

Trust, honesty and morality of service delivery. Strong themes of professionalism.

Value for Money

Willingness to provide comprehensive service within a justifiable pricing strategy. Price paid reflects the service given.
Confident Relationships
Professionals' connection with the client, connection with other professionals, and pro-active responsiveness to the wider knowledge, skills, and expertise of others. Preparedness to undertake two-way open communication with an active client, demonstrative of respect and rapport.
Accessibility
Geographical proximity of up-to-date resources and facilities, accessibility of professionals (physical and communicative), and ease of contact.
The category of emergent themes and explanatory narrative from the interview data are provided below.
Theme One: Empathy
Dimensions of compassion, care, and empathy were discussed with all stakeholders and thoughtful communication was defined as an essential component of the interaction. All client groups, regardless of species kept, had expectations of considerate handling and treatment of their animals by all practitioners. Pet owners expressed this through the strength of bond between them and their pet.
One participant, on discussing their expectations of care and compassion in the veterinary consultation, expressed the emotional importance of their dog.
I think just that something that's very precious to me is in their care and their understanding that I was feeling, that you are feeling anxious and worried and you want to know that everything is okay. [07C]

Empathy towards the animal was found to be an expectation of the service provided, whereas empathy from practitioner to client was highly valued. In these cases, service delivery was emotive and highly charged as clients expressed the depth of compassion they had felt from the practitioners.

I remember her putting her arms around me and she was just really compassionate about how we were feeling. [13C]

Equally, the strong personal connection experienced was expressed even with a practitioner they had only just met.

She was lovely, I don't know the vet's name, it wasn't the vet that I used to see on a regular basis for normal appointments and she was just really, really lovely. [13C]

Factors of compassion and empathy were repeatedly, strongly, and overtly expressed within the allied practitioner group.

I think that you have got to have empathy and compassion with the animal. [01P]

They also want to make sure that their pet is being cared for in the right way and they have got the best quality of care that there is, no matter what time of day. They want to see care and compassion in the situation. [09P]

I think they [clients] would be looking for a sympathetic hearing for what they want. I think they'd be looking for suitable amplification of the problems that they're putting over, a solution to the problem that they're presenting. [06P]

Veterinarians' reflection of empathy was embedded within the service provision:

You've got to make them feel that their animals are important. The vet isn't just looking at their watch and saying, "I've got another call to do". The worst thing that vets can do is to say, "I'm in a hurry so I can't be long at this". It's the while you're here is the important thing and that welds the relationship between the client. [03V]
Theme Two: Bespoke Care
Bespoke care emerged as a strong theme, with the expectation that the service would be customised and individualised. Results and outcomes were important perceived components of a tailored package. Empathy and communication were anticipated to be important due to the inherent nature of health practice and were evident in the extant literature review.
They want expertise I think initially. They want attention when they want it, ASAP of course, especially in a crisis. You can understand that. They want latest information. They want expertise and they want practicality. They want pragmatism and they want understanding of their situation. There is a bespoke element to it. Although they wouldn't voice it as that, there is that bespoke requirement-"I need this, and I need that". The demands are high because they perceive the veterinarian as expensive. [03V] From the client perspective, there is a clear expectation for bespoke service to be delivered.
[In discussions with the farm vet] After we've had the weekly routine [visit] they'll always come up to the house. We'll sit down and discuss things, if there's an issue. [12C] Interestingly, this time to talk with clients and provide individualised personal care was a fulfilling part of the practitioners' role, suggestive of reciprocation in service delivery.
It's not unusual to spend twenty minutes, half an hour, talking to somebody. I actually quite enjoy it. [05P]
Theme Three: Professional Integrity
Trust was patent within all participants' interviews, but integral were notions of morality, integrity, and technical competence. Equally, the client expects the animal health professional to have the skill and ability to give the correct treatment well.
People aren't going to trust your decision-making if they don't think that you are a trustworthy person and that comes across in the way that you present yourself. [04V] Reciprocity in trust between the client and professional was apparent when discussing the importance of relationship development between all stakeholders. Because you've got to trust them and they've got to trust, I suppose, a little bit in you as well. So, it's nice to have that but they also know when to keep it professional, and when to keep it personal as well. [02C] Technical skills and animal handling capabilities were important to all clients and to those professionals with direct hands-on work as part of the day-to-day role.
If there's a problem with a cow, and shall we say it's what I would class as an internal problem where I can't see any physical problems with the cow, obviously, I trust that the vet is able to make a good diagnosis. [02C] Trust and integrity were also expressed as a judgement on value for money.
Well because you are paying for that professional service and their opinions and that I'm entrusting them with the care of my animals. Value for money, with price paid reflecting the service received, was an enduring theme throughout the interviews. Interestingly, veterinarians discussed financial implications more frequently than the other stakeholders, stressing the problems associated with a pricing strategy which does not reflect value.
At the moment, veterinarians haven't been very good at charging for time, they've subsidized it by sales and medicine. That's tempered the whole best way forward. The best way forward in my view is for veterinarians to sell their time and not much else. [03V] This was further emphasised when discussing farm animal practice and concepts of value related to price were introduced.
The vast majority of farmers have a high level of expectation of the vets. They've an expectation of good service, expectation of reasonable prices, but they know that they're always going to get a reasonable sized total bill at the end of the month. That's what they expect from vets. But they expect the highest standards and that's okay as long as they can see the value. [04V] Cost plays an element, but what we find is there are competitors in our area who would sell some wormers cheaper than us. But having spent an awful lot of time training people, our SQPs [Suitably Qualified Persons/Animal Medicines Advisor], and the relationship we've built with clients, it's not always about the price anymore. [04V] Clients introduced the concept of involvement and preparedness to pay more money in situations which they perceived to have higher stakes, carry greater risk to the animal involved, or require higher levels of skill or technical ability from the professional.
For example, paying a full call out for them to come and do vaccinations, which they can do standing on their head. It doesn't really take a lot of ability. But I think largely, considering what they're doing, which is highly technical, I do think it is good value for money, knowing what similar things cost in medicine. All of the study participants discussed the importance of two-way communication, through a mutually respectful relationship. Veterinarians particularly identified with the need to make every effort to communicate with clients, emphasising the importance of communication within the service process.
You have to actually communicate with the owner in every possible available way and develop that ability. [04V]
Vet. Sci. 2020, 7, 149 8 of 15
Allied practitioners described client expectations of communication and of being active in the service process.
Communication is one of the key things that they [clients] definitely expect and a follow-up as well. They are making sure that not only are they making that initial contact with the owner about something but the follow-up after that. [09P] When discussing how involved clients expected to be in the service process, one practitioner indicated client expectations of the process.
Clients will be expecting it [involvement], and clients will be driving that they have that for their animal.
Clients sought open, respectful, and intelligent communication with the veterinarian and allied practitioners alike. All groups made references to the need for courteous interaction between themselves and the service provider, with some owners placing importance upon how they were addressed.
Will they [veterinarians] communicate with me in a professional manner, but also not treating me like I don't know anything at all? [07C] This concept was taken further by one client, who actively sought a challenging dynamic with the veterinarian and allied practitioners to ensure the best possible outcome for their livestock.
I have to say we're very lucky with the people that we work with. They do challenge you. We possibly hopefully challenge them a little bit. We bounce ideas off each other. As I say they'll often have meetings with our nutritionist, with the vet, and they'll all sit down every couple of months together . . . It's nice if they come out and give you ideas and suggestions, and challenge your thinking as well . . . [12C] Also, clients did not want their own personal experience to be discounted, seeking a personal involvement within the service process. This involvement was not distinct to a single client group, but was apparent through the companion animal owners, horse owners, and livestock farmers alike. Clients often drew on their human health experiences of communication to evaluate the veterinarian or allied practitioner.
Because I think doctors are now taught to communicate. They do loads of role-play, especially if they're going to be a GP [General Practitioner Doctor], and realise, "Actually, I can communicate with these people, and it should be a two-way street." But I think people need to be taught to communicate. If you're a four A* student, who has studied really hard, you may not have the social skills, the interpersonal skills. You need to learn those if you don't have them naturally, which some people do. [07C] Professional interactivity indicative of co-creating and co-producing service was equally evident.
I'm looking for them [allied practitioners] to be able to identify-obviously they have a conversation with me first about what my thoughts are. I think it's important that I feel involved. [08C]
Clients introduced the concept of self-care to explain their desire for an active involvement and participation in the service process.
She looked at how he [the horse] moved. She did a really in-depth assessment when she first met the horse, and then treated it very thoroughly, gave me exercises that I could do . . . I was very involved. I always want to feel that I can do my bit as well, and I can't believe that somehow something just needs treating in three months, six months, twelve months. There's got to be some kind of self-care in the meantime. It is problematic. Just the thought that you can't just get the vet when you want them is problematic to me, and booking so far in advance. [02C] It was two miles from where I lived, could always get an appointment straight away. I was always very pleased with the care that I got for all of my animals. [13C] The concept of accessibility through virtual communication was raised by an allied practitioner during the discussion of social media trends and the need for flexibility in communication techniques.
Enquiries on our advice line are actually dropping, and enquiries through social media are going through the roof. She'll get a tweet at ten o'clock at night and answer it. [05P] Out-of-hours care and emergency care were crucial topics to the client group, who keenly felt the importance of being able to contact the professional with ease and speed.
Discussion
This study used a qualitative approach to explore relationship-centred care within the animal health provision from the perspective of the three core stakeholders, veterinarians, allied practitioners, and clients. The findings provide novel insight into the conceptualisation of co-produced service for the sector. Co-produced service is delineated by equal and reciprocal relationship development through a responsive service provision. Co-production has relevance and application to a diverse range of sectors, and results from this study propose applicability and potential practical implications for contemporary veterinary and animal health care.
Within service quality provision, all service is assumed to be inherently relational in nature [45] as the client is endogenous to, and is an active participant in the service provided [46,47]. Within this study, concepts of trust, bonds, empathy, communication, and relationships were evidenced to be important components of the service experience. These findings reflect conceptualisation of value-creation in service and co-production [48] and correspond to previous studies completed on veterinarian-client interaction [20][21][22]49].
Confidence in relationships and relationship-centred care emerged as important to all stakeholder participants, but the theme was particularly well developed within the client group. Central to the strong development of partnerships between clients and practitioners was reciprocity, with emphasis given to two-way, respectful communication, similar to the findings of Coe et al. [49]. When discussing factors of trust, clients were keen to emphasise its importance but also to stress the reciprocal nature of trust between the client and professional. Trust is essential for collaborative working and co-production and is founded in the expectation that one party will behave in a predictable and reliable manner [50]. Trust may take a number of forms. Newell and Swan [51] determined three types of trust pertinent to collaborative working: companion trust, competence trust, and commitment trust. Companion trust is based on the reciprocal exchange of goodwill and friendship. Competence trust is established through perceptions of others' ability to perform the required tasks. Commitment trust is associated with contractual arrangements or expectations between the clients and practitioner. Trust, in the medical, and arguably veterinary setting, is conceptually difficult to define and there is no commonly shared understanding of what it means, what factors affect trust or how it relates across the health provision [52]. In co-produced service, the veterinarian or practitioner must fully recognise the integral role of the client in the service process and, therefore, trust the clients' judgement. The notion of self-care was introduced by the equine and farm animal clients within this study, reflecting their wish to be active participants in the service process.
The idea of involvement was a common feature to all client groups as they did not wish the value of their own personal experience to be ignored or dismissed, or to feel that they were not active in the health care process. Reciprocity of engagement and involvement was an interesting and novel feature; for the farm animal client, this notion was so strongly developed that they expressed a wish to be 'challenged by the professional'. Clients want active participation in the service encounter, a concept indicated by only a limited number of previous studies [23,49].
Conceptually, co-production extends inter-relationships beyond involvement into active partnerships, creating opportunity and challenges for veterinary service provision in equal measures. Client participants from the present study expressed their wish to be part of the service delivery. Every service encounter contributes to relationship formation [47,53] as co-produced service has the potential to improve the longevity of relationships [47,54] whilst synchronously building loyalty [55]. Loyalty in veterinary and animal health practice confers benefits of enhanced treatment outcomes and business sustainability.
Authenticity of collaborations is central to relationship-centred care [56] and is reflected in this study through experiences of empathetic care. In analogous human health care, empathy is viewed as the cornerstone of the patient-medic relationship [57]. Yet, in the health context, clinical empathy is complex and problematic to describe, as protective mechanisms need to be in place to safeguard the medical practitioner from repeated exposure to often upsetting scenarios. The potential for conflict is apparent, as patients desire true, authentic empathy, whilst practitioners may need to maintain clinical detachment to safeguard their own health [58]. Given similarities in the roles performed, it could be assumed that the veterinarian or allied practitioner would experience the same tension and this was confirmed as client participants from the present study expressed the value they placed on true empathy.
Factors of continuity of care were raised equally by all stakeholder contributors, emphasising the value of relationship formation between client and practitioner. In human health service, continuity of care and the development of strong relationships between the patient and medical practitioner are known to improve service satisfaction [59], treatment adherence, and outcomes [60,61]. Where sustained continuity of care is present, communication between the patient and physician is enhanced and service satisfaction improved. The nature of allied practitioner service in the animal health sector often facilitates continuity through repeated service encounters with the same clients. Conversely, in contemporary veterinary practice organisations, this can be difficult to achieve and now presents as a sector challenge.
At the policy level in human health practice, service-user collaboration is an accepted requirement [30,31] and the patient is an active participant. This is not without challenge. Challenges in human health services are cited as: external performance pressures, professional norms and values, and culture [30,31]. They serve as functional barriers to the inclusion of the patient and make client participation a complex offering. Findings from the present study strongly indicate stakeholder expectations of a co-produced service, but practitioner acceptance of the client as an active service collaborator raises thought-provoking questions for daily practice. The paucity of evidence on animal health client service expectations make framing a co-produced service for the sector challenging. A co-produced approach can be complex, nuanced, and intricate [62], but quality of communication and trust are central. There must be mutual acceptance of the partnership between clients and the veterinarian or practitioner in order for co-produced service to be delivered. The present study demonstrates that communication can serve as a proxy for trust, but where communication barriers exist or there is a failure in reciprocity of communication, co-produced service cannot be delivered [36].
Results from the present study indicate that the provision of client-centred co-produced care requires a re-examination of existing practice, potentially a paradigm shift in service provision. Recommendations from human practice for co-produced service indicate the requirement for: involvement of patients in decision-making processes, patient-centred tailored care with a move away from standardised protocols, and a shift in power-dynamics as the patient takes more control of the health care delivery [35,37]. These recommendations create a starting point for developing our understanding of co-produced care contextualised for the veterinary and animal health sector, as do the findings from the present study. A framework to illustrate a co-produced framework for the sector is presented in Figure 1.
It is evident from the stakeholder interviews that co-production is integral to service quality and service satisfaction, and accordingly, is under active research. Lacking is pragmatic research into how to implement co-production from a day-to-day management perspective. It is not always clear what constitutes co-production, and defining co-production or what is being co-produced is subject to discussion [63], with the proposition that there could be multiple versions of co-production contextualised to the sector or situation.
The proposed navigational challenges contextualised for the veterinary and animal health sector, as determined by this study, are presented in Figure 2. The figure reflects the diversity and variation in the meaning and scope of co-production. Irrespective of the complexities of co-production, it is transparent that the quality of relationships permits co-production [64] and is evidenced through this study. Co-produced veterinary service requires a significant shift of power, as it moves beyond straightforward involvement of the client, to the establishment of equal and reciprocal partnerships. Flexibility in resources and time and blurring of practitioner-client boundaries [56] are cited as requirements for effective co-produced care. As a people-centric framework and a relationship-centred approach, co-production accepts health care recipients as active participants in their care [65], conferring benefits of enhanced health service efficiency as those who use the service are valuable resources [32]. Recognition by clients and professionals alike of bespoke service was confirmed in the present study, supporting anecdotal indications that clients' expectations of service will continue to rise.
Hamilton's 2018 review [66] of adopting a co-creative approach to our understanding of the vet-farmer relationship proposes co-production as an alternative to the evidence-based methodology most frequently adopted for vet-client communication research. Development of veterinarian-farmer partnerships has been only marginally explored [67]. Integration of the client in an active, reciprocal partnership requires a significant forward leap in veterinary care but may reflect the future of animal health service provision.
Figure 2. Challenges to co-production.
Further Work
As far as the authors are aware, this is the first study to examine co-production for veterinary service provision. It is accepted that there are study limitations and equally, many questions for further investigation have arisen. However, research into allied health services in the veterinary domain is often over-looked, irrespective that these professionals are integral to the vet-led team and the overall service provided. A more enhanced balance in interview participants could have been achieved with the inclusion of more veterinarians (including equine practitioners), and the broad study approach taken leads to generalised results. Thus, further work to clarify differences in service provision between different allied practitioners and veterinarians would be of value.
As a novel work, the study outputs raised many areas for future research. Fundamental questions are raised on our understanding of how to provide co-produced care across animal health and veterinary services and how this may effectively be integrated into daily practice. Questions on the applicability of co-production across distinct animal health sectors (farm animal, equine, and companion animal) within the UK have been raised, and also across international veterinary services and practice. This study has highlighted the potential practical barriers for co-produced service, but these require further investigation and evaluation to understand the challenges from the perspective of different practitioners and sub-sections of the veterinary sector.
Whilst benefits of enhanced relationship development between client and practitioner through co-produced care are evident, the impact of prolonged authentic care on practitioner compassion fatigue and resultant work-related stress remain to be fully understood.
Conclusions
The co-produced nature of services is a well-developed concept across the services sector and is proposed and supported through the results of the present study to be relevant and valuable to the veterinary and animal health services. The emergent dimensions are strong advocates for the wider adoption of a co-produced service for the sector, but equally, the pragmatic challenges are identified. Quality communication serves as a proxy for relationship formation and could aid the development of co-produced service provision from practitioners. Client involvement in the animal health care process is evident through the stated wish for active participation in the service process and a strong desire for reciprocity of communication delivered through a robust practitioner-client relationship.
|
2020-10-07T13:06:56.269Z
|
2020-10-01T00:00:00.000
|
{
"year": 2020,
"sha1": "3fb047e872d1dfa60735cfebc640c6794edf5d81",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2306-7381/7/4/149/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68b9d5e5e51e9a9812e869dc4e88651852dffed1",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
}
|
7995756
|
pes2o/s2orc
|
v3-fos-license
|
Association Between Gastroesophageal Reflux Disease After Pneumatic Balloon Dilatation and Clinical Course in Patients With Achalasia
Background/Aims The occurrence of gastroesophageal reflux disease (GERD) is known to be associated with lower post-treatment lower esophageal sphincter pressure in patients with achalasia. This study aimed to elucidate whether GERD after pneumatic balloon dilatation (PD) has a prognostic role and to characterise the clinical course of GERD. Methods A total of 79 consecutive patients who were first diagnosed with primary achalasia and underwent PD as an initial treatment were included in this retrospective study. Single PD was performed using a 3.0 cm balloon. The patients were divided into two groups: (1) those who developed GERD after PD (GERD group) and (2) those who did not develop GERD after PD (non-GERD group). GERD was defined as pathological acid exposure, reflux esophagitis or typical reflux symptoms. Results Twenty-one patients (26.6%) developed GERD after PD during follow-up. There were no significant differences between the two groups in demographic or clinical factors including pre- and post-treatment manometric results. All patients in the GERD group were well responsive to maintenance proton pump inhibitor therapy, including on-demand therapy, or did not require maintenance. During a median follow-up of 17.8 months (interquartile range, 7.1–42.7 months), achalasia recurred in 15 patients (19.0%). However, the incidence of recurrence did not differ according to the occurrence of GERD after PD. Conclusions GERD often occurs even after a single PD for achalasia. However, GERD after PD is well responsive to PPI therapy. Our data suggest that GERD after PD during follow-up does not appear to have a prognostic role.
Introduction
Achalasia is a primary esophageal motor disorder of unknown etiology in which there is degeneration of neurons in the wall of the esophagus, leading to absence of peristalsis and impaired relaxation of the lower esophageal sphincter (LES). 1,2 The symptoms of achalasia are dysphagia for solids and liquids, regurgitation of undigested food, respiratory symptoms (nocturnal cough, recurrent aspiration and pneumonia), chest pain and weight loss. 3,4 Achalasia is not curable and is therefore a chronic condition. Currently, pneumatic balloon dilatation (PD), surgical myotomy, per-oral endoscopic myotomy (POEM) and botulinum toxin injection are performed as initial treatment for achalasia depending on the patient's condition and center expertise. [4][5][6][7] All these treatment options aim at reducing the elevated pressure of the LES. 1,[8][9][10] However, LES hypertonicity returns over time and repeated interventions are needed. Gastroesophageal reflux disease (GERD) may occur due to a disrupted LES after PD, as well as after surgery and POEM. [11][12][13][14] The occurrence of GERD after treatment might then be a prognostic factor for a favorable long-term outcome in patients with achalasia. This study aimed to determine whether GERD after PD has a prognostic role for recurrence-free survival (RFS) in patients who received PD for achalasia, to investigate how often GERD occurs in achalasia patients who undergo PD, and to characterise the clinical course of GERD after PD.
Patients
This retrospective study included data from a total of 82 consecutive patients who were diagnosed with primary achalasia and with no previous history of PD, botulinum toxin injection, surgical myotomy or POEM at Samsung Medical Center, Seoul, Korea between January 2002 and December 2010. The diagnosis of achalasia was made based on the results of the radiographic, endoscopic and manometric studies according to accepted published criteria. 1 All patients underwent PD as the initial treatment. The patients were divided into 2 groups: (1) those who developed GERD after PD (GERD group) and (2) those who did not develop GERD after PD (non-GERD group) during follow-up. GERD was defined as pathologic acid exposure (PAE), reflux esophagitis or typical reflux symptoms. PAE was defined as an intra-esophageal pH of <4 for more than 4.0% of the recording time of 24-hour pH monitoring. Reflux esophagitis was defined by esophagogastroduodenoscopy (EGD). Typical reflux symptoms were defined as heartburn and/or acid regurgitation. Heartburn was described as "a burning sensation rising from the lower chest up toward the neck," and acid regurgitation was described as "regurgitation of acidic fluid from the stomach or lower chest to the throat." 15
Initial Evaluation and Follow-up
The pretreatment evaluation consisted of symptom assessment, EGD, esophageal manometry and 24-hour pH monitoring. Symptoms were scored using the Eckardt score, which is the sum of the scores for dysphagia, regurgitation and chest pain on a scale from 0 to 3 (0 = absent, 1 = occasional, 2 = daily and 3 = each meal) and weight loss (0 = no weight loss, 1 = < 5 kg, 2 = 5-10 kg, 3 = > 10 kg). The total score ranges from 0 to 12 points. Recurrence was defined as recurred or aggravated symptoms of achalasia requiring additional treatment together with compatible radiographic, endoscopic and manometric study results during follow-up.
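The Eckardt scoring described above amounts to summing four components. A minimal Python sketch follows; the component mapping is taken directly from the text, but the function itself is illustrative and not the study's analysis software:

```python
def eckardt_score(dysphagia, regurgitation, chest_pain, weight_loss_kg):
    """Eckardt score: sum of four components, total range 0-12.

    dysphagia, regurgitation, chest_pain: 0 = absent, 1 = occasional,
    2 = daily, 3 = with each meal.
    weight_loss_kg: weight lost in kg, mapped to 0-3 as in the text
    (0 = no weight loss, 1 = <5 kg, 2 = 5-10 kg, 3 = >10 kg).
    """
    for symptom in (dysphagia, regurgitation, chest_pain):
        if symptom not in (0, 1, 2, 3):
            raise ValueError("symptom scores must be 0-3")
    if weight_loss_kg <= 0:
        weight_loss = 0
    elif weight_loss_kg < 5:
        weight_loss = 1
    elif weight_loss_kg <= 10:
        weight_loss = 2
    else:
        weight_loss = 3
    return dysphagia + regurgitation + chest_pain + weight_loss

# Example: daily dysphagia (2), occasional regurgitation (1),
# no chest pain (0), 6 kg weight loss (2) -> total score 5.
```

A score computed this way would be obtained at each assessment point: baseline, 1 month after treatment, yearly thereafter, and at symptom recurrence.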
Patients underwent EGD, esophageal manometry and 24-hour pH monitoring 1 month after the initial treatment and yearly thereafter and at the time of symptom recurrence.
Esophageal Manometry
Esophageal manometry was conducted with the patient in the supine position, using an eight-lumen polyvinyl manometric tube with 4 distal side holes and 4 proximal holes situated 5-cm apart (ESM38R; Arndorfer Medical Specialties, Greendale, WI, USA). The manometric tube was transnasally introduced, and then slowly withdrawn in 1-cm increments by station pull-through in order to measure the LES resting and residual pressures. LES relaxation was evaluated with wet swallows of 5 mL of water. Completeness of relaxation was assessed via measurements of residual LES pressure as compared with resting LES pressure. Peristalsis was assessed by positioning at least three pressure sensors situated at 5-cm intervals within the body of the esophagus. The distal sensor was positioned at a level 3-cm above the LES and a series of 10 wet swallows was conducted. From September 2008, esophageal manometry was conducted using the high-resolution manometry (HRM) system (Sandhill Scientific Inc., Highlands Ranch, CO, USA) in a standard manner. The HRM probe has 32 circumferential pressure sensors spaced 1 cm apart. The HRM probe was transnasally introduced and positioned with about 5 intragastric sensors.
Ambulatory 24-hour Esophageal pH Monitoring
Twenty-four hour pH monitoring was performed using a 2.1-mm monocrystalline pH catheter equipped with 2 antimony electrodes (Synectics, Irving, TX, USA). The pH catheters were calibrated at 37°C in standard buffer solution at pHs of 7 and 1 (Fisher Scientific, Fairlawn, NJ, USA), both before and after monitoring. The catheters were introduced transnasally, in order to position the sensors 5-cm above the upper border of the manometrically determined LES. The pH electrodes were connected to a portable digital data recorder (Mark II Gold; Synectics), which stored pH data every 4 seconds, for up to 24 hours. Patients returned home with instructions to keep a diary recording symptoms, meal times, time to bed and waking time. Patients were encouraged to do normal daily activities with no dietary restrictions. Patients returned the next day (after 18-24 hours) to have the probes removed and the diaries reviewed. Esophageal acid exposure values (percentage of time pH < 4) were calculated with a commercial software program (EsoPHogram, version 5.70C2; Gastrosoft, Irving, TX, USA). From January 2006, for 24-hour pH monitoring, a portable data logger (Sandhill Scientific Inc.) connected to a single-use combined impedance and pH probe (Sandhill Scientific Inc.) was used. Data analysis was performed using the BioView MII software (Sandhill Scientific Inc.).
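As a sketch of how the reported acid-exposure value (percentage of time pH < 4) can be obtained from the stored samples, assuming each reading represents its whole 4-second interval so that the interval cancels in the percentage. `acid_exposure_percent` is a hypothetical helper, not part of the commercial analysis software named above:

```python
def acid_exposure_percent(ph_samples):
    """Percentage of monitored time with esophageal pH < 4.

    ph_samples: pH readings taken at a fixed interval (the recorder
    described above stores one value every 4 seconds).  With uniform
    sampling, the interval length cancels out of the percentage.
    """
    if not ph_samples:
        raise ValueError("no pH samples recorded")
    below = sum(1 for p in ph_samples if p < 4.0)
    return 100.0 * below / len(ph_samples)

# 3 of 12 samples below pH 4 -> 25% acid exposure time
samples = [6.1, 5.8, 3.2, 3.9, 6.5, 6.8, 7.0, 2.9, 6.2, 6.0, 5.5, 6.3]
```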
Pneumatic Balloon Dilatation
PD was performed under fluoroscopic guidance with the use of a Rigiflex dilator (Boston Scientific, Boston, MA, USA). All of the patients fasted overnight and received topical anesthesia for the pharynx and intravenous midazolam and/or pethidine. The balloon of the dilator was positioned at the gastroesophageal junction under the guidance of the fluoroscope. It was then inflated until a minimum pressure of 10 psi was achieved, with the waist remaining in a stable position. Dilatation was conducted with the aim of maintaining this pressure for 2 minutes and obliterating the waist of the balloon. Single PD was performed using a 3.0 cm balloon.
Study Endpoint
The primary endpoint was recurrence of achalasia during follow-up. We defined RFS as the time from the first PD to the date of symptom recurrence. For RFS, patients without recurrence were censored at the last follow-up visit. Early recurrence was defined as any recurrence occurring within 3 years after the initial PD, and late recurrence as any recurrence occurring 3 years or more after the initial PD.
Statistical Methods
Statistical analyses were conducted using PASW Statistics 18 for Windows (SPSS, Inc., Chicago, IL, USA). Shapiro-Wilk test was performed for normality. The statistical results are presented as mean ± SD, median (interquartile range) or number of patients (%). Continuous variables were compared parametrically using Student's t test or non-parametrically using the Mann-Whitney U test. Categorical variables were compared using the χ 2 test or Fisher's exact test as appropriate. Wilcoxon's signed ranks test and Student's paired t test were used to evaluate changes of LES pressure and LES relaxation after PD, respectively. One-way ANOVA and Kruskal Wallis test were used to compare changes of LES pressure and LES relaxation after PD between the two groups. The drop of LES pressure was defined as the % change (decreased) of LES pressure after PD. RFS was calculated using the Kaplan-Meier method and compared using the log-rank test. A two-sided P-value < 0.05 was taken as statistically significant.
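The RFS analysis above relies on the Kaplan-Meier estimator with right censoring. A minimal illustrative implementation (not the PASW/SPSS routine used in the study, and omitting the log-rank comparison) might look like:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for right-censored data.

    times:  follow-up time of each patient (e.g., months from the first PD)
    events: 1 if recurrence was observed, 0 if censored at the last visit
    Returns a list of (time, survival probability) at each event time.
    """
    pairs = sorted(zip(times, events))
    n = len(pairs)
    surv = 1.0
    curve = []
    idx = 0
    while idx < n:
        t = pairs[idx][0]
        tied = [e for (tt, e) in pairs if tt == t]  # all subjects at time t
        d = sum(tied)                               # events at time t
        at_risk = n - idx                           # subjects still at risk
        if d > 0:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        idx += len(tied)
    return curve

# Five patients: recurrences at t = 1, 2, 3; censored at t = 2 and 4.
```

Censored patients contribute to the at-risk count up to their last visit, which is exactly how the RFS definition above handles patients without recurrence.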
Patients
Two patients who did not have follow-up and one patient who developed esophageal cancer during follow-up were excluded from this study. In total, 79 consecutive patients were included in the current study. Sixty-three patients underwent conventional manometry, while 16 underwent HRM at the time of diagnosis. Of them, 36 patients (45.6%) were male and the mean age was 44.2 ± 15.8 years. The median initial Eckardt score was 6 (4-9) and the median duration of symptoms before treatment was 3 months (2-9 months). Twenty-one patients (26.6%) were diagnosed with GERD during follow-up after PD. Sixteen patients were diagnosed with GERD by PAE or reflux esophagitis, while 5 were diagnosed with GERD by typical reflux symptoms. The median time to diagnosis of GERD was 8 months (2.0-20.4 months). Baseline characteristics of the 2 groups are shown in Table 1. There were no significant differences between the two groups regarding age, gender, body mass index, initial Eckardt score, duration of symptoms before treatment, pre-treatment LES pressure and pre-treatment LES relaxation.
Long-term Treatment Outcomes
During a median follow-up of 17.8 months (7.1-42.7 months), recurrence occurred in 19.0% of patients (n = 15). There was no significant difference in recurrence between the two groups by the Kaplan-Meier method (P = 0.205, log rank test; Figure). In addition, age, gender, body mass index, initial Eckardt score, duration of symptoms before treatment and pre-treatment LES pressure were not associated with recurrence. Among the 21 patients in the GERD group, one patient was not followed after detection of GERD and 2 were not followed after starting proton pump inhibitor (PPI) use. In the remaining 18 patients, 14 had symptom relief on maintenance PPI therapy including on-demand therapy (n = 1), 2 were able to discontinue PPI therapy and 2 did not receive PPI therapy due to asymptomatic mild erosive esophagitis.
Discussion
Currently performed treatment modalities cannot cure achalasia. 2,16 As such, each treatment aims to reduce the pressure gradient across the LES. 1,[8][9][10]17 Both PD and surgical myotomy are well-recognized modalities to disrupt the LES in the treatment of achalasia, with comparable effectiveness. 18,19 The recently developed POEM is also effective in lowering LES pressure, but requires longer follow-up and needs to be compared with PD and surgical myotomy. 17,[20][21][22] The most popular protocol of PD is a graded dilatation starting with a 3.0 cm balloon, followed by 3.5 cm and then 4.0 cm balloons in subsequent sessions. 23 In our institution, a single dilatation with a 3.0 cm balloon is performed as the initial treatment, and additional PD is performed according to an "on demand" strategy based on symptom recurrence during follow-up. Even after a single PD, GERD often occurs. The occurrence of GERD is known to be associated with lower post-treatment LES pressure. 24,25 We therefore hypothesized that GERD after PD could have a prognostic role for RFS in patients who received PD for achalasia. In addition, we investigated how often GERD occurs in achalasia patients who undergo PD as an initial treatment, which factors are associated with the occurrence of GERD and what the clinical course of patients with post-PD GERD is.
In the current study, 21 patients (26.6%) were diagnosed with GERD after PD during follow-up. Between the GERD and non-GERD groups, there was no significant difference in demographic or clinical factors, including pre- and post-treatment manometric results. Thus, about a quarter of patients undergoing PD, even a single dilatation with a 3.0 cm balloon, can be expected to experience GERD regardless of demographic or clinical factors. The incidence of GERD after PD has been reported to range from 4% to 35%. 9,11,12,24,26 This wide range of incidence seems to stem from the various definitions used to make a diagnosis of GERD. In a prospective study, Novais and Lemme 24 reported an incidence of gastroesophageal reflux (GER) of 31%, using 24-hour pH tracing analysis to distinguish true GER patterns from other findings due to esophageal food fermentation.
In the current study, there might also be a small possibility that patients showing abnormal 24-hour pH monitoring findings due to food fermentation were erroneously included in the GERD group. However, we did not include patients with concurrent dysphagia in the GERD group, and patients suspected of recurrence underwent additional radiographic, endoscopic or manometric studies. Thus, we minimized the possibility of erroneously including recurrent patients in the GERD group. Among the 12 patients who were diagnosed with GERD by PAE, 11 had esophagographic data available at the time of detection of PAE. Four patients had neither significant esophageal dilatation nor passage disturbance on esophagography, and 6 had an improved, mildly dilated (< 4 cm) esophagus with mild passage disturbance. The remaining patient had an improved but moderately dilated esophagus (4-6 cm) together with PAE at the 1-month follow-up after PD; however, this patient did not have PAE before PD. Taken together, we believe that the incidence of GERD in the current study reflects its true incidence.
Contrary to our hypothesis, the RFS of achalasia did not differ according to the occurrence of GERD. In patients who underwent PD, GERD occurring during follow-up is therefore not a prognostic factor but a complication to be controlled. Although the time of diagnosis of GERD would need to be uniform for it to play a proper role in predicting outcomes, we could not achieve that in this retrospective study. Therefore, we performed an additional analysis after excluding from the GERD group the patients who developed GERD more than 12 months after PD. However, the incidence of recurrence consistently did not differ between the 2 groups even after this exclusion (data not shown). In the current study, all patients with GERD showed resolved or reduced reflux symptoms with PPI therapy. This finding also supports the idea that the GERD detected in this study represents true GER. To date, several studies have addressed predictors of the outcome of PD. 27 Age, gender, esophageal body diameter, balloon diameter, pre- and post-treatment LES pressure and timed barium esophagogram are factors useful for predicting the outcome of PD; however, these depend on the type of protocol and dilator used. [28][29][30][31][32][33] Therefore, our results also need to be interpreted in the context of the single-PD protocol used. We performed the first PD with a 3.0 cm balloon to avoid procedure-related perforation, and no perforation occurred. There have been several reports showing a good long-term outcome of graded dilatation with progressively increasing balloon size. 23,28,34 However, we did not routinely perform a subsequent PD with a larger balloon when symptom relief was sufficient. Instead, we performed a second PD in cases of symptom recurrence requiring additional treatment, with objective findings compatible with recurrence. The size of the balloon used was determined according to the time of recurrence. In cases of early recurrence, less than 3 years after the initial PD, a 3.5 cm balloon was used.
However, in cases of late recurrence, PD with a 3.0 cm balloon was repeated, except in one patient who recurred 47.2 months after the initial PD and received a second PD with a 3.5 cm balloon. All patients who underwent a second PD were satisfied with their symptom relief. Although the efficacy of the PD strategy is beyond the scope of this study, our data suggest that the "on demand" strategy after a single PD with a 3.0 cm balloon is effective and safe for treatment-naïve patients with achalasia.
In our results, post-treatment LES pressure was measured in 66 of 79 patients (20 in the GERD and 46 in the non-GERD group). The median LES pressure significantly decreased from 39.9 mmHg (28.7-50.3 mmHg) to 28.1 mmHg (17.6-34.9 mmHg) after PD. Ghoshal et al 35 reported 22.5 mmHg as the best cut-off value of post-treatment LES pressure for differentiating responders from non-responders after PD. However, among the current study patients, only 28 (43.9%) showed a post-treatment LES pressure of 22.5 mmHg or less, even though all showed symptom improvement. This discrepancy might stem from the different definitions used in each study. In the study by Ghoshal et al, 35 response to PD was defined as a decrease in dysphagia score to 0 or 1 and/or total symptom score to ≤ 3 on follow-up visit after PD, whereas we evaluated the symptom response by subjective satisfaction with symptom relief. In addition, GERD occurred after PD in a significant number of patients and post-treatment LES pressure did not differ between the 2 groups. These observations suggest that the development of GERD in achalasia patients who received PD is not associated with post-treatment LES pressure alone, but rather with multiple combined factors that are affected by PD. The current study had some limitations. First, the retrospective design may have introduced selection bias and underreporting of reflux symptoms. However, the presence or absence of reflux symptoms was well documented in the medical records. On the other hand, GERD could have been masked by PPI use for other symptoms. Among the 58 patients in the non-GERD group, 5 took a half-dose PPI intermittently for dyspeptic symptoms. There was no recurrence of achalasia in these patients. After excluding these 5 patients, we re-analyzed the data, but the results regarding the comparison of RFS between the 2 groups did not change (data not shown).
Second, the study population was rather small and the follow-up duration was too limited to draw firm conclusions. In the current study, several hundred patients would have been necessary to test the difference in RFS between the 2 groups with adequate power at this follow-up time, because many patients were censored early in follow-up. This appears to be because some patients with improvement after treatment did not want to continue regular clinic visits; indeed, some patients revisited the clinic due to symptom recurrence after a period of loss to follow-up. This made it difficult to conduct the current retrospective study with good power. To overcome this limitation, a large prospective study with strict real-time data management is needed. Until then, however, our results are of value because this is the first study to determine whether GERD during follow-up after PD predicts the recurrence of achalasia. In addition, the current study includes data from 24-hour pH monitoring, which is considered the best diagnostic method for GER, and we provide the overall outcomes of a single PD with a 3.0 cm balloon under an "on demand" strategy. In conclusion, GERD occurs after even a single PD for achalasia in a significant number of patients. However, GERD after PD responds well to PPI therapy. Our data suggest that GERD during follow-up after PD does not have a prognostic role.
|
2018-04-03T01:22:57.464Z
|
2014-04-01T00:00:00.000
|
{
"year": 2014,
"sha1": "292c8a6e8e639054a3c44f487aeebfab9e27e938",
"oa_license": "CCBYNC",
"oa_url": "http://www.jnmjournal.org/journal/download_pdf.php?doi=10.5056/jnm.2014.20.2.212",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "292c8a6e8e639054a3c44f487aeebfab9e27e938",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119270254
|
pes2o/s2orc
|
v3-fos-license
|
Polarization-induced phase separation and re-entrant transition of two component-fermions in a one-dimensional lattice
By investigating the compressibility of one-dimensional lattice fermions at various filling factors, we study the phase separation and re-entrant transition within the framework of the Bethe ansatz method. We model the system using the repulsive Hubbard model and calculate compressibility as a function of polarization for arbitrary values of chemical potential, temperature, and interaction strength. For filling factors $ 0<n<1$, compressibility is a non-monotonic function of polarization at all thermodynamic parameters. The compressibility reveals a phase transition into a phase-separated state for both low and intermediate temperatures at intermediate interactions as one increases the polarization. For certain filling factors, we find the re-entrant transition into the mixed phase at a higher polarization.
I. INTRODUCTION
Ultracold fermionic atoms in optical lattices are currently attracting a great deal of interest due to the possibility of impressive experimental simulation of rich physics associated with the strongly correlated condensed matter systems [1,2]. While Feshbach resonance and laser intensities provide unprecedented control of atomatom interactions in the optical lattice, laser interference phenomena provides the control of dimensionality. Two hyperfine states of Fermi atoms play the role of the up and down spins of the electrons. Unlike condensed matter systems, the population of spin up and down particles can be independently controlled by using a radiofrequency field [3][4][5][6][7][8][9][10][11][12]. As a result, polarization can be maintained at any desired value between 0 and 100%. The term polarization here refers to a density imbalance of two hyperfine states.
This study was motivated by the possibility of using ultra-cold atoms to engineer condensed matter systems. Condensed matter systems such as transition metal oxides and rare-earth materials show collective, interaction-dominated phenomena due to electron-electron correlation effects. These phenomena, such as the Mott-insulator transition and magnetism in strongly correlated materials, are believed to be explained by the Hubbard model. The repulsive Hubbard model is the simplest model capable of explaining both metallic and insulating behaviors, as well as the magnetic properties caused by electron correlation. The insulating states can be either a band insulator, caused by the Pauli exclusion of fermions, or a Mott insulator, caused by strong on-site interactions. In the strong interaction limit, the localized magnetic properties depend on various parameters such as filling factors, orbital occupations, crystal field effects, and Hund's coupling strength.
Over the years, the Hubbard model has been the center of intense research as it captures the behavior of parent superconducting compounds and other magnetic materials. The studies of phase separation for the Hubbard model intensified after the experimental indication of phase separation of hole-rich and hole-poor regions in cuprate superconducting materials [13]. Strongly correlated electrons and holes are expected to play a key role in these materials, and their phase separation is believed to hinder the superconductivity. If the Hubbard model is the correct model for the parent compound of superconductors, can it be used to explain phase separation? This is the question that inspired us to study the phase separation of an exactly solvable one-dimensional model relevant to a flexible cold-atom experiment. It has been theoretically shown that there is no phase separation for two-dimensional bipartite lattices at any filling factor at finite temperatures [14]. In contrast, phase separation for the one dimensional Hubbard model is confirmed only close to half filling in the presence of a critical magnetic field [15]. Finite-temperature phase separation for the one dimensional Hubbard model away from half filling has not been intensively investigated except for special cases [16,17].
In this paper we investigate the phase separation of one dimensional lattice fermions by Bethe ansatz (TBA) numerical method. We use the one dimensional Hubbard model as an effective model to describe the populationimbalanced two-hyperfine mixture in the optical lattice. We study phase separation by calculating the compressibility for various parameter regimes. By investigating the compressibility, we find phase separation at finite temperatures and at intermediate interactions as one increases the polarization. For some filling factors, we find a re-entrant transition into a mixed phase at a higher polarization.
This paper is organized as follows. In section II, we discuss the geometry of the system and its connection to the one dimensional Hubbard model. In section III, we briefly discuss our finite-temperature TBA calculation scheme. In section IV, we discuss the compressibility calculations and their connections to the phase separation and re-entrant transition into a mixed phase. We devote section V to discussion of the experimental connections and we provide an experimental scheme to detect the phase separation. Finally in section VI, we draw our conclusions.
II. THE MODEL: A ONE DIMENSIONAL OPTICAL LATTICE AND THE HUBBARD MODEL
In general, a one dimensional optical lattice refers to an optical lattice generated by one set of laser standing waves. The result of combined trapping and a periodic potential gives a pancake-like shape of the surfaces of constant potential. However, the geometry we consider here is generated by a three-dimensional optical lattice where reduced dimensionality is achieved by freezing the atomic motion in the transverse direction. This can be done by operating two standing waves out of three mutually perpendicular laser standing waves at higher beam intensities. The higher intensities suppress tunneling in the transverse direction and create an array of one-dimensional lattice tubes. The dynamics of the atoms in one lattice tube can be modeled by the one dimensional Hubbard model given by

H = -t \sum_{\langle ij \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} - \sum_{i,\sigma} \mu_{\sigma} n_{i\sigma}. \quad (1)

The first term is the kinetic energy and is proportional to the tunneling amplitude t between lattice sites i and j = i + 1. The operator c†_{iσ} (c_{iσ}) creates (destroys) a Fermi atom with hyperfine state denoted by pseudo-spin σ = ↑, ↓ (±1) at lattice site i. The second term describes the on-site interaction energy U. The density operator, or occupation number operator, is n_{iσ} = c†_{iσ} c_{iσ}. Notice that ⟨ij⟩ indicates only nearest-neighbor pairs of sites; we neglect tunneling beyond the nearest neighbors. The average chemical potential is µ = (µ_↑ + µ_↓)/2 and the chemical potential difference is h = (µ_↑ − µ_↓)/2, where µ_σ is the chemical potential of hyperfine state σ. Here we neglect the confining harmonic trapping potential and consider the lattice tubes to be homogeneous in space. The effect of the trapping potential is discussed in section V.
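As an aside for readers who want a concrete handle on the model, a brute-force exact-diagonalization sketch of the hopping and interaction terms of Eq. (1) for a tiny open chain is given below. This is illustrative only: it is unrelated to the Bethe-ansatz treatment used in this paper, and the chemical-potential terms are omitted.

```python
import numpy as np

def hubbard_hamiltonian(L, t, U):
    """Dense Hamiltonian of the hopping and interaction terms of Eq. (1)
    for an open chain of L sites, in the full Fock space.

    A basis state is a pair (u, d) of bitstrings holding the up- and
    down-spin occupations.  For nearest-neighbor hops on an open chain
    the Jordan-Wigner fermionic sign is +1, so no sign bookkeeping is
    needed here.
    """
    configs = [(u, d) for u in range(1 << L) for d in range(1 << L)]
    index = {c: i for i, c in enumerate(configs)}
    H = np.zeros((len(configs), len(configs)))
    for (u, d), i in index.items():
        H[i, i] += U * bin(u & d).count("1")          # on-site repulsion
        for site in range(L - 1):                     # nearest-neighbor hopping
            mask = (1 << site) | (1 << (site + 1))
            for spin_up in (True, False):
                bits = u if spin_up else d
                if bin(bits & mask).count("1") == 1:  # exactly one site occupied
                    new = bits ^ mask                 # move the particle
                    c = (new, d) if spin_up else (u, new)
                    H[index[c], i] += -t
    return H

# For L = 2 at half filling the exact ground-state energy in the
# (one up, one down) sector is (U - sqrt(U**2 + 16 t**2)) / 2.
```

For two sites this reproduces the textbook Hubbard-dimer spectrum, which is a useful sanity check before moving to the thermodynamic methods discussed next.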
The tunneling amplitude and on-site interaction are related to the complete set of Wannier functions w_{n,i}(r) = Π_α w_n(α − α_i) localized at position r_i with band index n, where α = x, y, z are the components of Cartesian coordinates. As the band gap becomes larger than U and the temperature T, only the lowest band n = 0 is populated. For deep lattices, the lattice potential at site i can be approximated as a three-dimensional harmonic potential with frequency ω_α = 2E_R √(s_α)/ħ, where s_α E_R is the laser intensity of the standing wave in the α direction. The recoil energy E_R = (ħk)²/2m is the kinetic energy of an atom with mass m and the momentum ħk of a single lattice photon. For deep lattices, taking w_0(α − α_i) as a ground-state harmonic oscillator function with frequency ω_α, the tunneling amplitude in the one dimensional geometry becomes t ≈ (4/√π) E_R s_x^{3/4} exp(−2√s_x), where V(x) = s_x E_R sin²(kx) is the periodic potential generated by counter-propagating lasers in the x direction. The lattice constant d = λ/2 is related to the laser wave length λ, hence the wave vector k = 2π/λ. The on-site interaction becomes U ≈ √(8/π) k a_s E_R (s_x s_y s_z)^{1/4}. Notice that the on-site interaction U can be repulsive or attractive depending on the free-space s-wave scattering length a_s. In the present work, we consider a tight one dimensional geometry in the x direction with a positive U modeled by Eq. 1. Notice that the ratio t/U can easily be controlled by the laser intensity I ∝ s_x of the counter-propagating lasers in the x direction. In our model, the laser intensities in the transverse directions, proportional to s_y and s_z, are maintained at higher values so that tunneling in the transverse direction is negligible.
III. THERMODYNAMIC BETHE ANSATZ METHOD
Lieb and Wu have shown that the model presented in the previous section is exactly solvable in one dimension using the thermodynamic Bethe-ansatz method [18]. Following Takahashi [19,20], the thermodynamic potential per site Ω is expressed in terms of the energy per site, given as e_0 = 2tI, and two distribution functions of the k's and Λ's. Two auxiliary functions enter these expressions: a_1(x) = 4u/[π(u² + 16x²)] and s(x) = csc(2πx/u)/u, with u = U/t. The quantity I is related to the mth order Bessel functions J_m(x). The particle-hole ratios of the k excitations and Λ excitations, ξ(k) and η_1(k), are obtained from an infinite set of nonlinear integral equations. The average chemical potential and the chemical potential difference enter the formalism through the grand potential Ω. In order to calculate the thermodynamic potential numerically, one has to cut off the infinite set of equations at a finite number j. We achieve this by following the numerical procedure proposed by Takahashi et al. [21]: the infinite set of equations is truncated by replacing s(Λ) by δ(Λ)/2 at j > n_c. The integral equations are then converted into a set of matrix equations in which 2n_c + 1 unknown functions are represented in terms of discrete points of k and Λ. These non-linear matrix equations are solved iteratively using Newton's method for a given temperature (T), average chemical potential (µ), and chemical potential difference (h). The details of the numerical procedure can be found in Refs. [21][22][23]. From the numerical solutions of the non-linear integral equations, we first calculate the thermodynamic potential Ω using Eq. (2); the particle density n ≡ n_↑ + n_↓ = −∂Ω/∂µ and the magnetization (the density difference of the two hyperfine states, n_↑ − n_↓) m = ∂Ω/∂h then follow.
The compressibility is then calculated numerically at a constant polarization P = m/n using the second derivative of the thermodynamic potential with respect to the chemical potential [21,24].
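The final differentiation step can be sketched numerically. The toy example below replaces the TBA thermodynamic potential with the exactly known U = 0 (free-fermion) grand potential at h = 0, and extracts n = −∂Ω/∂µ and κ = −∂²Ω/∂µ² by central finite differences. This is a sketch of the differentiation procedure only, not of the TBA solver, and the function names are illustrative:

```python
import numpy as np

def omega(mu, T=0.5, t=1.0, nk=4096):
    """Grand potential per site of free (U = 0) 1D lattice fermions at h = 0."""
    # a uniform grid over one Brillouin zone; the mean over k replaces
    # the integral dk/(2*pi) for this periodic integrand
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    eps = -2.0 * t * np.cos(k)                       # tight-binding band
    integrand = np.log1p(np.exp(-(eps - mu) / T))
    return -2.0 * T * integrand.mean()               # factor 2: two spin states

def density_and_kappa(mu, d=1e-3, **kw):
    """n = -dOmega/dmu and kappa = -d2Omega/dmu2 by central differences."""
    om_m, om_0, om_p = omega(mu - d, **kw), omega(mu, **kw), omega(mu + d, **kw)
    n = -(om_p - om_m) / (2.0 * d)
    kappa = -(om_p - 2.0 * om_0 + om_m) / d ** 2
    return n, kappa

# At mu = 0, particle-hole symmetry pins the band at half filling (n = 1,
# counting both spins) and the compressibility is positive.
```

With the Ω(µ, h) produced by the truncated TBA equations, the same finite-difference stencil yields the κ(P) curves discussed in the next section.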
IV. THE RESULTS: IDENTIFYING PHASE SEPARATION AND RE-ENTRANT TRANSITION THROUGH COMPRESSIBILITY
We examine the stability of the mixed phase through the sign of the compressibility. Negative compressibility indicates an instability of the mixed phase, where the system enters into a phase-separated state. Figure 1 shows the compressibility at a constant interaction strength and a constant finite temperature for various values of density. The compressibility is always positive close to the densities of half filling and zero filling. However, away from these two limits, the compressibility becomes negative and then positive again as one increases the polarization. Notice the compressibility for filling factors n ≃ 0.5 and n ≃ 0.75 in the figure. The negative compressibility indicates the instability of the mixed phase, meaning that the system is phase separated into two distinct phases corresponding to the pseudo-spins. A further increase of polarization causes the system to make a re-entrant transition into the mixed phase. In the mixed phase, both spin components coexist in the same region of space. This positive compressibility at higher polarization does not by itself guarantee the stability of the mixed phase over the phase-separated phase. One has to compare the energies of the phase-separated state and the mixed phase to determine the stability. The zero-temperature stability of the mixed phase at higher polarizations is justified in Ref. [17]. This justification has been confirmed by comparing the ground-state energies in both mixed and phase-separated states using both weak and strong coupling approaches. We believe this is true even for finite temperatures. The comparison of finite-temperature energies of the phase-separated state and the mixed state is not trivial, and these calculations are beyond the scope of the present paper. The zero-temperature instability of the mixed phase at higher polarizations has already been established within the bosonization theoretical framework [16].
Bosonization theory suggests that phase separation occurs for U/t ≥ 4π{sin[π(n + m)/2] sin[π(n − m)/2]}^{1/2}. As shown in Fig. 2, the mixed phase is stable only at higher densities, low polarizations, and low interaction strengths. The phase-separated state is stable at higher polarizations; however, unlike at finite temperatures, the system does not make a re-entrant transition into the mixed phase at zero temperature. The compressibility at various interaction strengths and temperatures is shown in Fig. 3 and Fig. 4, respectively. Here the compressibility is calculated at a desired polarization P by varying the chemical potential difference h and keeping the average chemical potential at a representative fixed value, µ = 3t. Notice that the mixed phase is stable for the entire range of polarization at both small and large interactions. This can be justified by the compressibility in the infinite-interaction and non-interacting limits. In the infinite-interaction limit, the TBA equations can be solved to obtain analytical results [25]. In the limit U → ∞, the polarization is P = tanh(βh)/2 and the compressibility reduces to an expression proportional to β cosh(βh) multiplied by an integral over a positive function f(k). As f(k) > 0 for all k values, the compressibility in the infinite-interaction limit is always positive. This is intuitive, as the system can be considered as spinless fermions in this limit. On the other hand, in the limit U → 0, no phase separation occurs as the system consists of non-interacting fermions. Again, the negative compressibility at intermediate interactions signals the phase separation into the two pseudo-spin states.
Consider the temperature dependence shown in Fig. 4. At low temperatures, the mixed phase makes a transition into a phase-separated state and then a re-entrant transition into the mixed phase at higher polarization. In contrast, the mixed phase is stable at higher temperatures (small β) over the entire range of polarization. The high-temperature expansion of the thermodynamic potential for the one dimensional Hubbard model has been carried out up to fourth order in β by Charret et al [26] and up to sixth order in β by Takahashi et al [21]. Using the sixth-order expansion, we confirm by an analytic calculation that the compressibility is positive at higher temperatures. The high-temperature expansion of the compressibility and the polarization up to sixth order is given in the Appendix.
Notice that compressibility is a non-monotonic function of polarization for all temperatures and interactions. In contrast, compressibility is a non-monotonic function of the interaction parameter only for larger polarizations. However, as evident from Fig. 3, compressibility is a monotonic function of temperature for the entire range of polarizations.

[Caption of Fig. 5: We define the scaled length z̃ = z √(mω²/2), where ω is the one dimensional trapping frequency. We fixed the on-site interaction (U = 2t) and the inverse temperature (βt = 1).]

It is worth mentioning that a small density imbalance can be induced in condensed matter electronic systems by applying an external magnetic field. Thermodynamic properties of such one dimensional systems are thoroughly discussed in Ref. [20]. Though finite-temperature compressibility as a function of polarization is not discussed there, special attention has been given to ground-state properties such as susceptibility, magnetization, and densities [27].
V. CONNECTIONS TO EXPERIMENTS
Recent progress in experimental techniques with ultracold atoms, such as single-site detection [28,29], noise correlations [30,31], Bragg scattering [32], and in situ imaging in the lattice scaling [29], allows one to probe the density variations in cold-atom experiments. For the case of equal-population two-component fermions on a three-dimensional cubic lattice, the compressibility has already been measured [33].
Though we neglected it in this study, the underlying harmonic trapping potential present in all cold-gas experiments causes the density to vary across the lattice (see Fig. 5). By combining the TBA solutions with the local density approximation (LDA), we extract the local density n(z), magnetization m(z), and polarization P(z). In the LDA, the external trapping potential V_i = mω²z²/2 at site i is related to the local chemical potential through the relation µ_i = µ_0 − V_i, where ω is the one-dimensional trapping frequency, µ_0 is the central chemical potential, and z = id, with lattice constant d, is the spatial coordinate. As shown in Fig. 5, the density monotonically decreases, while the polarization monotonically increases, from the center to the edge of the trap [23]. This trap-induced inhomogeneity allows both mixed-phase and phase-separated states to exist simultaneously inside the trap. At the center of the trap, the density is higher and the polarization is lower; at the edge of the trap, the polarization is higher and the density is lower. As a result, the mixed phase should exist at the center and at the edge of the trap. However, depending on the density, the phase-separated state can exist in the middle (not the center) of the trap. Therefore, by adjusting the total density in the trap, the polarization-induced phase separation and re-entrant transition can be investigated experimentally with currently available techniques.
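The LDA mapping described here is simple to sketch: given any homogeneous equation of state n(µ), the trap profile follows from µ(z) = µ_0 − mω²z²/2. The code below uses a toy sigmoidal n(µ) in place of the TBA solution (an assumption for illustration only):

```python
import numpy as np

def lda_profile(n_of_mu, mu0, m_omega2, z):
    """Local density n(z) via the local chemical potential mu(z) = mu0 - V(z)."""
    mu_local = mu0 - 0.5 * m_omega2 * z ** 2   # harmonic trap, V = m w^2 z^2 / 2
    return n_of_mu(mu_local)

# Toy equation of state (a stand-in for the TBA result): density grows
# smoothly with mu and saturates at n = 1.
n_of_mu = lambda mu: 1.0 / (1.0 + np.exp(-mu))

z = np.linspace(0.0, 5.0, 101)                 # distance from the trap center
n = lda_profile(n_of_mu, mu0=3.0, m_omega2=1.0, z=z)
# n decreases monotonically from the center (z = 0) to the edge
```

With the true TBA n(µ, h) in place of the toy function, the same mapping yields m(z) and hence P(z) = m(z)/n(z) across the trap.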
VI. CONCLUSIONS
In conclusion, we considered two-component Fermi atoms in a highly tunable optical lattice to study the phase separation of fermions in one dimension. We have calculated the compressibility of one-dimensional lattice fermions using the thermodynamic Bethe ansatz method.
We find that the compressibility is a non-monotonic function of polarization. At filling factors 0 < n < 1, at low temperatures and intermediate interactions, the compressibility becomes negative, indicating an instability of the mixed-phase state towards the phase-separated state. For some parameters at higher polarizations, the compressibility becomes positive again, indicating a re-entrant transition into a mixed phase. These phase-separation and re-entrant transitions can be detected using currently available experimental techniques.
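The diagnostic used throughout, the sign of the compressibility κ ∝ ∂n/∂µ, can be checked on any computed n(µ) curve by finite differences: an interval of negative slope signals the instability of the mixed phase, and a return to positive slope mirrors the re-entrant behavior. The n(µ) data below are illustrative, not TBA output:

```python
def unstable_intervals(mu, n):
    """Indices i where the finite-difference slope
    (n[i+1] - n[i]) / (mu[i+1] - mu[i]) is negative, i.e. where the
    compressibility kappa ~ dn/dmu < 0 and the mixed phase is
    unstable toward phase separation."""
    return [i for i in range(len(mu) - 1)
            if (n[i + 1] - n[i]) / (mu[i + 1] - mu[i]) < 0]

# Illustrative curve: density dips over one interval (negative kappa),
# then rises again (re-entrant return to a compressible mixed phase)
mu = [0.0, 0.5, 1.0, 1.5, 2.0]
n = [0.20, 0.40, 0.35, 0.50, 0.60]
print(unstable_intervals(mu, n))  # → [1]
```

The same check applied to an experimentally measured density-versus-chemical-potential curve would locate the phase-separated window directly.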
VII. ACKNOWLEDGEMENTS
We thank Joseph Newton for critical comments on the manuscript.
Heritage and Patrimony of the Peasantry: an analytical framework to address rural development
The term “rural development” is exceptionally multifaceted, which makes it difficult to define. This and other features make it a 'wicked problem', meaning that the consequences of rural development problems can create further complications. To date, the important discussion of rural development has dealt with productivity and economic concerns, yet it has many other crucial aspects, such as the environment, infrastructure, and respect for fundamental rights. This paper describes 'Heritage and Patrimony of the Peasantry' as an alternative analytical framework for addressing rural development. The framework takes important topics from other rural development perspectives, but is primarily focused on food sovereignty principles. It moves away from the market point of view, which converts everything into an asset that can be marketed, and draws on other facets of heritage. The peasantry has seven kinds of 'heritages' or 'patrimonies': natural, cultural, economic, physical, social, institutional, and human. These heritages or patrimonies are the bases for constructing a decent standard of living that, in turn, will accomplish full rights for all rural inhabitants, i.e. rural development.
Introduction
Rural development and the alleviation of poverty have been a primary concern for many governments in developing countries over the last few decades. Though there have been impactful advances in many communities, the strategies and solutions proposed have neither ensured an acceptable quality of rural life nor guaranteed respect for the rights of all rural inhabitants (Scoones, 2015). This paper suggests an alternative analytical framework for addressing rural development in a straightforward way. Based on a literature review and on the idea of heritages and patrimonies, it takes different points of view into consideration and suggests a way in which all heritages can work together and, thereby, achieve a better life for all rural inhabitants.
Rural development, a 'wicked problem'

Rittel and Webber (1973) defined a 'wicked problem' as a malignant, tricky or aggressive condition enclosed in a vicious circle. A 'wicked problem' is difficult to explain and solve for several reasons. The first challenge stems from an incomplete understanding of a situation or from contradicting information (Roberts, 2012). In other words, it is hard to define and fix something clearly and completely if there is a lack of comprehension (Kuhmonen, 2018). Second, with many people come many opinions, which makes it difficult to decide how to tackle a problem (Norris et al., 2016). Third, there are often great financial burdens and barriers associated with wicked problems (Gharehgozli et al., 2017). Finally, it is difficult to make accurate assessments and thorough changes when there are so many intertwined problems (Dutta, 2018). On top of that, it is difficult to know whether taking action could create unwanted or unforeseen complications (Probst and Bassi, 2014; Innes and Booher, 2016).

Rittel and Webber (1973) defined ten characteristics of wicked problems that can be applied to understanding the complexities of addressing rural development issues and strategies. First, wicked problems have no conclusive formulation (Zijp et al., 2016). Concerning rural development, several approaches, from the technocratic point of view to a new political approach represented by food sovereignty, have tried to address many issues. Each approach offers a set of steps and solutions for rural development problems. However, so far these solutions have not been comprehensive enough to yield a definitive understanding of the entire problem and how to fix it (Pachón et al., 2016).
Second, it is difficult to quantify or declare success with wicked problems, primarily because they create many other problems (as opposed to conventional problems, whose limits can be explained or interpreted) (Elia and Margherita, 2018). There is often disagreement about the causes of rural development problems. Sometimes politicians and technicians blame the idiosyncrasy of rural people (Castro-Arce and Vanclay, 2019). Others blame the policies, especially in developing countries. The fact is that rural inhabitants in many places remain trapped in poverty, illiteracy, and illness. In other words, rural development has exceeded the capacity and/or willingness of their governments to deal with these very problems (Head and Alford, 2015).
Third, the solutions to wicked problems are not true or false, but better or worse; no answer is perfect or definitively superior to the others. What matters is that these approaches offer tractable methods for the condition we are trying to improve (Farrell and Hooker, 2013). Rural development approaches, especially from the technocratic perspective, have proposed alternatives for solving the problems of rural communities. Unfortunately, these attempts have often led to unforeseen outcomes that can occasionally be extremely deleterious for community dynamics, economics, and the environment (Kay, 2009). New solutions create extra dimensions that must be integrated into the analysis before steps towards change are taken, to ensure that unintended consequences do not arise (Luckey and Schultz, 2001).
Fourth, there is no pattern to follow when confronting a wicked problem, despite the guidance the past can offer. People working on wicked problems must build new ways and ideas as they go along (Dentoni and Bitzer, 2015). So far, the widespread approaches have offered only partial solutions to rural development challenges. Their focus has mainly been on economic activities rather than on the people themselves, and their solutions have aimed at increasing incomes while treating rural people in isolation. Every rural community has its own needs and wishes, and the solutions to these needs must be constructed taking into consideration the opinions of rural people themselves. These processes, constructed from the bottom up, require flexibility to accommodate dissimilar situations and, therefore, to maintain the legitimacy of including people in decision-making processes (Chambers, 1983).
Fifth, there are several explanations for a wicked problem, and the pertinence of each explanation depends on the particular perception of the designer. As described previously, the main approaches to rural development explain the consequences of rural problems and propose a course of action to solve them (Gold et al., 2018). The technocratic perspectives have focused their proposals on an economic point of view: from the green revolution to neoliberalism, and from import substitution industrialization (ISI) to neostructuralism, the modernization of agricultural production has been deemed the answer to rural development problems. In contrast, a sociological approach has focused on the rural inhabitants' personal and communal needs. In the center we find the socio-technocratic approach, which analyzes productive problems in a social context and proposes competitiveness as the way to solve them (Kay, 2009). Another example is the political approach, which has used food sovereignty to focus on the rights of rural inhabitants and consumers as its response to rural development problems (Pachón et al., 2016).
Sixth, every negative consequence of a wicked problem is a symptom of another problem. Equally, the causes of problems are, at the same time, the consequences of others. Rural development problems are narrowly interconnected with the causes and consequences of many other problems (Andersson and Törnberg, 2018). For instance, illiteracy and a low level of education in rural areas are among the reasons for other phenomena such as poverty, lack of participation, and low agricultural production. Likewise, when people do not know how to read and write, their integration into society is harder than it is for those who do (Leverenz, 2014). Rural poverty is narrowly related to low agricultural production, although high agricultural production does not guarantee freedom from poverty. Clearly, identifying the main causes of rural development problems is a complicated task. That is why a multidisciplinary approach is necessary when addressing these problems (Pacanowsky, 1995; Norris et al., 2016).
Seventh, solutions to a wicked problem lack a decisive scientific test, because society and scientists understand problems differently. The scientific approaches to addressing rural development are incomplete (Tietjen and Jørgensen, 2016). A multidisciplinary approach that takes the interactions and connections into consideration, and then places the emphasis on people's rights over economic concerns, might be better for tackling a wicked problem such as rural development. Rural development policy actions have partially failed in the last decades because of the lack of a "people first" mindset. For instance, the distribution of power among rural stakeholders remains concentrated in those who hold land, money, and political influence (Roberts, 2000).
Eighth, finding a "solution" to a wicked problem usually requires a design effort, as opposed to a rigid strategy, which reduces the opportunity for trial and error (Came and Griffith, 2018). Rural development seems to go beyond the capacity of governments and public policies, which creates dissatisfaction among rural and, sometimes, urban inhabitants (Brugue et al., 2015). Traditionally, public policies have addressed rural development problems from a single discipline, almost entirely avoiding the integration of other concerns (Pachón et al., 2016).
Ninth, even though rural development challenges are similar in many places, the solutions vary drastically. The problems are similar because public policies, especially in developing countries, have followed the same pattern based on the green revolution and neoliberalism (Kay, 2009; Pachón et al., 2016). Hence, the consequences of such policies trigger analogous problems and difficulties. However, the solutions to these problems are different everywhere (Bitsch, 2009), because they must be formulated based on the peculiarities of the rural areas and the idiosyncrasy of their people. Obviously, the rural inhabitants themselves should construct such solutions, further increasing the variation among them.
Tenth, the designers trying to tackle a wicked problem must be held responsible and accountable for their actions. Governments must acknowledge that they are responsible for the consequences of the rural policies that have tried to solve rural development problems (Xiang, 2013). However, in many places the rural inhabitants themselves have been suffering from the effects of such policies, due to a lack of accountability. Rural inhabitants are often isolated from a society that seldom recognizes their importance (Probst and Bassi, 2014).
Rural development is a complex and interdependent situation that is difficult to explain and comprehend (Anderson, 2003). It has been improperly understood, which means that the different approaches to addressing it have been incomplete. Some strategies have successfully helped to manage and solve problems. However, many problems related to rural development, such as poverty, illiteracy, income inequality, lack of access to health care and education, degradation of the environment, and lack of access to credit and technical assistance, still remain. Especially in developing countries, the persistence of issues such as poor infrastructure, isolation, and absence of social recognition only fuels the difficulty of solving rural development problems (Chambers and Conway, 1992; Ellis and Biggs, 2001; Brass, 2002; Molina, 2010). Two significant questions emerge from the above debate: What have been the central themes of successful approaches to rural development? And what are the most important characteristics to take into consideration when approaching and solving a wicked problem such as rural development?
How to address a wicked problem
The most efficient way to tackle a wicked problem, such as rural development, is through an interdisciplinary and transdisciplinary framework. The integration of different disciplines, points of view, and an innovative analytical framework based on such amalgamation allows us to address the complexity of real life (Norris et al., 2016;Elia and Margherita, 2018).
The characteristics of social problems regarding rural development are complex, ambiguous, and uncertain (König et al., 2013). However, the disciplines and traditional approaches to planning try to simplify these problems, splitting them up for the purpose of analyzing every component separately (Espina, 2007). Such separation reduces the scope of analysis of the methods, minimizing the attributes that emerge from the interaction of all the factors. Indeed, reality requires comprehensive analytical frameworks that overcome the boundaries of disciplines. Comprehensive analytical frameworks enable us to address complex problems successfully and efficiently throughout the process (McKee et al., 2015; Henriksen, 2016).
A holistic analytical framework allows the identification of a complete and wide-ranging image of the problems. Such a methodology attempts to tackle the complexity of problems and allows a better understanding of all their synergies and connections (Delgado and Rist, 2011). Equally, a comprehensive analytical framework recognizes the emergent nature of problems in rural territories, which are ever-changing. Usually, new situations, attributes, and problems appear according to the interaction of every component.
Besides the holistic analytical framework, adequate organization is necessary to address wicked problems. Members of an organization who usually come from diverse disciplines must share similar objectives, cooperate, and, most importantly, be able to manage heterogeneity and the complexity of the disciplines (König et al., 2013). The organization must be able to manage conflicts stemming from various points of view. Finally, and maybe most importantly, the organization must take into consideration previous research and proposals that have addressed problems to avoid wasting significant time and energy trying to do something that somebody else has already done.
Interdisciplinary and transdisciplinary frameworks
The academic community (Dewey, 1938;Miguélez, 2009;Olivé, 2011;Raasch et al., 2013) commonly defines an interdisciplinary framework as the integration, combination, or mixture of scientists of two or more disciplines, fields, bodies of knowledge, or modes of thinking. An interdisciplinary framework brings skills, techniques, concepts, and expertise to create meaning, explanations, solutions, understanding, and alternatives for tackling complex problems that have been incompletely understood or are socially complicated (Norris et al., 2016).
Scientists working under an interdisciplinary framework must demonstrate willingness, temperament, and commitment to cross the boundaries of disciplines because their results depend on the relationships, judgement, and dialogue with the scientists of other areas (Dentoni and Bitzer, 2015;Gharehgozli et al., 2017). An interdisciplinary framework is necessary for innovation and, in fact, it has been stimulated by international funding (Millar, 2013). It operates primarily at a university level, because there is greater access to know-how, tools, and funds. In addition, universities offer transversal enrichment, prestige and the acquisition of reputation, learning of techniques, efficiency enhancement, and recruitment of scholars (van Rijnsoever and Hessels, 2011). However, its implementation and outcomes at the institutional level are still doubted by the scientific community (Elia and Margherita, 2018).
A transdisciplinary framework aims to understand and address complex problems through the interaction of diverse disciplines (Dentoni and Bitzer, 2015). Besides scientists of specific fields, this interaction includes other stakeholders who come from any discipline, for instance, peasants who can make relevant contributions (Olivé, 2011). The main goal of a transdisciplinary framework, besides tackling complexity, is to create novel concepts, methods, and approaches that improve on disciplines. Hence, in a transdisciplinary framework, there is a dialogue between the scientific and empirical knowledge, and as a result, interesting epistemological bridges are created (Miguélez, 2009) that strengthen both science and practice.
A transdisciplinary framework is greater than a mere sum of the disciplines. It is a collaboration among them, a method to merge knowledge where the boundaries of the disciplines are blurry (Espina, 2007). These methodologies are characterized by an emergent attribute that bridges the gap between disciplines and implies a novel transcultural, transnational, and transpolitical approach. Zemelman (2001) argues that a transdisciplinary framework must take all the inputs and outputs into consideration as a unified whole in order to explain and solve problems. He suggests avoiding methodologies focused on factorial logic. Instead, he proposes a methodology focused on a matrix of complex relationships with reciprocal effects. In this matrix, the problem is analyzed as a network, emphasizing all the dimensions and connections that depend on each other (Dutta, 2018). In the scope of rural development, challenges must be addressed and measured both individually and jointly to better understand the causes of their outcomes. In other words, addressing the problems of rural development in a transdisciplinary framework means identifying all the connections among the problems and the consequences of these relations (Fig. 1).
Figure 1 displays some of the problems of rural territories and some of their consequences. It also establishes the relationships among them, whether as cause or consequence. For example, education is one of the most important topics that determines the quality of life and exerts a strong influence on other subjects such as migration, land use, and poverty (Brown and Park, 2002). Education affects migration because in some rural areas young people who hold a medium or high educational level usually migrate to urban areas looking for jobs related to their backgrounds. However, when educated people remain in rural areas, positive changes in land use, conservation of biodiversity, and female participation in decision making are evident (Gustafsson and Li, 2004). A similar description could be established with the other problems. For example, social justice, one of the main demands of the peasantry around the world, is directly connected to rural policies, social acknowledgement, and access to markets. Since rural developmental problems are narrowly associated with one another, none of them should be addressed separately. An interdisciplinary and transdisciplinary framework is decisive for solving most of the main problems and their consequences integrally. In this scenario, 'Heritage and Patrimony of the Peasantry' is the proposal of an analytical framework to address rural development that integrates many of the concerns of rural populations and incorporates the main characteristics of the most important rural developmental approaches, especially food sovereignty (Desmarais, 2002;Holt-Giménez and Altieri, 2013).
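The matrix-and-network reading of Fig. 1 can be sketched as a small directed graph. The cause-and-consequence edges below are a hypothetical subset drawn from the examples in the text (education, migration, poverty, agricultural production), not a full encoding of the figure:

```python
# Directed cause -> consequence links among rural development problems,
# following the network reading of Fig. 1 (edges are illustrative only)
links = {
    "education": ["migration", "poverty", "land use"],
    "poverty": ["low agricultural production"],
    "low agricultural production": ["poverty"],  # reciprocal effect
    "rural policies": ["social justice", "access to markets"],
}

def reachable(graph, start):
    """All problems influenced, directly or indirectly, by `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Education influences poverty and, through it, agricultural production
print(sorted(reachable(links, "education")))
```

Traversing such a graph makes the chapter's point concrete: no problem can be addressed in isolation, because intervening on one node propagates along the cause-and-consequence edges to the others.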
Heritage and patrimony of the peasantry, an alternative analytical framework
Initially, it is important to define rural development and the heritages of the peasantry on which this alternative viewpoint builds. Rural development here means providing all rural residents with a decent standard of living, which can only be accomplished through the protection of their human rights (Rosset, 2003; Borras Jr., 2009). The heritage and patrimony of the peasantry framework aims to organize, as much as possible, the topics involved in rural development problems by addressing them in an interdisciplinary and transdisciplinary framework. The framework is based on four milestones: rural territory, heritage and patrimony, quality of rural life, and respect for human rights. Figure 2 shows the interaction of these milestones.
Rural territory
It is important to understand, in general, what rural territory means. A territory is defined as a space that holds feelings of identity and collectively constructed ideas of development, whose transformation is a result of the mobilization and appropriation of its inhabitants (Schejtman and Berdegué, 2003; Jouini et al., 2019). Beyond the differences between the rural and urban concepts based on population totals, three main approaches have analyzed this concept: as a historical process, through its functionality, and from an environmental viewpoint. Rural territory as a historical process is tightly linked to the meaning of the territory for its inhabitants. In this sense, rurality is a series of social networks whose inhabitants' livelihoods rely on the rational use of available resources (Chambers and Conway, 1992). Furthermore, the relationships among these inhabitants are characterized by tradition and culture, the basis of rural identity. Rural territory and its inhabitants are characterized by a behavior that symbolizes an appropriation of the spaces and their resources, where the population shares feelings of identity, cooperation, and a sense of belonging (Dirven et al., 2011). Even though many members of the new generations have migrated to urban places, these feelings remain deeply rooted out of respect and love for their heritage and ancestry.
Traditionally, the functionality of rural territories has been related to the economic activities performed there. For instance, crop or livestock production can be strongly influenced by culture and tradition. However, another type of agricultural production is strongly influenced by the market (Gutierrez-Montes et al., 2009). That production is highly specialized, industrialized, and organized in groups of people very close to each other, or clusters by vicinity, according to the possibility of using natural resources, such as land and water, or the natural advantages for mining or tourism. These clusters ultimately seek to improve competitiveness and increase individual profit. The benefit of organization in clusters is its ability to facilitate the offering of technical services, inputs, and support, on the assumption that the profitability could be transferred into the territory and to other inhabitants who do not participate in the cluster (Echeverri, 2011).
The environmental point of view highlights concerns related to climate change and the possibility that rural activities mitigate the factors that increase global warming. As more people have realized the consequences of global warming and its impact on daily life, rural territories have gained relevance because they offer additional services compared to the traditional ones. These services are related to an alternative model of development based on ecosystem services, represented by environmental markets and environmental supply (Dirven et al., 2011).
The previous discussion emphasizes the multifunctionality and pluriactivity of rural territories. However, beyond the multifunctionality of rural areas, it is crucial to take into account more integrative ideas such as the "inter-functionality" of rural territories. "Inter-functionality" means that there should be stable relationships, close interactions, and deep integrations among all the functions and activities developed there (Florian, 2012;Kolstad, 2012). The primary goal of the "inter-functionality" is to preserve all the heritages of the peasantry present in these territories.
An example in which the inter-functionality of rural areas is not working appropriately is found in territories where monoculture is predominant, undermining the possibility of producing food to feed their inhabitants. Often, the target of the monoculture is a well-paid international market. The region of Uruapan in the State of Michoacan (Mexico) is a true archetype of this kind of production. Avocado is a widespread monoculture there, mainly destined for the United States market. It is produced by peasants, small, medium and large farmers, as well as by multinational food companies. This monoculture, which is indeed well paid, has increased the incomes of many people (input sellers, transporters, harvesters, and packers) who are directly and indirectly related to production (Pachón et al., 2017b).
The international peasant movement La Via Campesina and its proposal for food sovereignty, through the Declaration of Nyéléni (2007), describe the principles that, according to their deliberations, are essential for improving quality of life and for guaranteeing that the rights of the peasantry and all rural inhabitants are respected. Figure 2 shows some of these principles (the interaction inside the rural territories plane). Behind these principles lies a political dimension because, although essential, the technocratic dimension has proved insufficient compared to the other rural aspects. Primarily, neoliberal and neocolonialist proposals, as well as the World Trade Organization, free trade agreements, and other policies, exclude the peasantry (Pachón et al., 2016). In this scenario, systems that allow unfair trade, such as dumping and subsidy schemes in developed countries, and those that undermine the subsistence of small-farmer production in developing countries, are shunned (Barker, 2007).
Heritage and patrimony
The next crucial point is heritage and patrimony. At this level we organize the seven kinds of heritage and patrimony that the peasantry must combine to improve their quality of life and to ensure that their rights are respected (Pachón, 2013). The first issue to discuss is the meaning of heritage, followed by a description of each element of the proposed heritage. Heritage is a net of beliefs, traditions, and customs that a civilization considers significant to its history, culture, and identity (Littaye, 2016). Heritage must be understood in the scope of patrimony: the structures, articles, or concepts that a civilization receives from the communities who lived before it. For the current framework, therefore, heritage and patrimony can be understood in the same way (Cominelli and Greffe, 2012). Beyond the concept, many aspects enrich and transform heritage and patrimony into one of the milestones of the current framework (Calvo et al., 2017).
First, we must look at the social importance of heritage and patrimony. These constitute the traces of memory that represent a social fact, legitimized as something worth analyzing, preserving, and inventorying. Hence, they are socially appreciated as a cultural phenomenon, such as collective memory (Criado-Boado and Barreiro, 2013). Heritage and patrimony are thus the results of social construction, a symbol for the dissemination of collective memory.
Second, we must look at the cultural importance of heritage and patrimony. This is the repository that gathers common behaviors from different societies and groups, ways to solve difficulties, knowledge, values, symbols, and socio-cultural frameworks. Heritage and patrimony are used as a means to illustrate the culture, traditions, customs, background, and landscapes (Dormaels, 2012).
Finally, we must look at the identity importance of heritage and patrimony. People appreciate heritage and patrimony as something personal and distinguishable, impossible to separate from the admiration and respect of peoples, communities, and individuals. For that reason, heritage and patrimony are valued, managed, and conserved; something that is poorly appreciated is no longer valued as heritage and patrimony. They form a network of life paths, beliefs, values, emotions, and meanings that offers a source of identity and adds value to social, political, and economic claims. It is the process of the unification of identities (Santos, 1993).
Heritage and patrimony are the expressions of the accumulation of knowledge through time. They are the way to understand and link the history and the traditions of our past with our present. At the same time, heritage and patrimony are the best way to construct the future (Calvo et al., 2017). Figure 3 describes the heritage and patrimony of the peasantry framework as a virtuous circle, in which heritage and patrimony satisfy existential needs (being, having, doing, interacting) and axiological needs (subsistence, protection, affection, understanding, participation, leisure, creation, identity, freedom), affirm the identity of the peasantry and the territory, promote values such as solidarity, and define the characteristics of the territory. Heritage and patrimony must be, and are, appreciated and valued because they constitute a fundamental part of our lives. Venerated heritage and patrimony are protected and saved because they conserve part of our history. If heritage and patrimony are appreciated and protected, society in general will ponder the importance of the peasantry and will encourage them in the coming generations. That promotion will inspire the essential values of the peasantry. The cycle will then end, but will start again when heritage and patrimony invoke the satisfaction of fundamental human needs (Max-Neef et al., 1994).
The circle begins with the recognition of the importance and significance of the peasantry and their customs from society as a whole. People must appreciate how rich the peasantry is, beyond producing vital food, and how important it is that they maintain their rootedness (Wittman et al., 2010). People must also recognize that several customs of the peasantry are the best options for mitigating the consequences of climatic change. In addition, people must understand that the peasantry and their activities indirectly provide many of the products and raw materials used in urban areas. In other words, people must recognize the special qualities of the peasantry, the places where they live, and the things that they have done. If society properly appreciates the peasantry, their value will gradually increase, and, in turn, society will protect the peasantry (Patel, 2009).
The second step is the protection of the peasantry and their customs by society through collective action. For example, people must defend the peasantry from the policies that affect their customs and traditions, such as the disadvantages of free trade agreements. People can also help save the landscapes and rural environment against harm and damages to preserve them to mitigate the effects of climate change. This will help to defend the peasantry from expulsion from their lands and territories (Bebbington, 1999). When society protects the heritage and patrimony of the peasantry, society will, in turn, promote the heritage because it is important for new generations.
The third step is the promotion of the heritage and patrimony of the peasantry by society, especially among the new generations. An example of this is people supporting the peasantry by purchasing their products at a fair price. In this way, society helps the peasantry to reach a decent quality of life and helps to ensure respect for their human rights (Parrado and Molina, 2014).
The human scale of development defines basic measurements for human needs for both urban and rural populations. This is the last step of the circle (Max-Neef et al., 1994). The heritage and patrimony of the peasantry allows the rural population to satisfy their human needs because their heritage creates levels of self-reliance. It also articulates the satisfaction of human needs with environmental, technological, global and local processes, and for individuals within their communities. The human developmental scale describes two types of human needs: existential and axiological. These needs are multiple, interdependent, finite, few, and classifiable (Fig. 3). They create an interactive network whose key features are simultaneity, complementarity, and trade-offs, which characterize the process of satisfying human needs (Max-Neef et al., 1994).
Finally, we must treat the heritage and patrimony of the peasantry as invaluable. They are not marketable as part of their identity, as a social construction. In this scenario, the idea of 'capital' is no longer used. Capital is associated with the process of purchasing commodities in one place and selling them in another for profit (Flora et al., 2015). That means that the idea of the peasantry regarded just as a food supplier is excluded, forgetting its social prominence as part of the origin of the majority of societies. Because of these two different facets, patrimony can be categorized as tangible and intangible (Holt-Giménez and Altieri, 2013). Tangible patrimony is defined as those assets that are measurable, that people can touch. Intangible patrimony is the assets that are not able to be touched and which are difficult to clarify and describe (Calvo et al., 2017).
Tangible Patrimony Economic Heritage and Patrimony
Clearly, this heritage refers to monetary resources available for an individual, a family, and for the society. The discussion about this issue has been carried out in two different ways. First, we analyze the origin of these funds and how they have been earned. Then, we analyze the way family members in a household spend their money. Regarding this, it is important to understand that having more income does not necessarily improve rural development (Gutierrez-Montes et al., 2009). Some examples of this are when the natural heritage or the environment are destroyed as a result of rural activities, or when these economic resources are the result of child labor, which impacts the social and cultural heritage. Regarding resources and the way they are spent, it is important to highlight that earning more money does not necessarily mean that the quality of life is going to improve. A household could increase its income, but if the family's head spends money on alcohol consumption instead of on other aspects, such as education, rural development will not be achieved (Schultz et al., 2002).
In rural territories, pluriactivity has become critical. Essentially, pluriactivity in economic heritage and patrimony is understood as alternative ways to earn money for the household. Pluriactivity can improve post-harvest activities, which add value to products and create different modes to commercialize these products (Pachón et al., 2016).
Monetary resources become indispensable when they are used as a way to strengthen other heritages, such as physical or human heritages. For instance, physical heritages are enhanced when the funds are spent to improve households (better floors, restrooms, and ceilings, among other things). Another example is when the funds are used as part of collective action to improve post-harvest infrastructure. Human heritage is strengthened when these funds are spent to improve education for children, healthcare, among others (World Bank, 2000).
Physical Heritage and Patrimony
Physical heritage and patrimony are imperative for improving the level of rural development. However, they have not been attended to in public policies in many developing countries due to the implementation of neoliberal dogmas. According to the neoliberal perspective, many investments in rural infrastructure must be focused on capitalist agriculture to improve competitiveness (Kay, 2009). Physical heritage and patrimony are essential elements for improving the quality of life and ensuring the respect of the rights of rural populations. For instance, roads and bridges are vital since they create access to other communities and markets. Hence, roads belong to the physical heritage, as well as health centers, schools, bridges, clean water, electricity services, among other things (Shen et al., 2012).
Governments of several developing countries have abandoned the construction of adequate infrastructure. According to The Global Competitiveness Report 2014-2015, the countries with the worst infrastructure are in Africa and Asia. Latin American countries, in general, are in the middle of the ranking (Corrigan et al., 2014). Besides the differences between developed and developing countries, the differences between rural and urban areas are significant because the preferences for investment are always prioritized for urban zones due to the population impacts.
We must also take into consideration the household infrastructure. In other words, the infrastructure that directly affects the quality of life for rural families is related to their homes, for example, access to clean water or restrooms. This aspect is narrowly related to economic heritage and patrimony because the individual use of the household incomes could improve household infrastructures (Shen et al., 2012).
Natural Heritage and Patrimony
Natural heritage and patrimony refer to biological resources. Some examples are water resources, landscape and land. Water sources include lakes, rivers, canals, and ponds. Landscapes consist of mountains, hills, plateaus and highlands. Finally, land comprises soil, alluvium and clay. It also includes biodiversity such as insects, birds, frogs, fish, flowers, plants, seeds, and trees as well as genetic resources and ecosystems. Weather is also taken into account through sun, rain, wind, air, and snow. Most human actions have severely damaged all these resources (Sun et al., 2019). This negative influence on natural patrimony has developed irreversible harm that currently impacts all of humanity.
We rely on the peasantry to manage all these shared resources and to use them based on ancestral knowledge. However, productive pressure and current policies do not support sustainable management. Recovering traditional ways to utilize these common resources will be beneficial for everyone. Natural heritage and patrimony managed with the ancestral knowledge of the peasantry could be a viable alternative for producing food for all humanity and for mitigating many effects of climatic change (Pachón et al., 2016).
Intangible Patrimony Cultural Heritage and Patrimony
Cultural heritage and patrimony are centered on identity but more importantly on creativity. This patrimony is reliant on acting according to traditions. Of course, spiritual and religious practices, as part of the connection with the world, belong to this patrimony (Desmarais, 2002). Unfortunately, neglectful policies have placed priority on commercial production, as opposed to peasant activities. Examples of this kind of cultural heritage are the traditional communal labor or 'minga', terrace farming, ancestral forms of cropping such as polyculture, ancestral pest control, and the barter system. In many places, these practices have been a means of survival for the peasantry (Declaration of Nyéléni, 2007). However, government policies, research preferences or non-governmental organizational practices, and cultural 'capitals' from hegemonic groups have been privileged over the traditions of the peasantry (Flora et al., 2015).
Human Heritage and Patrimony
Human heritage and patrimony could be described as the traditional knowledge of local people and the communities to which they belong. Education, formal and informal, is possibly the best means for the construction of human heritage. As a result of instruction and experience, people and their communities obtain "know-how", skills, and abilities. Therefore, they obtain new ways to address problems (Crawshaw et al., 2014). Traditional knowledge is perhaps one of the most important human patrimonies, especially in rural areas, even though it has not been adequately valued in many places. However, it is essential to understand that people cannot acquire this knowledge in schools and universities (Patel, 2009). Without a doubt, human heritage and patrimony must be transferred through tradition, which needs to be taught through formal and informal education to children and adults alike.
Social Heritage and Patrimony
Social heritage and patrimony dictate belonging to a society and the ways of interacting inside that society. Many relationships build roads that establish and strengthen social collaboration. Committed relationships are the cornerstone of social patrimony. We know that trust is fundamental for creating real participation in social networks, such as communal organizations. These organizations must generate collective actions for consolidating cooperation, improving the quality of the rural life, and ensuring respect for their rights, besides pursuing individual benefits (Dormaels, 2012).
Institutional Heritage and Patrimony
The institutional heritage can be understood as the net of formal and informal institutions and stakeholders that interact in rural areas. It also takes into account the rules that they develop, agree upon, and implement for regulating access to power and resources. Of course, these rules contribute towards improving the quality of life, and hence, they lead to rural development, by providing equitable participation for all the stakeholders involved, but primarily for those who have been traditionally excluded (Kay, 2009; Pachón et al., 2016).
These kinds of arrangements, which many times are informal, can be carried out through the involvement and empowerment of the stakeholders. Empowerment is the result of the interaction of all heritages and patrimonies described above. This interaction maintains a virtuous circle that ensures the improvement of the other heritages, while at the same time creates the ability to improve the quality of life through respect for the rights of rural inhabitants.
Heritages and patrimonies can also be analyzed from an economic/sociological point of view (Leibenstein, 1984; Biggart and Beamish, 2003). Sometimes, institutional arrangements between different stakeholders have been constructed by custom or tradition. These habits, routines, or conventions become part of the everyday practices and ways of life for the entire community, which must be adopted as part of normal behavior. In many cases, conventions correspond to the prevailing political-economic model. However, some of these habits play out in unusual ways, meaning that these conventions can become an alternative for many rural inhabitants.
Quality of life and respect for human rights
The final key point and main goal for rural development is the quality of life and respect for the human rights of the rural population; this is its simplest definition. Since there is great academic discussion over the definition of quality of life and human rights, for this discussion we will use the human scale of development. Quality of life could be understood as the satisfaction of every fundamental human need. This will happen through the increase of self-reliance and the articulation of different levels among populations: the environment, technology, globalization and local processes, individuality and community. Of course, the primary focus is on people, because fundamental human needs are measured through people's involvement, prioritizing both autonomy and diversity. It aims to transform people, who are often perceived as an object, into actors of development. Participatory democracy, constructed from the bottom up, stimulates real solutions for real problems, which can satisfy all fundamental human needs (Max-Neef et al., 1994).
To sum up, the peasantry must combine all their heritages and patrimonies with the purpose of improving the quality of life and ensuring that their rights are respected. The interaction of heritages creates the conditions under which the peasantry will be able to identify and satisfy their own fundamental human needs. This construction must take into consideration their beliefs, ideas, and meanings in order to better satisfy all fundamental human needs. This means that the peasantry must internally identify its needs according to the particular circumstances of each community. This concern is paramount because the generalization of problems and solutions has shown poor results in many rural places (Pachón et al., 2017a; 2017b).
Conclusions
Rural development has many characteristics of 'wicked problems', which is why we have evaluated and examined it from different viewpoints. As a result, stakeholders often complain or disagree about the proposed alternatives. That is why this paper considers all stakeholders' interests in rural matters. The current analytical framework, based on the idea of the heritages and patrimonies that peasantry hold, suggests a path where all heritages interact and, thereby, helps us achieve a better level of rural development.
The heritages and patrimonies of the rural small farmer interact inside the rural households, among rural families, and, finally, in rural territories. In all cases, the stakeholders must take possession of these heritages, mobilizing all their knowledge and traditions. In turn, it is important that society, as a whole, recognizes the importance of the peasantry and their heritages. When that recognition happens, reaching satisfactory rural development will be possible for all rural inhabitants.
However, the analytical framework of the heritages and patrimonies of the peasantry still has gaps to be filled. It is necessary to propose a methodology that validates the framework and measures the level of these patrimonies. The analytical framework requires some examples for the application of these indicators in rural territories with rural families. Regarding this concern, a question must be asked: What indicators can be used to measure the level of these heritages? Finally, we must ask: Do public policies allow the improvement of heritages and patrimonies? We also must take into account the involvement of all rural stakeholders while trying to tackle these concerns.
Commerce and Sentiment in Tales of Barbary Encounter: Cathcart, Barlow, Markoe, Tyler, and Rowson
A number of American sailors were taken hostage by Barbary Corsairs and held as slaves in North Africa in the years following the Revolutionary War. The crisis would ultimately lead to open warfare, but many Americans were optimistic that international commerce and common sympathy might overcome religious differences. This essay sketches the history of the Barbary conflict and considers three fictionalized accounts of Barbary encounter as secular conversion narratives, two of the three demonstrating how even despotic slaveholders could learn to embrace commerce and sentiment. Peter Markoe’s novel The Algerine Spy in Pennsylvania (1787), Royall Tyler’s The Algerine Slave (1797), and Susanna Rowson’s drama Slaves in Algiers (1794) suggest a certain openness to religious and national difference; however they are clearly about American concerns and, in fact, more committed to secularized Christian norms than their praise of common sentiment would suggest. In all three texts, Jews are excluded from the vision of common sentiment and made to symbolize what was cruel about commerce; it is my argument that they served as scapegoats for the American discomfort with its own failures of sentiment, evidenced most obviously by chattel slavery and the slave trade. These fictionalized tales of encounter hold up sentiment as the solution to all sorts of conflict. However, they also deploy sentiment, paradoxically, as a pre-biological marker of race, designating those beyond the union of sentiment—Jews—as somehow detached from the quality that makes people human.
go beyond the narrow limits of parochialism, opening corridors of cooperation as extensive as those of Atlantic trade. In this paper I will argue that the first American encounter with Islam, which culminated in a war waged to free the American hostages being held as slaves, was nevertheless widely seen as an opportunity to forge international bonds of commerce and common sympathy. American authors with no firsthand knowledge of the Barbary Coast represented North African pirates as possible converts: to Christianity, in some cases, but more importantly to democratic sentiments and free trade. After briefly sketching the history of the Barbary conflict at the close of the eighteenth century, I will read three fictionalized accounts of Barbary encounter as secular conversion narratives, two of the three demonstrating how even despotic slaveholders could learn to embrace commerce and sentiment. Peter Markoe's novel The Algerine Spy in Pennsylvania (1787), Royall Tyler's The Algerine Slave (1797), and Susanna Rowson's drama Slaves in Algiers (1794) suggest a certain openness to religious and national difference; however they are clearly about American concerns and, in fact, more committed to secularized Christian norms than their praise of common sentiment would suggest. In all three texts, Jews are excluded from the vision of common sentiment and made to symbolize what was cruel about commerce; it is my argument that they served as scapegoats for the American discomfort with its own failures of sentiment, evidenced most obviously by chattel slavery and the slave trade. These fictionalized tales of encounter hold up sentiment as the solution to all sorts of conflict. However, they also deploy sentiment, paradoxically, as a pre-biological marker of race, designating those beyond the union of sentiment-Jews-as somehow detached from the quality that makes people human.
Cathcart's intimate knowledge of the Barbary Coast led to a diplomatic career that spanned the presidential administrations of Adams, Jefferson, and Madison (Waller 139). One of his first official tasks as U.S. consul general was to deliver yet another ransom for other American captives held in neighboring Tripoli (1796-7). The payment was stipulated in the Treaty of Tripoli, which was drafted in part by the Connecticut wit Joel Barlow. Barlow's early epic of America-The Vision of Columbus (1787)-is largely ridiculed today, but an article he included in the English version of the treaty, endorsed by John Adams and unanimously approved by the Senate, is still considered a milestone: As the Government of the United States of America is not, in any sense, founded on the Christian religion,-as it has in itself no character of enmity against the laws, religion, or tranquility, of Mussulmen [Muslims],-and as the said States never entered into any war or act of hostility against any Mahometan [Muslim] nation, it is declared by the parties that no pretext arising from religious opinions shall ever produce an interruption of the harmony existing between the two countries. ("Barbary Treaties") How the article separating church from state made it into the treaty is unclear-it is not in the Arabic version. Curiously, it echoes lines that had appeared a decade earlier in Barlow's epic:
The task, for angels great, in early youth
To lead whole nations in the walks of truth,
Shed the bright beams of knowledge on the mind,
For social compact harmonize mankind.
The treaty was designed to protect American shipping in the Mediterranean and Atlantic; but its language links commerce to an epic vision of global cooperation. ii Whatever the poetic appeal of the word "harmony," it was a choice bit of diplomatic hyperbole. Barbary piracy was arguably the most difficult foreign policy problem faced by the Early Republic (Kitzen).
Corsairs had long preyed upon the Mediterranean and Atlantic shipping routes, as had European privateers. The major powers, as Cathcart complains in his memoirs, declined to mount a concerted effort against pirates because it was more convenient to use them in proxy fights against each other (Cathcart 129). New England merchants had benefited from British arrangements before the Revolutionary War and French protection during. With independence-and the growing conflict with France after the French Revolution-came vulnerability. Washington, Adams, and Jefferson were opposed to paying for protection-the saying "millions in defense but not a penny in tribute" stems from this time-but they disagreed about the desirability of maintaining a navy and lacked the revenue to build a strong one (Waller 137).
Adams and Jefferson did end up paying tribute. However, the Treaty of Tripoli failed because payments were slow in coming. Corsairs again started seizing American prizes. This time Jefferson declared war on Tripoli (Allison 22; Waller 138-39; Field 44). The war has been forgotten, but it was crucial for establishing American access to Atlantic shipping routes, and for testing American ideas about the relation of trade to democratic sentiment. The mission did not, however, begin well for the Americans. The U.S.S. Philadelphia and its 307 sailors were captured after the frigate ran aground on a sandbar off the Barbary Coast. To solve the crisis, Jefferson ordered the bombardment of the Port of Tripoli coupled with a land invasion led by the marines. The regency capitulated.
Public celebrations of the American victory were patriotic, but they were also remarkably secular-as secular as the treaty that had failed (cf. Wilson). The most significant hymn commemorating the battle is the one still sung by the Marine Corps: "From the halls of Montezuma/ To the shores of Tripoli..." Several American cities are named after the hero of the naval battle, Stephen Decatur, who set fire to the captured Philadelphia in a daring commando raid, and then personally avenged the death of his younger brother, who had been killed in action. Decatur's famous toast, "our country, right or wrong," also survives. Francis Scott Key commemorated the bombardment of Tripoli in a song which, revised a decade later, would express American defiance in the face of the British bombardment of Fort McHenry (1814). The title of the better-known version is The Star Spangled Banner. The prototype praises "the light of the star-spangled flag of our nation" for eclipsing "the Crescent, its splendor obscured" (Allison 205).
The Crescent was obscured, but American religiosity also seemed to be waning. Historians argue that the revolutionary period marked "a decline for American Christianity as a whole" and the rise of a "civil religion" (to use Robert Bellah's famous phrase) based on "a shared dedication to republican government and equal liberty" (Ahlstrom 365; see also Beneke 159). The secularization thesis needs to be qualified (as Ahlstrom himself does). American religiosity would go through a number of waves and fluctuations that would continue up to the present, and at the time of the Barbary conflict Christianity was changing, but it had not disappeared. There were of course ministers who insisted on describing the conflict in religious terms (Baepler 65-9; Waller 85). An admittedly extreme example of evangelical orthodoxy was Timothy Dwight, a former classmate of Barlow's who would become the eighth president of Yale and a leading figure in the religious revival known as the Second Great Awakening (1790-1870). iii Dwight characterized the fight against Barbary piracy as a crusade against the infidels (a term he also used to decry domestic opponents). Ironically, it was the man Dwight considered the most dangerous infidel of all-Pope Pius VII-who also celebrated American gains as a victory for Christendom, and the man he castigated as an atheist-Jefferson-who declared war in the first place (Allison xvi). iv Dwight had an audience but little direct political influence. The American civil religion, if not completely secular, was certainly disestablishmentarian. v The key political actors in the Early Republic separated church and state-against Dwight's will-in order to avoid sectarian conflicts. This harmonized with the turn towards personal belief advocated by the Evangelicals gaining prominence anyway. vi Properly considered, the separation of church and state did not lead to a diminishment but a shift in religious emphasis.
Faith became a matter of personal feeling; piety became more important than creed (Beneke 13; Abzug 37; Bell 17). There were limits to pluralism; in a moment I will explore how Jews were excluded from the community of sentiment. First I want to point out that the personalization of belief as feeling led to a re-sanctification of government in another form. The state that recognized a plurality of churches could command devotion to its own political ideals: democracy, based on self-evident truths and universal principles, became more than an institution; it was a conviction.
This conviction was pursued with missionary zeal. Bring commerce to despotism, Cathcart and Barlow believed, and those who begin to acquire wealth will want to join the community of free nations. Americans, in other words, were content to let the Barbary States remain Islamic as long as they converted to commerce. Two complementary beliefs were at work in this seeming acceptance of religious difference: one is the belief that people want to be free in a particular way; the other is that commerce brings them into common accord. Terry Eagleton describes this eighteenth-century platitude as "the ideology of so-called commercial humanism, for which the proliferation of trade and the spawning of human sympathies are mutually enriching" (56). The key figure often cited in commercial humanism is Adam Smith, author of both the Theory of Moral Sentiments (1759, first edition) and the more famous Wealth of Nations (1776). Smith is sometimes described as the prophet of self-interest, although his views were much more complicated than that. vii As Amartya Sen points out, Smith also believed that people are motivated by custom, reputation, and sympathy (187). Commenting on the importance Smith attached to considering moral questions from an impartial perspective, he notes that "Smithian reasoning thus not only admits but requires consideration of the views of others who are far as well as near" (126). This is a heavily debated issue. Sam Fleischacker points out that while Adam Smith intended his moral philosophy to be universal, he was nevertheless hard pressed to explain how values could be grounded beyond parochial community standards; Eagleton goes so far as to describe such morality as resembling a higher form of manners (31; see also Mullan 9).
I do not propose to take sides on the Smith debate, but I do want to point out that there is a large body of scholarship interested in tracing philosophical sentimentalism back to Smith as well as thinkers such as Hume, Shaftesbury, and Hutcheson; thinkers who countered Hobbes' bleak view of humanity-and the political absolutism he saw as the only possible check-with an appeal to the benevolence they claimed to be instinctual. A number of contemporary scholars have contributed to the analysis of what is now variously being called "American sympathy" (Caleb Crain), "the culture of sentiment" (Shirley Samuels), "the culture of feeling" (Michael Bell), and "sentimental democracy" (Andrew Burstein). They have done a thorough job of showing how sentiment both reinforced hierarchies of class, race, gender, and sexual orientation and "undermined the hierarchical assumptions of republican ideology [,] extend[ing] the category of the fully human to persons consigned to the margins: the poor, nonwhites, women and children" (Gilmore 608). The consensus seems to be that sentiment played a patronizing but ameliorative role from before the Revolution to after the Civil War.
The international perspective afforded by Barbary conflict complicates this picture by showing how sentimental democracy was married to sentimental notions of commerce. The picture needs to be complicated because this marriage was a difficult one. The actual practices of eighteenth-century trade, when placed under the microscope, are a far cry from the sentimental ideals used to justify them. The Early Republic wanted to bring free trade and democracy to a despotic region, but democracy was more despotic than it cared to admit. The president who waged war to free white slaves from North African masters was himself a slaveholder; and the same year that he ordered the bombardment of Tripoli (1804), he refused to recognize Haiti, a new republic established through slave rebellion. There is a rift between the language of sentimental commerce and the actual space of the transatlantic slave trade. The voices emerging from this rift-like that of the black slave Olaudah Equiano, whose servitude on English vessels carried him across the Atlantic and the Mediterranean-do not distinguish between slavery in European colonies and the Ottoman Empire, except to point out that in the latter freedom could be won through religious conversion (Equiano 124).
This disanalogy posed a challenge to the universal principles that were supposed to provide the framework for, but also be the expression of, natural sentiment. The United States might be no better than the Barbary States; in some ways it might even be worse. The nation that was pluralist in terms of religious freedom-because faith was more important than doctrine-was absolutist when it came to slavery. Slaves in America might escape or pass, but they could not win their freedom by converting to the civil religion. They were beyond the "union of sentiment"-Jefferson for instance distinguished between black and white ways of feeling (Levecq 53)-which is why Stowe's sentimental portrayal of a slave family in Uncle Tom's Cabin caused such a furor in 1852. The American civil religion personalized dogma as religious sentiment, but it also personalized slavery as the lack of feeling or its inadequacy.
Not everyone bought this argument. Some eighteenth-century writers invoked the obvious parallels between white slavery in Africa and black slavery in America to make abolitionist arguments. viii However, many writers were more concerned with defending the sanctity of American sentiment than with addressing the discrepancies between free trade and political freedom. Instead of arguing against the slave trade, they blamed the failure of the American vision on betrayers. The scapegoats were often renegades or converts to Islam, who in fact captained the corsairs in large numbers. ix Commercial humanism was charitable enough to bring the gospel of wealth to the unconverted; it held nothing but contempt for the deniers.
The other scapegoats were those traditionally libeled as the deniers of Christ, namely Jews. The United States had no credit in Europe because it wasn't paying off its revolutionary debt to France. It lacked the tax structure to raise revenue for ransom. When Barlow negotiated for the release of Cathcart's 100 American companions, the only way to pay the Dey was to borrow from what Barlow called the "Jew House" of Baccri (Todd 134; see also Baepler 100).

Commerce and Sentiment in Tales of Barbary Encounter: Cathcart, Barlow, Marko...
European journal of American studies, 9-2 | 2014
Ironically, the money Barlow borrowed to pay the Dey had been loaned to Baccri by the Dey himself (Todd 134). Barlow celebrated this as a minor victory, but he wasn't amused when the Dey used American tributes to outfit corsairs, or when he used American debt as a pretext to commandeer the frigate George Washington to carry his own tribute to Turkey (Field 41). Barlow's subordinates blamed this debacle on a Jewish conspiracy (Allison 172, 177). The explanations were far-fetched, but given the repressed religious underpinnings of commercial humanism this was hardly a surprising surmise. Jews were somehow different even when they fit in; they engaged in commerce not to promote the general welfare but to satisfy nefarious designs. x At least this was commonly believed (cf. Harap). In 1785, Patrick Henry expelled a group of would-be Jewish immigrants from the state of Virginia on the grounds that they might be spies for the Dey (Allison 3-4, 6-7). The Jews were not deported because of their beliefs but on suspicion of espionage-by authority of a Virginia law passed the same year that the state officially recognized religious pluralism (ibid).
In theory the union of sentiment was capacious enough to include Muslims, who could be converted to commerce and were not immigrating to North America anyway. Jews were seen as permanent renegades to sentiment and made to personify the problem that commerce is itself renegade. Free trade did not lead to political freedom but to inequality, and in fact depended on the repression of an underclass who labored without hope of emancipation. The Africans forced into slavery were beyond the pale of common feeling; in the fictional accounts studied below, Jews are made to personify this absence of feeling or its manipulation and abuse. In what follows I will analyze The Algerine Spy in Pennsylvania, The Algerine Captive, and more briefly Slaves in Algiers as penitential texts of the American civil religion. While they are optimistic about converting Muslims to commerce, they cannot ignore, although in two out of three cases they do not mention, the chasm between American ideals and practice. That chasm is the space of the Atlantic slave trade. Their response to this credibility chasm was to symbolically sacrifice Jews to atone for slavery. The sacrificial logic reveals the religious underpinnings of American sentimentalism. It also prophesied a global economic system that has diverged in significant ways from the visionary harmony between political freedom and free trade. The Atlantic revolutions that followed the American Revolution did not lead to a global union of sentiment, but to a center-periphery geography organized around the complementary discourses of nation and race. Whatever the universal moral intent of philosophical sentimentalism, the language of sentiment prefigured and helped provide the coordinates for this modern geography.
The Jews expelled as spies from Virginia were trying to make it to Pennsylvania, which had a reputation for being more welcoming to strangers (Allison 6). Two years after the incident Peter Markoe, a resident of that state, anonymously published his only work of prose fiction, The Algerine Spy in Pennsylvania (1787). xi The slim epistolary novel claims to be a collection of secret letters written by the spy Mehemet. This is an obvious ploy; Mehemet's letters are actually catalogues of republican virtues barely draped in the customary cloak and dagger (Markoe 58). There are comic-ethnographic moments in the tradition of Swift and Voltaire and pointing towards Irving and Twain, such as when the bemused Algerian tries to make sense of a tea party and a Quaker meeting (40-46). The closest the novel comes to actual espionage is a one-paragraph plan to conquer Rhode Island, place it in the hands of Daniel Shays-leader of Shays' Rebellion (1786-87)-and use it as a base to cruise the New England coast for virgins to send back to the Sultan's harem (100).
The editor of the recent edition of Markoe's novel suggests that the digressions are so amusing one hardly notices the absence of a plot. But what the novel lacks in narrative development is made up for in polemic; the letters are actually anti-Federalist broadsides aimed at the Constitutional Convention, which also took place in Philadelphia in 1787 (xviii). The main argument is an economic one: the spy advocates maintaining a proper balance between agrarian and mercantile economies. The emphasis on agrarianism resonates with his development as a character. The supposedly secret letters end up with an American publisher because Mehemet has converted to "FREEDOM AND CHRISTIANITY," as the final words of the novel put it in capital letters (122, 125). He purchases two farms and invites his former concubine Fatimah-who has rechristened herself Maria and married a Christian-to move in next door.
The final words of the novel establish the Christian coordinates for Mehemet's conversion to the universal values of freedom and harmony. Mehemet's conversion is not only religious and political but also sentimental, and it is precipitated by a betrayal and a friendship (3; 47). The spy doesn't relay much useful information to Algiers, but his correspondence, his travels, and his finances have to be channeled through two intermediaries-a Jew in Lisbon and another in Gibraltar. The Jew in Lisbon lies to the Dey about Mehemet's loyalties-his motives are unclear (53, 112). This makes it impossible for the spy to return-his property is handed over to a renegade-but the betrayal proves to be a blessing in disguise (116). In his heart, Mehemet always rejected tyranny, which is why he paid his debts and freed most of his slaves prior to departing Algiers (60-61, 69, 103-4). The man who counsels him to stay in Pennsylvania and aids him in recovering some of his wealth is the Jew from Gibraltar.
Mehemet's conversion to freedom and Christianity is really a homecoming, but it is only possible when he separates the good Jew from the bad Jew. They personify different economic principles typical of the lands in which they live: the one trading information and money in the same way that Lisbon depends on trading gold; the other putting down roots in the same way Britain gets rich-at least according to Markoe's analysis-by exporting domestic resources like tin (54-56). Too exclusive a reliance on mercantilism leaves a nation open to outside influence and corruption (109). The novel hammers this point home in a longish essay on how Jews make excellent merchants and diplomats, and most of them are loyal anyway, but they would be even more loyal and productive were they allowed to purchase real estate (14, 17-19, 55).
There is no evidence that Markoe knew about the Jews deported from Virginia, but his long digression on Jewish property suggests that their story would have paralleled Mehemet's, had he been allowed to write it. The moral seems to be that most people just want to settle down on farms and cultivate good republican virtues. Christianity is the umbrella term for those virtues, although the novel also endorses religious tolerance (14). Certainly doctrine is less important than the religious sentiment that is roughly equivalent to loving one's neighbors. We know little about Markoe's own background. His family, probably French in origin, immigrated to Pennsylvania from the Caribbean. He clearly identifies with Jews as merchants and travelers but sees the potential for betrayal in a mobile, commercial life. Sentiment grows up inside an international economy, embracing the universal principles that allow it to operate, but it must be allowed to take root in a community. Markoe's protagonist articulates a version of Hume's argument that neighborliness is a much stronger sentiment than "global fellow feeling" (Eagleton 55). The novel takes up the international theme of the Barbary conflict to offer a parochial alternative. "A philosophical history of commerce," says the protagonist, "would be an invaluable present to mankind." It is not as useful as cultivating your own garden, however, because "The ambition of princes and the avarice of merchants will never be restrained or regulated by systems" (55).
Markoe's parochialism makes it unnecessary for him to confront cultural difference or the problem of American slavery directly. This is a Pennsylvania novel written by an anti-Federalist; the wholly imaginary Algiers serves as a projection screen for problems-like slavery and despotism-that Markoe simply refuses to admit have anything to do with his state. Nevertheless the repentant spy articulates an argument often employed against Southern slaveholders. They don't work the land themselves, and so reap none of its personal benefits, instead developing into a class of lazy aristocrats.
This argument is a fair summary of the first half of Royall Tyler's The Algerine Captive (1797), which capitalized on the recent liberation of American hostages by purporting to be an actual account of captivity. xii The novel takes a clear stand on slavery by sending its northern protagonist south in search of work, first as a teacher and then as a doctor. Updike Underhill is an educated fool who can't understand why a Southern belle might take offense at an ode comparing her to "the ox eyed Juno," and the first half of the novel is a series of picaresque misadventures generally poking fun at northern pedantry and southern pretense (Tyler 46-51). The tone only becomes serious when Underhill assists a doctor in performing a successful cornea surgery. The blind man restored to sight continues to believe in the superiority of the tactile senses, however, insisting that a sensitive touch proves there is more difference between any two men's fingernails than between white and black men generally (39-42). The purpose of the medical interlude, like the parody framing it, is to ascribe any differences between the North and the South, or between whites and blacks, to misperception. Sentiment, like touch, is a common feeling that goes beyond mere appearances (cf. Pangborn).
The conclusion of the first half of the book is put to the test when Underhill, incapable of holding down a job, signs on as a ship's surgeon on the merchant vessel Freedom and then a slaver named Sympathy. It is on the second ship that he witnesses the horrors of the middle passage including torture, rape, and the sailors' cynical manipulation of the power of sympathy to dominate the captives. When the men go on a hunger strike, the sailors force them to eat by beating their wives and children in their presence: "though the man dared to die, the father relented, and in a few hours they all eat their provisions, mingled with their tears" (99). Underhill does his best to ameliorate the suffering, but only succeeds in earning the scorn of his shipmates (99). They abandon him to a corsair, and he experiences slavery himself-as well as the sympathy of some of the black captives liberated in the raid, who care for him because he has a good black soul in a white body (101).
This turnabout clearly informs an abolitionist project, although most scholars agree that the second half of the novel radically diverges from the first. I want to suggest, however, that the exploration of sentiment provides a common theme that goes beyond the critique of slavery. Tyler is most interested in describing what makes a community a home even when it does not live up to its sentimental ideals (because of the slave trade), and therefore can hardly hope to export them. He goes to great lengths to corroborate its descriptions of the Barbary Coast with the up-to-date information then available in English. However, the geographical exploration is really a pretext for Tyler's philosophical exploration of the negative side of sentiment. The preface mocks the growing interest in gothic fiction by eroticizing its effects: "Dolly, the dairy maid, and Jonathan, the hired man…amused themselves into so agreeable a terror, with the haunted houses and hobgobblins [sic] of Mrs. Ratcliffe [sic], that they both were afraid to sleep alone" (6). Feelings, as the blind man affirms, are more powerful than appearances, but this is precisely what makes them so useful in manipulating captives and readers.
There are two crises of feeling in the second half of the novel. The first occurs when Updike is almost converted by a kind Mollah, himself a convert to Islam, who "disdained the use of other powers than rational argument" to convince the protagonist to convert (130). Although Underhill "trembled for [his] faith, and burst into tears," it is the tears that save him from "sophistry" (ibid., 136). Religious sentiment is a guide to belief where religious reason fails. Feeling preserves Underhill's faith, but feeling alone cannot save his self-respect or his sense of community. As in the case with the African captives who are forced to eat on board the slave ship, it can be used to manipulate the powerless-even to sever them from social bonds.
The second crisis of feeling occurs when Underhill is forced to witness the public impaling of a Christian caught trying to escape. The terror he experiences convinces him at last that he is a slave. The passage is often, but erroneously, cited because the definitive Modern Language edition inadvertently drops a line, which I here add in brackets: "I will not wound the sensibility of my [humane fellow citizens, by a minute de-] tail of this fiend like punishment" (143; for the dropped line cf. the edition edited by Don Lewis Cook, 75-76). Contemporary critics, influenced by the modern philosophy of sentiment called trauma theory, often dwell on how the pun on tale/tail shows that pain is contagious or transferable to the body of spectators/ the body of the text. Their ideal is the negative union of sentiment called "bearing witness" (Felman and Laub). The sense of the restored passage, however, runs in the opposite direction. Pain can be inflicted or displayed to cause fear but this does not necessarily lead to sympathy. Underhill refrains from detailing the torture for his readers because the torture isolated him; his body is penetrated not with fellow feeling but with fear that does indeed succeed in destroying his "innate" love of liberty and his sense of solidarity with the other captives. Slavery, in other words, manipulates feelings to isolate the slaves from common sentiment.
I have suggested that Markoe sees sentiment, in the form of neighborliness, as a compensation for the tyranny of global commerce; the good Jew can be separated from the bad. Tyler, more skeptical, shows how sentiment can actually be the instrument of tyranny and sympathy can be withheld or manipulated for personal or economic advantage. His symbols, once again, are Jews, who are described in keeping with the tradition of gothic villains. A Jewish father and son are the most consistently evil characters in the novel. They are always in disguise; even their palace is hidden behind a beggar's façade. They at first seem to evince sympathy for Underhill, but this too is a deception. They aim to use him in a scheme to immigrate to America, but once this proves futile, they use his services as a doctor, steal his money, and sell him into slavery-not once but twice. In the end he is saved only by accident (220-224).
This is an unsatisfying resolution in a novel with many formal problems. The biggest problem is that the two halves-picaresque and gothic-threaten to come apart at the seams. The genre trouble mirrors the political trouble that Tyler cannot solve by putting his faith in sentiment. He believes in the importance of sentiment, but he also sees that it can be abused. His ambivalence makes him more attentive than many to the parallels between Barbary captivity and American slavery. He does not believe that American commerce will spread fellow feeling because he knows commerce depends on the brutal exploitation of feeling. Nevertheless, for want of a better alternative, he has his protagonist return home to his parents and his country. Underhill's closing words are the Federalist motto-BY UNITING WE STAND, BY DIVIDING WE FALL (226). This is sentimental, but defensive rather than missionary. xiii Tyler seems to advocate pledging allegiance to a flawed system rather than working to spread sympathy throughout the world. He worked closely with other Federalists supporting the Alien and Sedition Acts (1798), passed a year after the publication of his novel. He did not see any contradiction between isolationism, expressed in the novel as anti-Semitism, and abolitionism. A few years later he moved to Vermont, where he became a State Supreme Court Justice, and where, according to his biographer G. Thomas Tanselle, "His Federalism seems to have become so mild that he could take office amidst a general Republican victory, but for the same reason could not survive the Federalist victory of 1813" (Tanselle 34). He remains, like his protagonist, an ambiguous figure, pulled in contradictory directions by the ships named Freedom and Sympathy, and settling at home for lack of any better alternatives.
I now turn briefly to Susanna Rowson's drama Slaves in Algiers (1794), slightly breaking with chronology in order to draw attention to what might be described as the sliding scale of sentiment within the universal framework of commercial humanism. Markoe is parochial, equating sentiment with neighborliness; Tyler is an isolationist who sees sentiment as a necessary but inadequate line of defense against the depredations of a commercial system that sanctions slavery. Rowson is a missionary who believes that sentiment can democratize foreign despotisms. Her universalism serves the drama's overtly feminist purpose of representing women as agents-not merely symbols-of political freedom. To do so she adapts the then pervasive fantasy of white Christian women kidnapped into harems. xiv
The premise of the play is farcical, perhaps even pornographic, but Rowson's purpose is avowedly feminist. As Rowson's biographer Patricia L. Parker puts it, Rowson set her scenes in palace gardens and hid her characters behind fig trees, but she…seemed only slightly acquainted or little concerned with specific facts about the nation. She did know that olives and figs grew there and that Jews lived among the Moors without discrimination. Rowson's interest, however, lay not in Algeria itself but in the subject of tyranny in general and of tyranny of men over women in particular. She used this popular topic to make her first feminist statement on stage. (Parker 68)
The heroine of Slaves in Algiers, Rebecca, is an American mother, who through a comic-opera plot involving cross-dressing, disguises, and chance encounters, is reunited with her long-lost husband and daughter. Her perseverance teaches Algerians the importance of liberty, and sets up dramatic moments and an epilogue where strong female characters are able to express their commitment to liberty and convincingly decry the tyranny of men over women (Rowson 18, 72, 78). A slave revolt that makes the happy ending possible is ultimately welcomed by the Dey because he has learned the importance of liberty from the woman he wanted to make his concubine (74). Algiers is liberated according to the American plan, but ultimately it must consolidate its victory without American help. As Elizabeth Dillon points out, all of the interracial and interfaith marriages are avoided at the end. America and Algiers are analogous, but they remain distinct in terms of race and religion (Dillon 415). The most threatening figure of all, an English Jew who pretended to convert to Islam in Algeria, is exiled from both lands.
His daughter, whom he sold to the Sultan because he believed money to be more important than any sentimental bonds to family, nevertheless rejects marriage with a Christian to care for him (74). Sentiment wins out in the end, but it defines itself in terms of family and tribe. Rowson's women, who feel the need for freedom as strongly as men do, are qualified to be active political agents; nevertheless, the differences between races and religions establish themselves as boundaries to the more personal forms of affection. Rowson's powerful vindication of women's rights is also a plea for a strictly delineated geography of national and racial boundaries which leaves Jews nowhere to go.
The boundaries that reveal themselves in Rowson's comic resolution map on to the racial and national boundaries of a global system of trade, which spread across the Atlantic under the banner of universal liberty, but distributed wealth in terms of core nations and their colonies (cf. Wallerstein). Jews seemed to have no place of their own in this emerging economic and geographical system. This is part of the reason why in these two novels and a play, they serve as figures for that other diaspora of enslaved Africans who were treated, in the name of profit, as if they were beyond the boundaries of human feeling.
Cathcart and Barlow dreamed of international harmony. However, they wound up martyrs to sentiment that also seemed to have no place of its own. Cathcart ended his career (1818-1820) by surveying the newly acquired Louisiana territory and complaining about slavery to his superiors. Nobody listened, and his letters were not published (Waller 177). Barlow, who had helped free him, already lay buried in Żarnowiec, Poland, where he died of the pneumonia he caught while chasing Napoleon during his disastrous Russian retreat in 1812. Barlow had been dispatched (by James Madison) to negotiate yet another treaty, but what he saw convinced him that the international "harmony" achieved by revolution didn't rhyme commerce with sentiment, but slaughter with slaughter. Barlow's last poem, "Advice to A Raven in Russia," is a catastrophic description of the final equality of "every nation's gore" (rpt. in ). His vision of global harmony turned out not to have a jurisdiction; his grave is its cenotaph.
NOTES
i. In the penultimate paragraph of his Second Inaugural Address (1805), Jefferson employs the same phrase-"union of sentiment"-to argue that the nation had overcome bitter factionalism to unify itself through his election (Jefferson). He is referring to Federalists who, led by ministers including Timothy Dwight, referenced later in this essay, accused him of atheism. Jefferson does not claim that he has won the Federalists over-the Inaugural is surprisingly specific about the vitriol of the attacks-but he does think that the two parties can agree to disagree, this time within the framework of free speech. It is unclear who borrowed the phrase from whom-Cathcart wrote his memoirs years after his captivity, but he was sending official dispatches to Jefferson during his first term of office. Neither of them invented it, however. The first usage I have found is in Thomas Paine's American Crisis, no. 3, which describes how British oppression forged a "sentimental union" in the colonies reaching its fruition in the Declaration of Independence. Paine saw himself as a rationalist who lived according to universal principles rather than a sentimentalist with attachments to the particular. In American Crisis, no. 7, he argued, "my principles are universal. My attachment is to all the world, and not any particular part" (197). Whigs, on the other hand, would argue that this kind of abstract universalism sacrificed the concrete bonds of local human affection. However, Paine was more than willing to admit the importance of particular communities of sentiment within a framework of universal principles, as in the following excerpt from American Crisis, no. 9: "America ever is what she thinks herself to be. Governed by sentiment, and acting her own mind, she becomes as she pleases the victor or the victim" (231). In Cathcart, Jefferson, and Paine the union of sentiment is a way of imagining community as common feeling within a framework of universal laws.
ii. In the revised version of Barlow's epic, The Columbiad (1809), Columbus is granted a vision of a future world encircled by a commercial armada: "by fraternal hands their sails unfurl'd/ Have waved at last in unison o'er the world" (Barlow 193).
iii. Dwight preached a sermon in Connecticut on July 4, 1798, "The Duty of Americans at the Present Crisis," where he talked about the Christian victory against Islam ("Duty of Americans").
iv. "Dwight, a stalward Federalist among the stalwarts, was inclined to place Jeffersonians in the same category as infidels" (Cuningham 220).
v. "Eventually Dwight grew disillusioned about his country's peculiar role in world redemption, as disestablishmentarians and democrats gained power, but he did not despair of his faith's ultimate success. Always the evangelical activist, he merely reconceived his millennial army in less nationalistic formation. He and his associates looked to a worldwide union of evangelical Calvinists to propagate the Gospel and subdue Infidel conspirators in every locality" (Berk 116).
vi. What I am offering here is a version of the familiar, and to some degree no longer fashionable, argument that Protestantism led to secularization. Here it is in Peter Berger's version: "[Protestantism] only denuded the world of divinity in order to emphasize the terrible majesty of the transcendent God and it only threw man into total 'fallenness' in order to make him open to the intervention of God's sovereign grace, the only true miracle in the Protestant universe. In doing this, however, it narrowed man's relationship to the sacred to the one exceedingly narrow channel that it called God's word (not to be identified with a fundamentalist conception of the Bible, but rather with the uniquely redemptive action of God's grace-the sola gratia of the Lutheran confessions). As long as the plausibility of this conception was maintained, of course, secularization was effectively arrested, even though all its ingredients were already present in the Protestant universe. It needed only the cutting of this one narrow channel of mediation, though, to open the floodgates of secularization" (118). It is the argument of this essay that secularism was more religious than it admitted.
vii. Christine Levecq in Slavery and Sentiment: "The fact that Smith…authored both a moral study of sentiment and a book of liberal, free-market economics…suggests that his theory of individually negotiated emotional exchange is ideally suited to naturalize the individualism at the heart of his political philosophy" (21).
viii. What qualifies as the first American anti-slavery tract, The Selling of Joseph (1700) by Samuel Sewall, who was one of the judges at the Salem Witch Trials, invokes the Barbary comparison: "I am sure, if some Gentlemen should go down to the Brewsters to take the Air, and Fish: And a stronger party from Hull should Surprise them, and Sell them for Slaves to a Ship outward bound: they would think themselves unjustly dealt with; both by Sellers and Buyers. And yet 'tis to be feared, we have no other kind of Title to our Nigers" (Sewall 15). Benjamin Franklin wrote an ironic letter to the editor from the perspective of a Barbary ruler to lampoon the position of Southern slaveholders (Baepler 8).
ix. Barlow tried to blame delays in American payments on a renegade captain-Peter Lyle-who had originally shipped out with Cathcart; his letters of the period express remorse for spreading a rumor that could have cost the renegade his head (it did not) (Barlow 142). Lyle's defection seemed to place him beyond the pale of sentiment. The new millennium would be characterized by free trade and political freedom, and those who "turned Turk"-as the saying had it-were worse than unenlightened. They were deniers.
x. Curiously, Timothy Dwight played a role here as well. His Conquest of Canaan, which preceded Barlow's attempt to write the epic of America by a decade, was a religious allegory of American independence, but it also served as a blueprint for American missionary projects in Jerusalem, some of which advocated both the conversion and the emigration of Jews (Field 274).
xi. Markoe's biographer (Sister Mary Chrysostom Diebels) claims that The Algerine Spy was the first American novel (William Hill Brown's The Power of Sympathy and Susanna Rowson's Charlotte Temple both appeared in 1791, the latter published in England). This may be true, although it should be pointed out that Markoe did not invent the premise of an "oriental" spy who decides to settle in the west. The Algerine Spy was clearly modeled on Montesquieu's Persian Letters (1721), which describes a Persian spy who never makes it back to his homeland.
xii. The Algerine Captive is dedicated to David Humphreys, who was the American envoy to the region, stationed in Lisbon, and like Barlow known as one of the Connecticut Wits.
xiii. Gilmore: "The Algerine Captive, like Tyler's comedy The Contrast, integrates sentiment into the world of men. The novel politicizes affectivity so that it becomes an instrument of public virtue rather than the invitation to feminine self-indulgence deplored by critics" (637).
xiv. Historians agree that European women were rarely taken captive by Barbary pirates, and probably never an American, for the simple reason that women were seldom on ships. However, spurious captivity narratives such as those purporting to be by one Maria Martin and a Mary Velnet were extremely popular around 1800 (Baepler 147). The virtue that Velnet managed to preserve could be seen in all its naked glory on the frontispiece, which depicted her at the low point of her captivity, bare-breasted and chained in a dungeon (Baepler 148).
|
2019-05-21T13:07:27.363Z
|
2014-09-26T00:00:00.000
|
{
"year": 2014,
"sha1": "13be5bd597cce4b6f255096f7e1d859375baaf5e",
"oa_license": "CCBYNC",
"oa_url": "https://journals.openedition.org/ejas/pdf/10358",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3c662b13ab92424586157d86e1c17f934857ddc3",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
}
|
237755876
|
pes2o/s2orc
|
v3-fos-license
|
Repellent Effect of the Pandanus (Pandanus amaryllifolius Roxb.) and Neem (Azadirachta indica) Against Rice Weevil Sitophilus oryzae L. (Coleoptera, Curculionidae)
The aim of this study was to determine the effect of Pandanus (Pandanus amaryllifolius Roxb.) and Neem (Azadirachta indica) leaf powder on repellency, mortality, and grain weight loss due to Sitophilus oryzae. The study used a completely randomized design (CRD) with 7 treatments and 4 replications. The results indicate that the best treatment for inducing repellency was 10 grams of pandanus, with a repellency of 87.5%, while the best treatment for causing pest mortality, which also reduced the risk of rice weight loss due to Sitophilus oryzae, was 10 grams of neem, with a mortality of 76.25% and a rice weight loss of 3.14%. This research showed that neem leaf compounds are better at causing mortality of Sitophilus oryzae, while pandanus compounds are better at repelling it.
Introduction
Rice is the staple food for people in most parts of Indonesia. Various organisms can reduce the quality and quantity of rice. One of the pests found in rice that can cause a decline in its quality and quantity is the post-harvest pest S. oryzae [1][2][3][4]. The presence of S. oryzae in rice is extremely detrimental and causes damage, so proper management and handling are needed when storing rice.
According to the ICSA, Indonesia's rice production reached 54.60 million tons in 2019 [5]. The total production decreased by 7.76% from the total production in 2018, which reached 60.60 million tons. The Food and Agriculture Organization stated in 2017 that Indonesia, as a Southeast Asian country, risks losing 10-37% of its rice products after harvest, so control is necessary to minimize the risk of damage during the storage period.
Rice stored in warehouses globally suffers 5-10% damage in a relatively short time due to warehouse pests, especially S. oryzae [6]. The level of damage varies by country depending on the region, type of rice, climate, and duration of storage [3]. For instance, rice damage can reach 25-50% in Ukraine, 0.7% in Australia, 16% in Pakistan, 20-25% in India in 2010-2011, and 3-5% in Poland [6][7][8]. According to Phillips and Throne [9], the loss of stored products due to S. oryzae ranges between 9% in developed countries and 20% in developing countries. This shows the importance of controlling S. oryzae.
The use of synthetic fumigants causes a variety of serious problems, such as pest resistance, risks to human health, harm to the environment and to natural enemies of pests, and pollution [10][11][12]. The synthetic fumigants commonly used to control S. oryzae are methyl bromide and phosphine [13]. Recent research by Kim et al. [14] showed that S. oryzae intercepted in South Korean quarantine was resistant to methyl bromide (CH3Br) and phosphine (PH3).
It is therefore important to control S. oryzae using natural plant materials as bioinsecticide repellents, which will not negatively affect food safety. Among the alternative materials that can be used as bioinsecticides are neem and pandanus leaves. Neem leaves contain alkaloids, nimbidine, tannins, resins, and azadirachtin [15], while pandanus leaves contain alkaloids, flavonoids, polyphenols, terpenoids, steroids, hydrocarbons, and aldehydes [16][17][18][19]. According to Ahmad et al. [20], the bioactive compounds commonly used in plant pesticides are mostly steroids, alkaloids, tannins, terpenoids, phenols, flavonoids, and resins, which have insecticidal properties. The presence of these compounds led the researchers to study the repellent effect of the combination of pandanus leaf and neem bioinsecticides against S. oryzae.
Previous research evaluating the repellent effect of other botanical pesticides against S. oryzae was conducted by Auamcharoen et al. [21]; Akhtar et al. [2]; Yankanchi et al. [22]; Aref and Valizadegan [23]; Das et al. [24]; and Klys et al. [25]. Auamcharoen et al. [21] reported that the fastest repellent effect on S. oryzae, within just 5 minutes, was given by the methanol extract of Duabanga grandiflora at a concentration of 0.252 mg/cm2, which resulted in a 63% repellency rate; after 4 hours, the repellency rate reached 100%. Klys et al. [25] showed a 98% repellent effect against S. oryzae after 5 hours of exposure to a bioinsecticide from caraway essential oil at a concentration of 0.5%. Elgizawy et al. [26] evaluated the fumigant toxicity and repellent activity of Litsea cubeba essential oil and its two main active ingredients against S. oryzae, reporting a repellency of 81.83% at 4 hours after the start of the study. Fernando and Karunaratne [27] took a different approach, using bioinsecticides from Olax zeylanica leaf powder (a rural vegetable in Sri Lanka) at doses of 7 g and 1 g per 50 g of rice, which gave repellent effects against S. oryzae of 96% and 50%, respectively.
An assessment of the repellency of neem and pandanus leaf insecticides is therefore highly necessary. The aim of this study was to test the repellency of neem and pandanus leaves, combined and separately, applied as a powder to control S. oryzae. This research is expected to control S. oryzae in an environmentally friendly manner, reduce the risk of product damage, and contribute to scientific development.
Materials and Methods
The study was conducted under laboratory conditions at 28 ± 1 ˚C and 60 ± 5% relative humidity at the Department of Plant Protection, University of Jember. The research design was a completely randomized design (CRD) with 7 treatments and 4 replications. The treatments consisted of:
P0: Rice without leaf powder (control)
P1: Rice with 10 g of neem leaves powder
P2: Rice with 8 g of neem and 2 g of pandanus leaves powder
P3: Rice with 6 g of neem and 4 g of pandanus leaves powder
P4: Rice with 4 g of neem and 6 g of pandanus leaves powder
P5: Rice with 2 g of neem and 8 g of pandanus leaves powder
P6: Rice with 10 g of pandanus leaves powder
S. oryzae adults were reared according to a modified procedure of Klys et al. [25]: 50 S. oryzae (25 males and 25 females) were kept in a plastic jar filled with rice as habitat and food, covered with a saffron cloth, and left to infest the rice for 7 days; all the adults were then removed from the jar, and after about 35 days the newly emerged S. oryzae came out of the rice. Adult females and males of uniform age and size were then obtained by separating young (virgin) individuals, which have a slightly reddish-brown color. Males and females can be distinguished by body size, females being larger than males, and by the snout, which is longer in females than in males [28].
For application, the neem and pandanus leaf powders were weighed according to the treatment doses, namely 2 g, 4 g, 6 g, 8 g, and 10 g. The insecticide was put into a tea bag, which was then placed in a plastic cup filled with 100 g of rice and 20 acclimatized S. oryzae, covered with a saffron cloth. This was done as a new way of applying insecticides that exploits the volatile compounds of the materials, i.e., an inhalation-poison system. Repellency was observed every 2 hours during the day and every 4 hours at night for 48 hours.
Repellency was assessed from the insects that left the treated plastic cup and moved outside it into the jar. The repellent effect was evaluated as an emigration index, calculated as the percentage of emigrating individuals relative to the total number of individuals in the population, using the following formula [25]:
Repellency effect (%) = (number of emigrating individuals / total number of individuals) × 100
The percentage of mortality was calculated by comparing the number of dead insects after application with the number of tested insects, using the formula of Abdelatti and Manfred [29]. Mortality was recorded at 3-day intervals until no more insects died.
Mortality index (%) = (number of dead insects / number of tested insects) × 100
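Both indices are simple proportions; a minimal Python sketch (function names are our own, not from the paper):

```python
def repellency_percent(emigrated: int, total: int) -> float:
    """Emigration index: share of insects that left the treated cup."""
    return emigrated / total * 100

def mortality_percent(dead: int, tested: int) -> float:
    """Share of dead insects among all exposed insects."""
    return dead / tested * 100

# Example: 8 of 20 weevils emigrated and 5 of 20 died.
print(repellency_percent(8, 20))  # 40.0
print(mortality_percent(5, 20))   # 25.0
```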
The percentage of weight loss was calculated 30 days after insecticide application. The rice was sifted through a 40-mesh sieve, and the flour passing through was weighed on a digital scale. The calculation used the formula of Mehta and Surjeet [30]:

Weight loss (%) = (U × ND − D × NU) / (U × (ND + NU)) × 100

where U is the weight of uninfested grains (g), NU is the number of uninfested grains, D is the weight of infested grains (g), and ND is the number of infested grains.
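Using the variable definitions above (U, NU, D, ND), the standard count-and-weigh calculation can be sketched in Python with hypothetical numbers (not the study's data):

```python
def weight_loss_percent(U: float, NU: int, D: float, ND: int) -> float:
    """Count-and-weigh grain weight loss.
    U: weight of uninfested grains (g), NU: number of uninfested grains,
    D: weight of infested grains (g), ND: number of infested grains."""
    return (U * ND - D * NU) / (U * (ND + NU)) * 100

# Hypothetical sample: 100 sound grains weigh 50 g, 100 damaged grains weigh 40 g.
print(weight_loss_percent(50.0, 100, 40.0, 100))  # 10.0
```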
We investigated whether there were statistically significant differences in the repellent effect of different concentrations of neem and pandanus leaf powder on S. oryzae; the dependent variable was the insect emigration rate. An analysis of variance (ANOVA) and the Kruskal-Wallis rank test were applied, followed by Duncan's Multiple Range Test (DMRT) as a multiple comparison test [31]. The significance level was set at 0.05. The calculations were performed in Excel's statistics tools.
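The paper ran its tests in Excel; as a rough illustration, the Kruskal-Wallis H statistic can be computed with only the Python standard library (helper names are our own; the tie correction and the chi-square p-value are omitted):

```python
from collections import defaultdict

def average_ranks(values):
    """1-based ranks over the pooled sample; tied values share the mean rank."""
    positions = defaultdict(list)
    for i, v in enumerate(sorted(values), start=1):
        positions[v].append(i)
    return {v: sum(ix) / len(ix) for v, ix in positions.items()}

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples."""
    pooled = [v for g in groups for v in g]
    rank = average_ranks(pooled)
    n = len(pooled)
    rank_sum_term = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_sum_term - 3 * (n + 1)

# Two fully separated toy groups: H = 27/7
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6]), 3))  # 3.857
```

The resulting H is compared against a chi-square distribution with k−1 degrees of freedom to obtain a p-value.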
Results
The application of insecticides made from pandanus and neem leaf powder, and of their combinations, had a highly significant effect on the repellency of S. oryzae. In the analysis of variance (ANOVA), the calculated F for the 48-hour response was 155.85, greater than the critical F at 5% and 1% (2.57 and 3.81, respectively). Since the ANOVA was highly significant, it was followed by Duncan's Multiple Range Test (DMRT) at the 5% level, shown in Table 1. In the table, values followed by the same letter within a column are not significantly different; the treatments are P0 (control), P1 (10 g neem), P2 (2 g pandanus + 8 g neem), P3 (4 g pandanus + 6 g neem), P4 (6 g pandanus + 4 g neem), P5 (8 g pandanus + 2 g neem), and P6 (10 g pandanus).

Based on the DMRT results from 2 to 48 hours of observation, there were differences between treatments: the control differed very significantly from all other treatments. Differences also appeared in the change and increase of the repellency percentage of all treatments compared with the control. The control showed no increase in repellency after 4 hours of observation and remained at 17.5% until the end of the observation, while all other treatments continued to show increasing repellency up to 22 hours of observation. Figure 1 shows that the highest increase in repellency occurred in the first 2 hours of observation, with 40% in the combination of 8 g of pandanus and 2 g of neem (P5); however, from 10 to 48 hours the highest repellency was shown by the 10 g pandanus treatment (P6). In addition to repellency, insect mortality was analyzed: in the ANOVA, the calculated F for mortality was 169.15.

Based on the DMRT results at the 5% level for observations from 3 to 21 days after application, there were differences between treatments: the control differed very significantly from all other treatments. In the control, the mortality percentage did not increase from 3 to 21 days after application, whereas all other treatments showed increasing mortality up to 18 days after application (Figure 2). Figure 2 shows that the highest increase in mortality occurred 3 days after application, with 27.5% in the 10 g neem treatment (P1). The 10 g neem treatment remained the treatment causing the highest mortality until the end of the observation at 21 days after application, namely 76.25%. The combination of 2 g of pandanus and 8 g of neem (P2) also gave a fairly good mortality of 62.5% up to 18 days after application. The non-control treatment with the lowest mortality was the combination of 6 g of pandanus and 4 g of neem, at 43.75% at 21 days after application; an almost identical mortality, 45%, was observed for the combination of 8 g of pandanus and 2 g of neem. In Table 3, values followed by the same letter are not significantly different; the control differed very significantly from all other treatments. The 10 g neem treatment (P1) differed significantly from the combination of 4 g pandanus and 6 g neem (P3), the combination of 6 g pandanus and 4 g neem (P4), the combination of 8 g pandanus and 2 g neem (P5), the 10 g pandanus treatment (P6), and the control.
However, the 10 g neem treatment (P1) was not significantly different from the combination of 2 g pandanus and 8 g neem (P2). The percentages of weight loss can be seen in Figure 3. The percentage of rice weight loss shown in Figure 3 varied considerably. The highest loss occurred in the control, at 6.55%, equivalent to 6.55 grams of weight loss. The treatment with the least weight loss was 10 g of neem (P1), at only 3.14%. The combination of 2 grams of pandanus and 8 grams of neem was the second-best treatment, with a rice weight loss of only 3.40% of the initial rice weight, a difference of 0.36% from the 10 g neem treatment. This indicates that the 10 g neem treatment is the most effective in suppressing rice weight loss due to S. oryzae.
Discussion
The results of this study showed that the highest percentage of repellency at 8 hours after application occurred in the combination of 8 g of pandanus and 2 g of neem, at 61.25%, which was not significantly different from the 10 g pandanus treatment; at that time its repellency was still greater than that of the 10 g pandanus treatment, which repelled 56.25% (Figure 1). This is consistent with the results of a similar study by Das et al. [24], in which a combined insecticide of mahogany and neem produced 66.67% greater repellency against S. oryzae than the uncombined materials. In addition, Abdelatti and Manfred [29] stated that combinations are used to increase the effectiveness of insecticides, including their repellency to S. oryzae. The results are also in line with similar research using different insecticide base materials, in which the strongest repellent effect occurred within the first 5 hours after application: Auamcharoen et al. [21] reported that the fastest repellent effect on S. oryzae, within just 5 minutes, was given by a methanol extract at a concentration of 0.252 mg/cm2, which produced a 63% repellency rate, reaching 100% after 4 hours; and Klys et al. [25] showed a 98% repellent effect against S. oryzae after 5 hours of exposure to an insecticide from caraway essential oil at a concentration of 0.5%. The repellent effect is due to the volatile compounds produced by the insecticides, which make the insects stay away from the bioinsecticide source [32]. The insects avoid the insecticide source because the volatile compounds produce a distinctive odor that makes the insects try to save themselves, as shown in the results of this study.
The repellency of the 10 g pandanus treatment was the highest, at 70%, from 10 hours after application (Figure 1). The combination with the smaller neem dose, 8 g pandanus and 2 g neem, showed higher repellency than the combinations with larger neem doses. This differs from the results of Fernando and Karunaratne [27], who reported that the higher the dose, the greater the repellency: leaf powder of Olax zeylanica (a rural vegetable in Sri Lanka) at doses of 1 g and 7 g per 50 g of rice gave repellent effects against S. oryzae of 50% and 96%, respectively. However, our results agree with Klys et al. [25], who found that the repellent effect of an insecticide is not always proportional to the dose; sometimes low doses are more effective in controlling insects than excessively high ones. In their study, the repellent effect of caraway essential oil at doses of 0.5% and 0.1% was 98% and 100%, respectively. According to Dhakshinamoorthy and Selvanarayanan [33], this phenomenon is caused by differences in the materials used, which contain synergistic or antagonistic compounds, and by differences in formulation.
The greatest increase in repellency in this study occurred in the first 2 hours after insecticide application (Figure 1), showing that S. oryzae responds quite quickly to repellent insecticide sources. The highest repellency value at 2 hours after application was in the combination of 8 g of pandanus and 2 g of neem, at 41.25%. This shows that the combined base materials produced a better repellent effect on S. oryzae in the first 2 to 8 hours. Between pandanus and neem, the repellent capacity appeared greater for pandanus in this study, as seen in the combination of 8 g of pandanus and 2 g of neem and in the 10 g pandanus treatment; their percentages were also greater than that of the 10 g neem treatment, whose repellency at 2 hours after application was only 36.25%. This may be due to the terpenoid compounds of pandanus, which produce a stronger repellent effect on S. oryzae than neem [34,17].
The repellent effect tended to increase over time (Figure 1), but in this study the repellency no longer increased after 26 hours of observation. This is consistent with previous research by Das et al. [24]; Klys et al. [25]; Fernando and Karunaratne [27]; and Abtew et al. [35], who reported that the repellency of insecticides remains effective from 1 to 24 hours after application, after which the effect is no longer strong. This phenomenon occurs because insecticides evaporate over time, reducing their repellency to the target insects, in this case S. oryzae [36][37].
No further increase in repellent activity occurred after 26 hours of application in this study. The final percentages, in order, were: 10 g of pandanus, 87.5%; the combination of 8 g of pandanus and 2 g of neem, 73.75%; 10 g of neem, 67.5%; the combination of 6 g of pandanus and 4 g of neem, 58.75%; the combination of 4 g of pandanus and 6 g of neem, 57.5%; the combination of 2 g of pandanus and 8 g of neem, 53.75%; and the control, 17.5%. An interesting finding of this study was that the insecticides were still repellent after 24 hours, although less so than before 24 hours after application.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 6 July 2021 doi:10.20944/preprints202107.0123.v1
Regarding mortality, S. oryzae mortality in this study began 3 days after application, when the insects were 42 days old. The highest mortality at 3 days after application occurred in the 10 g neem treatment, at 27.5%; no S. oryzae died during the first 2 days after application. This differs from previous studies, such as Akter et al. [38], in which 100% of S. oryzae died within 72 hours of application of an insecticide extracted from neem-containing materials. Differences in the toxicity that causes mortality may be influenced by differences in the mode of action: previous studies used a contact poison, with the insecticide mixed into the rice that S. oryzae feeds on, so the toxic effect tended to be faster, whereas this study used an inhalation route, exploiting the distinctive odor of volatile compounds from the insecticide, so toxicity may take longer to cause insect mortality.
This phenomenon occurs because the toxic fumigant action of most insecticides penetrates the insect's body through the respiratory (inhalation) system [39]. Similarly, Fernando and Karunaratne [27] noted that evaporating volatile compounds can disrupt the physiological functions of insects by entering through the spiracles, interfering with breathing and causing death. It is also due to the response of insects to insecticides that block octopamine receptors, disturbing their bodies, producing an abnormal respiratory system, and causing death [40][41][42]. Insects that died were found lying on their sides with their legs bent stiffly.
The highest mortality percentage until the last observation in this study was still in the 10 g neem treatment, at 76.25% at 18 and 21 days after insecticide application, followed by the combination of 2 g pandanus and 8 g neem with 62.25%, 10 g of pandanus with 55%, the combination of 4 g of pandanus and 6 g of neem with 51.25%, the combination of 8 g of pandanus and 2 g of neem with 45%, the combination of 6 g of pandanus and 4 g of neem with 43.75%, and the control with 0% (Figure 2). This result is almost the same as that of Singh et al. [43], in which a neem-based insecticide caused 76.10% mortality of S. oryzae at a concentration of 0.5% per 100 g of rice. This may be because the compounds in neem (alkaloids, resins) are more lethal to S. oryzae than those in pandanus leaves [44]. Insecticides with a larger neem component caused higher mortality than pandanus or a combination of both. According to Dhakshinamoorthy and Selvanarayanan [33], the efficacy of a pesticide depends on its content of synergistic or antagonistic compounds, differences in formulation, temperature, and the origin of the plant.
Figure 2 shows that, apart from the control, the effect of the insecticides on S. oryzae mortality tended to increase from 3 to 18 days after application; however, from 15 days after application the increase in mortality tended to be slight and stable. This differs from the repellent effect described earlier: insecticides with more pandanus-based ingredients caused higher repellency than neem, whereas for mortality, insecticides with a larger neem component produced higher mortality than pandanus or a combination of both. Insects killed by the insecticides showed stiff bodies with bent legs, lying on their sides, which may be due to the insecticidal exposure. According to Ravi Dhar et al. [45], volatile organosulfur constituents of insecticides can enter through the cuticle or the spiracles and disrupt gas exchange in respiration, causing shortness of breath and death. It is also due to the response of insects to insecticides that block octopamine receptors, disturbing their bodies, producing an abnormal respiratory system, and causing death [40][41][42]. S. oryzae can thus die from exposure to the volatile compounds of the insecticides used.
Regarding weight loss due to S. oryzae, the results showed that the use of insecticides can reduce the risk of rice weight loss. As seen in Figure 3, the control treatment without insecticide had the highest percentage of rice weight loss, 6.55%. This is consistent with the research of Prakash and Rao [46], in which S. oryzae caused 5-25% weight loss within 30 days of infesting rice.
The percentage of rice weight loss depends on how long the rice is infested by S. oryzae. According to Okpile et al. [47], no rice variety is so far resistant to S. oryzae: of the 10 types of rice tested (royale stallion, mama royale, parboiled rice, mama gold, white rice, super eagle, indian rice, champion rice, abakiliki rice, and mama africa), none was resistant, and all experienced weight loss, averaging 19%. Gvozdenac et al. [48] also reported that S. oryzae caused a weight loss of 35.4% in wheat over 50 days of infestation. Such large weight losses occur in the absence of specific measures against S. oryzae, such as the use of pesticides.
The percentage of rice weight loss was quite different between the control and the insecticide treatments. The lowest rice weight loss occurred in the 10 g neem treatment (P1), which lost only 3.14%. The reduced weight loss indicates that the material or insecticide used is effective: good insecticides affect appetite, disrupt metabolism, and block the spiracles of pests, causing them to die [49], and high insect mortality leads to a small risk of rice weight loss. Sorted from least to greatest weight loss, the treatments were: 10 g of neem, 3.14%; the combination of 2 g of pandanus and 8 g of neem, 3.40%; 10 g of pandanus, 3.57%; the combination of 4 g of pandanus and 6 g of neem, 3.87%; the combination of 8 g of pandanus and 2 g of neem, 3.92%; and the combination of 6 g of pandanus and 4 g of neem, 4.32%.
The differences in effectiveness are caused by differences in the composition of the insecticides used [33]. The results indicate that the compounds contained in neem are better at preserving rice, as evidenced by the smallest weight loss compared with pandanus, the mixtures of both, and the control. According to Shannag et al. [44], neem contains alkaloid and resin compounds that have a more lethal effect on S. oryzae than the terpenoids contained in pandanus leaves.
Conclusions
In conclusion, based on the results of this research, the best treatment for inducing repellency was 10 grams of pandanus, with a repellency of 87.5%, while the best treatment for causing mortality and reducing the risk of rice weight loss due to S. oryzae was 10 grams of neem, with a mortality of 76.25% and a rice weight loss of 3.14%. This research should be continued by other researchers by isolating pure compounds from pandanus and neem leaves and formulating them in gas form, so that they become new products in the form of biodegradable fumigants. All of these efforts are expected to help reduce the risk of rice damage caused by S. oryzae.
|
2021-09-28T01:09:52.468Z
|
2021-07-06T00:00:00.000
|
{
"year": 2022,
"sha1": "401e98859f9e4c60a19f57f99324240f7c024f51",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202107.0123/v1/download",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "a25112178ab40202073e9d7e18a53fcbeaa5173e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
16030879
|
pes2o/s2orc
|
v3-fos-license
|
Clinical outcome for patients with dedifferentiated chondrosarcoma: a report of 9 cases at a single institute
Background Dedifferentiated chondrosarcomas consist of two distinguishable components: low-grade chondrosarcoma components and high-grade dedifferentiated components. Materials and methods Nine cases (4 males, 5 females) of dedifferentiated chondrosarcoma were treated in our institute. The average age was 58.6 (range, 37–86) years. The tumor location was the long bone in 7 cases (femur, n=5; humerus, n=1; tibia, n=1) and the pelvic bone in 2 cases. The average time from appearance of symptoms to treatment was 9.4 (range, 1–40) months. Results and discussion On plain radiographs, matrix mineralization was seen in all 9 cases (100%). Bone destruction was observed in 5 of 9 cases (56%), while pathological fracture was seen in one femur case (11%). Lung metastasis was observed in all cases (initially in 5 cases; during the treatment course in 4 cases). Surgery was performed in 8 cases, with local recurrence occurring in 2 of those cases (time to recurrence, 2 and 10 months). Chemotherapy was administered in 4 cases, but did not result in significant improvement. All 9 cases died of lung metastases, with a median survival time of 10 (range, 3.4-18.8) months. The presence of initial metastasis at diagnosis was a significant unfavorable prognostic factor. Conclusion The prognosis of dedifferentiated chondrosarcoma is dismal. With the lack of convincing evidence of the benefit of chemotherapy, complete surgical excision is the initial recommended treatment.
Background
Dedifferentiated chondrosarcomas consist of two distinguishable components: low-grade chondrosarcoma components and high-grade dedifferentiated components. This group of cancers was first described in 1971 [1]. The dedifferentiated components have varying features, including features of undifferentiated sarcomas, osteosarcomas, angiosarcomas, fibrosarcomas, rhabdomyosarcomas, leiomyosarcomas, and giant cell tumors [2]. Dedifferentiated chondrosarcomas are highly malignant with a very poor prognosis [3,4]. Surgery is the primary treatment modality, and chemotherapy has been adapted for use in selected cases with dedifferentiated chondrosarcoma [2,4,5]. In the current report, a series of dedifferentiated chondrosarcoma cases treated in our institute is examined, with a focus on identification of prognostic factors.
Materials and methods
Between 1996 and 2010, 9 cases of dedifferentiated chondrosarcoma were treated in our institute. All cases had been referred to our institute by nearby hospitals. Clinical information of these cases is presented in Table 1. The cases included 4 males and 5 females. The mean age at the first procedure was 58.6 years (range, 37 to 86 years). The tumor location was the long bone in 7 cases (femur, n=5; humerus, n=1; tibia, n=1). In the other 2 cases, tumors were located in non-long bone (pelvic bone in both cases). Biopsy was performed in all cases for the purpose of diagnosis. Surgery was performed in 8 cases, while 1 case received palliative care only.
Plain radiographs taken at the initial visit to our institute were assessed. They were evaluated with respect to matrix mineralization, bone destruction, and the presence of fracture. The presence of extraskeletal extension was assessed by magnetic resonance imaging (MRI). Surgical treatment was performed by either resection alone or resection followed by implantation of an endoprosthesis. Chemotherapy was administered to select cases.
Statistical analysis
The survival estimates were determined by Kaplan-Meier analysis, while the survival differences according to clinical parameters (lung metastasis, local recurrence, and location) were evaluated by the log-rank test. A P value of less than 0.05 was considered to indicate statistical significance.
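The product-limit (Kaplan-Meier) survival estimates described above can be sketched in a few lines of pure Python. The per-patient follow-up times below are illustrative stand-ins chosen to be consistent with the summary statistics reported later (all 9 patients died; survival 3.4 to 18.8 months, median 10 months), not the actual study data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates.

    times  : follow-up time for each patient
    events : 1 if death observed, 0 if censored
    Returns a list of (time, survival_probability) steps.
    """
    n_at_risk = len(times)
    data = sorted(zip(times, events))
    surv, steps = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_time = 0
        # Group tied event times together before updating the estimate.
        while i < len(data) and data[i][0] == t:
            at_this_time += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            steps.append((t, surv))
        n_at_risk -= at_this_time
    return steps

# Hypothetical follow-up times in months; every case died (event = 1),
# matching the all-fatal outcome of the series reported here.
times = [3.4, 5, 6.7, 9, 10, 10, 11.8, 15, 18.8]
curve = kaplan_meier(times, [1] * 9)
```

With no censoring the curve simply steps down at each distinct death time (the tie at 10 months produces a single double-sized step) and reaches zero at the last death.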
Results
Time from the appearance of symptoms to treatment ranged from 1 to 40 months, with an average of 9.4 months and a median of 3 months. All 9 cases presented with pain (100%). A palpable mass was detected in 3 of the 9 cases (33%), including 1 humerus case and 2 femur cases. Numbness was observed in 1 of the 9 cases (11%), a femur case (Table 2).
On plain radiographs, matrix mineralization was observed in all 9 cases (100%). Bone destruction was detectable in 5 of the 9 cases (56%). Pathological fracture was observed in 1 femur case (11%). On MRI, extraskeletal extension was seen in 6 of the 9 cases (67%). All of the cases with bone destruction on plain radiographs had extraskeletal extension on MRI (Table 2).
Eight of the nine cases were managed with surgery (89%), while the other one case received non-surgical palliative care only. This latter patient was an 86-year-old woman with dedifferentiated chondrosarcoma in the femur who refused to undergo amputation. Among the 8 cases who underwent surgery, 6 cases were given a wide surgical margin (75%), 1 case was given a marginal margin (13%), and 1 case underwent intralesional resection (13%). In the long bone cases, wide resection was performed in all 6 cases. In contrast, in the pelvic bone cases, 1 case was given a marginal margin, while the other case underwent intralesional resection (Table 1).
Chemotherapy was given to 4 patients following surgical tumor resection. The chemotherapy regimen consisted of adriamycin, ifosfamide, cisplatin, and methotrexate.
Lung metastasis was observed in all cases (initially in 5 cases; during the treatment course in 4 cases). In one case, bone metastasis was seen in addition to the lung metastasis. Time to lung metastasis ranged from 4 to 15 months, with an average of 9.5 months. Local recurrence following resection occurred in 2 of the 8 surgery cases, at 2 and 10 months. Both of these local recurrence cases had undergone amputation (Table 1).
The median survival time was 10 months, with a mean survival time of 10.1 months, ranging from 3.4 to 18.8 months (Table 1, Figure 1a). All 9 cases died of lung metastases. The presence of metastasis at diagnosis was a significant unfavorable prognostic factor (Figure 1b); the median survival time was 6.7 months in the cases with initial metastasis (Figure 1c). Patients with trunk tumors tended to have shorter survival (mean, 7.1 months; median, 5 months) than patients with extremity tumors (mean, 10.3 months; median, 10 months) (Figure 1d). Neither gender nor age was a significant factor. No significant difference in survival was noted between cases that received chemotherapy and those who did not. However, the median survival time was 11.8 months for the 4 cases who received chemotherapy (mean survival time, 12 months), while it was 9.1 months for the cases who did not receive chemotherapy (mean survival time, 7.8 months).
Discussion
Dedifferentiated chondrosarcoma is known to have a poor prognosis. Median survival has been as short as 6 months, and 5-year survival rates may be as low as 10% to 13%. Patients rarely survive for more than 2 years [3,4]. Consistent with the reported unfavorable prognosis, all current cases died of lung metastases, with a median survival time of only 10 months (range, 3.4 to 18.8 months). In two very recent reports, the median survival time was 7.5 months [6] and 1.4 years [5], and the overall 5-year survival rate was 7.1% [6] and 24% [5], respectively. The prognosis of dedifferentiated chondrosarcoma has shown a slow improvement trend over time, presumably due to earlier diagnosis and improved staging and treatment [5]. Metastasis, especially to the lung, is the most important treatment problem in patients with dedifferentiated chondrosarcoma [5][6][7]. In the current study, the presence of metastases at diagnosis was a significant unfavorable prognostic factor. Chemotherapy is an option for control of metastatic lesions. However, the value of chemotherapy for cases with dedifferentiated chondrosarcoma has not been supported [2,4]. In addition to the lack of convincing evidence of the benefit of chemotherapy, the toxicity of chemotherapy experienced by older patients with dedifferentiated chondrosarcoma generally rules it out as a standard treatment [5]. However, in a previous report, 2 out of 13 patients with dedifferentiated chondrosarcoma experienced a good response to chemotherapy, demonstrating more than 90% necrosis [5]. These results suggest a possible benefit of chemotherapy for some patients with dedifferentiated chondrosarcoma.
The prognosis of patients with dedifferentiated chondrosarcoma can be improved by an accurate preoperative diagnosis [8]. Characteristically on plain radiographs, a combined pattern composed of the aggressive parts of dedifferentiated chondrosarcoma components and the less aggressive parts of well-differentiated chondrosarcoma components suggests dedifferentiated chondrosarcoma [8,9]. Reflecting the aggressive nature of dedifferentiated chondrosarcoma, osteolytic lesions associated with cortical destruction have been reported in the majority of cases on plain radiographs [9]. In the current study, bone destruction was present in about half of the cases on plain radiographs, and all of these cases demonstrated extraskeletal extension on MRI. The presence of matrix mineralization, indicative of a cartilaginous tumor, accompanied by bone destruction of an aggressive nature may suggest a diagnosis of dedifferentiated chondrosarcoma based on plain radiographs.
It is important to distinguish between conventional chondrosarcoma and dedifferentiated chondrosarcoma, considering the greatly different prognosis [2]. A pathological diagnosis would be inaccurate if the biopsy sample contained just one component of dedifferentiated chondrosarcoma. Therefore, a carefully planned biopsy based upon MRI, assessing each component of dedifferentiated chondrosarcoma, is essential. Even if only one component (either a conventional chondrosarcoma component or a dedifferentiated component) is detected in the pathological sample, possible dedifferentiated chondrosarcoma should be considered in cases in whom plain radiographs show matrix mineralization with a destructive aggressive feature.
Local control by surgery does not appear to be related to prognosis [5][6][7]. This observation seems to emphasize the importance of not only local control, but also control of metastasis, which is directly associated with the prognosis. However, surgical treatment for local control is still an important procedure for dedifferentiated chondrosarcoma. Local recurrence has been reported in up to 50% of cases after excision, with much better local control when there is adequate, or wide, resection [3][4][5]. In the current study, local recurrence was seen in 2 out of 6 cases treated with wide resection, including 1 case in the tibia and 1 case in the humerus. Since the tibia and humerus are close to essential vessels, the creation of an adequate surgical margin intended to achieve local control may be difficult.
Regarding the pathogenesis of dedifferentiated chondrosarcoma, no structural or numerical chromosomal aberrations that are highly specific for dedifferentiated chondrosarcomas have been identified. However, some evidence has suggested clustering of breakpoints in specific regions of 6q13-22 and 9p21-24 in dedifferentiated chondrosarcoma [10]. High-grade dedifferentiated components have been shown to have a higher malignant potential than low-grade chondrosarcoma components, with increased proliferation as demonstrated by expression of Ki-67 and proliferating cell nuclear antigen [11]. The plasminogen activator system is a key regulator of invasion and tumor angiogenesis. Plasminogen activator inhibitor 1 (PAI-1) acts as an inhibitor of tissue-type plasminogen activator (tPA) and urokinase-type plasminogen activator (uPA) [12]. In dedifferentiated chondrosarcoma, high-grade dedifferentiated components display diffuse coexpression of t-PA, u-PA, and PAI-1 [13]. Matrix metalloproteinases (MMPs) are a family of zinc-dependent endopeptidases that are principally involved in the breakdown of the extracellular matrix, as well as in tumor angiogenesis [14]. Upregulation of MMP2 and MT1-MMP has been reported in high-grade malignant cartilaginous tumors, as well as in the high-grade dedifferentiated component of dedifferentiated chondrosarcoma [15].
Conclusion
The current study describes a series of dedifferentiated chondrosarcoma cases treated in our institute. The presence of initial metastasis at diagnosis was a significant unfavorable prognostic factor. The prognosis of dedifferentiated chondrosarcoma remains dismal. With the lack of convincing evidence of the benefit of chemotherapy for dedifferentiated chondrosarcoma, and no consensus on what constitutes standard treatment, its use in this patient population remains controversial. Complete surgical excision should be the initial treatment for patients with a dedifferentiated chondrosarcoma, with chemotherapy reserved for palliative purposes.
Competing interests
There are no competing interests.
Authors' contributions
KY and AS drafted the manuscript. KY, AS, YM, SM, and KH administered the treatment. KY, AS, YM, and YO participated in the design of the study. YI conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Wavelet-based techniques in MRS
This book intends to provide highlights of the current research in the signal processing area and to offer a snapshot of the recent advances in this field. This work is mainly intended for researchers in signal processing related areas, but it is also accessible to anyone with a scientific background desiring to have an up-to-date overview of this domain. The twenty-five chapters present methodological advances and recent applications of signal processing algorithms in various domains such as telecommunications, array processing, biology, cryptography, image and speech processing. The methodologies illustrated in this book, such as sparse signal recovery, are hot topics in the signal processing community at this moment. The editor would like to thank all the authors for their excellent contributions in different areas of signal processing and hopes that this book will be of valuable help to the readers.
Introduction: magnetic resonance spectroscopic (MRS) signals
A magnetic resonance spectroscopic (MRS) signal is made of several frequencies typical of the active nuclei and their chemical environments. The amplitude of these contributions in the time domain depends on the amount of those nuclei, which is then related to the concentration of the substance (Hornak, 1997). This property is exploited in many applications of MRS, in particular in the clinical one. The MRS spectra contain a wealth of biochemical information characterizing the molecular content of living tissues (Govindaraju et al., 2000). Therefore, MRS is a unique non-invasive tool for monitoring human brain tumours, etc. (Devos et al., 2004), if it is well quantified. When an MRS proton signal is acquired at short echo-time (TE), the distortion of spectral multiplets due to J-evolution can be minimized and the signals are minimally affected by transverse relaxation. Such signals exhibit many more metabolite contributions, such as glutamate and myo-inositol, compared to long TE spectra. Therefore, an MRS signal acquired at short TE presents rich in vivo metabolic information through complicated, overlapping spectral signatures. However, it is usually contaminated by water residue and a baseline which mainly originates from large molecules, known as macromolecules. As the shape and intensity of the baseline are not known a priori, this contribution becomes one of the major obstructions to accurately quantifying the overlapping signals from the metabolites, especially by peak integration, which is commonly used in frequency-based quantification techniques. Also, by seeing only the frequency aspect, one loses all information about time localization. A number of quantification techniques have been proposed, which work either in the time domain (see Vanhamme et al.
(2001) for a review) or in the frequency domain (see Mierisová & Ala-Korpela (2001) for a review). The time-domain based methods are divided into two main classes: on one side, non-interactive methods such as SVD-based methods (Pijnappel et al., 1992) and, on the other side, methods based on iterative model function fitting using strong prior knowledge such as QUEST (Ratiney et al., 2004; 2005), LCModel (Provencher, 1993), AQSES (Poullet et al., 2007), or AMARES (Vanhamme et al., 1997).
However, there also exist techniques that analyse a signal in the two domains simultaneously and are therefore more efficient than, say, the Fourier transform, which gives only spectral information. The result is a time-scale and/or a time-frequency representation, such as provided by the wavelet transform (WT) and the short-time Fourier transform (STFT). In addition, both transforms are local, in the sense that a small perturbation of a signal which may occur during the data acquisition will result only in a small, local modification of the transform. A number of wavelet-based techniques have been proposed for spectral line estimation in MRS, including the continuous wavelet transform (Delprat et al., 1992; Guillemain et al., 1992; Serrai et al., 1997) and the wavelet packet decomposition (Mainardi et al., 2002). Among the various possibilities, we will concentrate our discussion on the continuous wavelet transform (CWT) with the Morlet wavelet (MWT). All wavelet calculations have been performed by our own wavelet toolbox, called YAWTb (Jacques et al., 2007). Some of the experimental aspects have been reported in Suvichakorn et al. (2009). For the convenience of the reader we have collected in the Appendix the basic features and properties of the CWT. In the following sections, we will study the performance of the Morlet WT to retrieve parameters of interest such as resonance frequencies, amplitudes and damping factors, for nuisances or impairments generally encountered in in vivo MRS signals: noise, baseline, solvent, and non-Lorentzian lineshapes.
The Morlet wavelet transform
The wavelet transform (WT) of a signal s(t) with respect to a basic wavelet g(t) is

S(τ, a) = (1/a) ∫ s(t) g̅((t − τ)/a) dt = (1/2π) ∫ e^{iωτ} G̅(aω) S(ω) dω,

where S(ω) is the Fourier transform of the signal, a > 0 is a dilation parameter that characterizes the frequency of the signal (since 1/a is essentially a frequency), τ ∈ R is a translation parameter that indicates the localization in time, and G̅(aω) is the complex conjugate of the (scaled) Fourier transform of g(t). We can think of the basic wavelet as a window which slides through the signal, giving the information at instantaneous time τ. The window is also dilated by a, so that a small a corresponds to a high frequency of the signal, and vice versa. As a result, the WT becomes a function of both time and frequency (scale). For more details, see the Appendix.
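The transform can be discretized directly from its time-domain form. Below is a minimal pure-Python sketch; the function names, the 1/a normalization convention (conventions vary), and the toy signal are choices of this example, not part of the YAWTb toolbox used in the text, and the Morlet correction term is neglected, as it is throughout the chapter:

```python
import cmath
import math

def morlet(t, w0=5.5, sigma=1.0):
    # Morlet wavelet without the small correction term.
    return cmath.exp(1j * w0 * t) * math.exp(-t * t / (2 * sigma ** 2))

def cwt_at(signal, dt, tau, a, w0=5.5):
    """Wavelet coefficient S(tau, a) by direct discretized integration."""
    acc = 0j
    for k, s in enumerate(signal):
        t = k * dt
        acc += s * morlet((t - tau) / a, w0).conjugate()
    return acc * dt / a

# Toy signal: one damped complex exponential at 32 rad/s.
dt, ws, D = 1 / 800, 32.0, 1.0
sig = [cmath.exp((1j * ws - D) * k * dt) for k in range(1024)]

# The modulus is largest near the 'ridge' scale a_r = w0 / ws, so scanning
# a few candidate frequencies recovers the line position.
w0 = 5.5
ridge = max((abs(cwt_at(sig, dt, 0.5, w0 / w, w0)), w)
            for w in (16, 24, 32, 48, 64))[1]
```

Here `ridge` picks the candidate frequency whose scale maximizes the modulus at τ = 0.5 s, which is the signal frequency of 32 rad/s.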
A technique based on the continuous wavelet transform (CWT) was proposed by Guillemain et al. (1992). By exploiting the ability of the CWT to see the information in the two domains simultaneously, it can extract the information from MRS signals directly, without any decomposition or pre-processing, in order to quantify an MRS signal. The technique proceeds in two steps: (i) detection of the frequency of the peaks in MRS signals and (ii) characterization at each detected frequency. It can be described as follows.
At a particular value of a, the WT S_a(τ) ≡ S(τ, a) can be represented in terms of its modulus |S_a(τ)| and phase Φ_a(τ), namely,

S_a(τ) = |S_a(τ)| e^{iΦ_a(τ)},

with an instantaneous frequency

Ω_a(τ) = ∂Φ_a(τ)/∂τ.

Next, let us consider an MRS signal with a Lorentzian damping function, namely,

s(t) = A e^{−Dt} e^{i(ω_s t + ϕ)},

where D and ϕ denote the damping factor and the phase of the signal. Its WT follows accordingly. For a Morlet function scaled by a dilation parameter a (we omit the negligible correction term, see Eq. (A.9)), namely,

g(t/a) = e^{iω_0 t/a} e^{−t²/(2σ²a²)},

it can be seen that the modulus of S(τ, a) is maximum, i.e., ∂S(τ, a)/∂a → 0, when ∂G/∂a → 0. Given that a > 0 and the assumption that ω_s ≫ D, the maximum can be found along the scale a_r = ω_0/ω_s (this is called a horizontal ridge), which then gives

S_{a_r}(τ) ∝ e^{−Dτ} e^{i(ω_s τ + ϕ)},

which is identical to the signal s(t) multiplied by a coefficient depending on the still unknown D. Consider the modulus of the Morlet wavelet transform (MWT) along a_r: since it decays as e^{−Dτ}, the damping factor follows as

D = −∂/∂τ ln |S_{a_r}(τ)|.

Since S_{a_r}(τ) is a function of time, the derived D is also a function of time. This is beneficial for analysing signals that do not have a steady damping function. In addition, considering the phase of the MWT along a_r, namely arg S_{a_r}(τ) = ω_s τ + ϕ, we also have Ω_{a_r}(τ) = ω_s, as in Eq. (3). Strictly speaking, the instantaneous frequency at the scale a_r of the Morlet transform is ω_s. This can be observed in Figure 1, which shows that the instantaneous frequency intersects the line ω_0/a at a = ω_0/ω_s, where ω_s = 32 and 64 rad/s are the frequencies of the signal. The phase of the signal ϕ ∈ (−π, π) can also be derived from the phase of the WT, if needed. The property given in Eq. (12) is useful for analysing an n-frequency signal; it indicates the actual frequencies of the signal and the scale a that we should consider. In addition, if its frequencies are sufficiently far away from each other, so that G(aω) treats each spectral line independently (Barache et al., 1997), the amplitude at each frequency can thus be derived. When two frequencies are very close to each other (this also
depends on the sampling frequency), increasing the frequency of the Morlet function ω_0 can better localize and distinguish the overlapping frequencies. On the other hand, ω_s can be obtained iteratively by:
1. Initializing a = a_i at some value.
2. Calculating the instantaneous frequency, namely Ω_{a_i}.
3. Assigning the new value a_{i+1} = ω_0/Ω_{a_i}.
4. Repeating the process until a converges (to a_r = ω_0/ω_s).
Figure 2 illustrates an overlap of two frequencies and the derived instantaneous frequencies using the iteration method. The derived frequencies converge to the true frequencies within a few steps. Fig. 2. (a) A signal containing a component e^{i60t} and (b) its instantaneous frequencies when using the iterative method. Here σ = 1, ω_0 = 5.5 rad/s, F_s = 800 s^−1, l = 1024 points. (c) Comparison of the instantaneous frequencies by the non-iterative and the iterative method. The symbol • indicates an initial value of a.
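The four-step iteration above can be sketched as follows. For a single pure line the instantaneous frequency already equals ω_s at any reasonable scale, so the loop converges almost immediately; the discretization, helper names and toy parameters are this example's own choices:

```python
import cmath
import math

def wt(sig, dt, tau, a, w0=5.5, sigma=1.0):
    # Direct discretization of the Morlet wavelet transform (one convention).
    acc = 0j
    for k, s in enumerate(sig):
        u = (k * dt - tau) / a
        acc += s * cmath.exp(-1j * w0 * u) * math.exp(-u * u / (2 * sigma ** 2))
    return acc * dt / a

def inst_freq(sig, dt, tau, a, w0=5.5):
    # Omega_a(tau): finite-difference derivative of the WT phase.
    return cmath.phase(wt(sig, dt, tau + dt, a, w0) / wt(sig, dt, tau, a, w0)) / dt

# Signal with one line at 60 rad/s; start the iteration at a poor guess.
dt = 1 / 800
sig = [cmath.exp(1j * 60 * k * dt) for k in range(1024)]
w0, a = 5.5, 5.5 / 40          # initial guess corresponds to 40 rad/s
for _ in range(6):
    a = w0 / inst_freq(sig, dt, 0.5, a, w0)   # step 3 of the iteration
```

After the loop, ω_0/a has converged to the signal frequency of 60 rad/s.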
Gaussian White Noise
An in vivo MRS signal is always impaired by additive noise, which is usually assumed to be white Gaussian. This noise causes oscillations in the instantaneous frequency derived with the CWT representation, as illustrated in Figure 3, which shows the instantaneous frequency derived from a signal with a peak at a frequency of 32 rad/s with an additive Gaussian noise corresponding to a signal to noise ratio (SNR) of 10. In order to reduce this effect, Guillemain et al. (1992) suggested averaging in time the derived parameters, for instance Ω_a(τ), i.e., replacing it by its time average (1/T) ∫ Ω_a(τ) dτ over a window of length T. As can be seen in Figure 3, averaging in time reduces the noise effect on the derivation of the instantaneous frequency. One can see that averaging creates many steady points. At the scale a = ω_0/32, the instantaneous frequency is about, but not exactly, 32 rad/s. Here, the averaging time is 1.56 s. Figure 4(b) shows the evolution of the absolute frequency estimation error with respect to the averaging time. Increasing the averaging time is likely to decrease the estimation error, as illustrated in Figure 4(b). The same approach can be used to derive the instantaneous damping factor. The estimated instantaneous damping factor is also smoother and closer to the actual damping factor when time averaging is employed. Although the method described above should work at any value of a, there is a particular range of a that is meaningful, and should be wisely selected. As a rule of thumb, this range should not be far from the scale that maximizes the modulus of the Morlet WT.
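The benefit of time averaging can be illustrated with successive phase differences of a noisy single-line signal, a simple stand-in for Ω_a(τ); the noise level, seed and sample count are arbitrary choices for this sketch:

```python
import cmath
import random

random.seed(7)
dt, ws, n = 1 / 800, 32.0, 4000

# Noisy single-line signal with complex white Gaussian noise
# (standard deviation 0.1 per real/imaginary component).
sig = [cmath.exp(1j * ws * k * dt)
       + 0.1 * complex(random.gauss(0, 1), random.gauss(0, 1))
       for k in range(n)]

# Raw instantaneous-frequency estimates from successive phase differences.
# Each individual estimate is extremely noisy (the 1/dt factor amplifies
# the phase noise), but the time average is very accurate.
raw = [cmath.phase(sig[k + 1] / sig[k]) / dt for k in range(n - 1)]

mean_freq = sum(raw) / len(raw)   # time-averaged estimate, Eq. (13) style
```

The raw estimates scatter over tens of rad/s, yet the time average recovers the true 32 rad/s closely, which is the effect exploited by Eq. (13).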
Baseline
The baseline corresponds to contributions from large molecules, with a broad frequency pattern in the MRS spectrum. Thus, it becomes a major obstruction in the quantification of the metabolite contributions from the MRS signals. First, we simulate the baseline by cubic splines in order to study the performance of the MWT when a baseline is present. In the case of Figure 5, the simulated baseline has no effect on the instantaneous frequency derived from the WT. Then, we used a baseline modelled with 50 randomly distributed Lorentzian profiles with a large damping factor, compared to the signal-of-interest at 3447 rad/s, e.g.

B(t) = Σ_{k=1}^{50} A_k e^{−D_k t} e^{iω_k t}
is the baseline (see Figure 6). The first component of B(t) has the same frequency as the signal, in order to imitate the overlap between the baseline and the signal. It is found that the modelled baseline does not prevent an accurate estimation of both the damping factor and the amplitude derived from the Morlet WT, provided one waits until both the effect of the baseline and the edge effect (discussed in Section 4.1 below) have died out. In the example shown here, the waiting time is approximately 0.2 s.

Fig. 6. (c) Damping factor and (d) amplitude. The actual parameters are 10 s^−1 and 1 a.u. for the damping factor and amplitude, respectively (ω_0 = 100 rad/s, σ = 1). From Suvichakorn et al. (2009).

The MWT in Figure 6(b) tells us that the baseline affects only the beginning of the transform in the time (τ) axis, compared to the long, clear peak of our 3447-rad/s signal. This means that the baseline can be assumed to decay faster than the pure signal, and the method described should still be effective without removing the baseline beforehand. Such an assumption has been widely used in spectroscopic signal processing, where several authors have proposed truncation of the initial data points in the time domain, which are believed to contain a major part of the baseline. However, some information on the metabolites could be lost and a strategy for properly selecting the number of data points is needed (see Rabeson et al.
(2006) for examples and further references). Next, in order to study the characteristics of the real baseline by the Morlet wavelet, an in vivo macromolecule MRS signal was acquired on a horizontal 4.7T Biospec system (BRUKER BioSpin MRI, Germany). The data acquisition was done using the differences in spin-lattice relaxation times (T1) between low molecular weight metabolites and macromolecules (Behar et al., 1994; Cudalbu et al., 2009; 2007). As seen in Figure 7, the metabolite-nullified signal from a volume-of-interest (VOI) is obtained, and creatine is derived with the Morlet WT. Next, we multiply the simulated, normalised creatine by 0.5, 1, 1.5, and so on. For each of these values, we derive the amplitude and plot the result in Figure 9. The recovery of the (simulated) creatine at different amplitudes, after adding it to the baseline signal, reveals that the amplitude of the metabolite can be correctly derived using t = 0.4 s, whereas at earlier times (t < 0.2 s) the derived amplitude still suffers from the boundary effect (we will discuss this effect in Section 4.1). However, the metabolite signal is covered later by noise (t = 0.77 s), giving an inaccurate amplitude estimate. Therefore, the time to monitor the amplitude of the metabolite should be properly selected. Another data set of the baseline acquired at 9.4T, with a better signal to noise ratio and a better water suppression, shows similar characteristics (see Figure 10).
Solvent
In MRS quantification, a large resonance from the solvent needs to be suppressed to unveil the metabolites without altering their magnitudes. The intensity of the solvent is usually several orders of magnitude larger than those of the metabolites. The Morlet WT sees the signal at each frequency individually, therefore it can work well even if the amplitudes at various frequencies are hugely different, which normally occurs when there is a solvent peak in the signal. In order to illustrate this, the Morlet WT has been applied to the following signal:

s(t) = 100 e^{−8.5t} e^{i32t} + e^{−1.5t} e^{i60t} + e^{−0.5t} e^{i90t} + e^{−t} e^{i120t} + e^{−2t} e^{i150t},

as seen in Figure 11(a). This signal has an amplitude of 100 at 32 rad/s and 1 elsewhere. The high amplitude can affect other frequencies if they are close to each other. This is illustrated in Figure 11(b) when a Hann window is applied to the signal in order to separate each frequency.
Using the aforementioned method, the amplitudes of 1 are derived as 0.980, 0.911, 0.988 and 0.974, respectively. The errors range within 1.2–8.9%, without any preprocessing.
Non-Lorentzian lineshape
The ideal Lorentzian lineshape assumes that the homogeneous broadening is equally contributed by each individual molecule. However, imperfect shimming and susceptibility effects from internal heterogeneity within tissues lead to non-Lorentzian lineshapes in real experiments (Cudalbu et al., 2008). These effects are typically modelled by a Gaussian lineshape (Franzen, 2002; Hornak, 1997). Since the inhomogeneous broadening is often significantly larger than the lifetime broadening, the Gaussian lineshape is often dominant. If the lineshape is intermediate between a Gaussian and a Lorentzian form, the spectrum can be fitted to a convolution of the two functions (Marshall et al., 2000; Ratiney et al., 2008). Such a lineshape is known as a Voigt profile.
Next we will explore how the Morlet WT can deal with the Gaussian and Voigt lineshapes. Consider a pure Gaussian function modulated at the frequency ω_s, namely,

s_G(t) = A e^{−γt²} e^{iω_s t}.

Its Morlet WT is proportional to √a σ times a Gaussian integral, which can be computed explicitly. As a result, the Morlet WT at the scale a_r = ω_0/ω_s is

S_{G,a_r}(τ) ∝ e^{iω_s τ} e^{−γ′τ²}, where γ′ = γ/(1 + 2γσ²a_r²),

which is also a Gaussian function at the frequency ω_s. The width and amplitude of this new Gaussian function are functions of ω_s and of the width of the original Gaussian signal s_G(t).
Therefore, similarly to the process for the Lorentzian lineshape, the amplitude (A) and the width of the Gaussian function (inversely proportional to γ) can be obtained as follows:
1. Find ω_s = ∂/∂τ arg S_{G,a_r}(τ).
2. Find γ from the second derivative of ln |S_{G,a_r}(τ)|, which is constant in τ and yields the width of the transformed Gaussian (and hence γ).
3. Find A from the calculated ω_s and γ.
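Steps 1-3 rest on the fact that the log-modulus of a Gaussian ridge is quadratic in τ, so its second finite difference is constant. A small sketch on a synthetic envelope (the parameters A, g and t0 are arbitrary; in practice the samples would come from |S_{G,a_r}(τ)| and t0 from the location of the modulus maximum):

```python
import math

# Samples of a Gaussian envelope |S(tau)| = A * exp(-g * (tau - t0)**2),
# standing in for the ridge modulus of the Morlet WT of a Gaussian line.
A, g, t0, dt = 2.0, 30.0, 0.6, 1 / 800
mod = [A * math.exp(-g * (k * dt - t0) ** 2) for k in range(1024)]

# The second finite difference of ln|S| is constant, equal to -2*g*dt**2,
# so the width parameter follows from any interior triple of samples.
k = 400
d2 = math.log(mod[k + 1]) - 2 * math.log(mod[k]) + math.log(mod[k - 1])
g_est = -d2 / (2 * dt ** 2)

# The amplitude then follows by undoing the Gaussian at a known tau
# (step 3; here t0 is taken as known from the modulus maximum).
A_est = mod[k] * math.exp(g_est * (k * dt - t0) ** 2)
```

Because the log-envelope is exactly quadratic, the recovery is exact up to floating-point error; with real, noisy data one would average the second difference over many triples, as with the instantaneous frequency.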
On the other hand, consider the Morlet WT at the scale a_r = ω_0/ω_s of a Voigt lineshape,

s_V(t) = A e^{−Dt} e^{−γt²} e^{iω_s t}.

At the scale a_r, the Morlet WT of the Voigt lineshape is also a Gaussian function with the same width, but shifted in time, with an amplitude smaller than that of the Gaussian lineshape, and its instantaneous frequency is also equal to ω_s. Note that the scale a_r = ω_0/ω_s does not give exactly the maximum modulus of the WT. However, as seen in Figure 12, the moduli of the Morlet WT of a signal with a Lorentzian lineshape or a Gaussian lineshape (and also a Voigt lineshape) are maximal at the same scale a_r, provided that a ∈ R and ω_s ≫ D.
Figure 13 shows that the second derivative of the modulus of the Morlet WT can be used to describe the second-order broadening of the lineshape, no matter whether it is Gaussian or Voigt. In the case of a Voigt lineshape, γ actually gives back a Lorentzian whose damping factor is obtained by Eq. (10).
Kubo's lineshape
The interaction between the Lorentzian and Gaussian broadening of the lineshape depends on the time scale. For example, if the relaxation time (T_2) is much longer than any effect modulating the energy of a molecule, the lineshape will approach the Lorentzian lineshape. On the contrary, if T_2 is short, the lineshape is likely to be Gaussian. In order to account for this time scale, Kubo (1969) uses a so-called Gaussian-Markovian modulation (Eq. (22)). The parameter γ is inversely proportional to T_2 and ς is the amplitude of the solvent-induced fluctuations in the frequency. If α = γ/ς ≪ 1, the lineshape becomes Gaussian, whereas α ≫ 1 leads to Lorentzian. The width of the lineshape is ς²/γ. Solving Eq. (22) seems to be complicated, though it may be possible. However, it turns out that the maximum modulus of the Morlet WT of a Kubo lineshape at ω_s = 60 rad/s occurs also at the scale a_r = ω_0/ω_s, like those of the Gaussian and Lorentzian lineshapes. In addition, the instantaneous frequency is still able to derive ω_s, even better than for the Gaussian lineshape, as shown in Figure 14(a), although the amplitude is broader than those of the Lorentzian, Gaussian or Voigt profiles, as shown in Figure 14(b). The damping parameters can also be derived by the linear relation between ∂/∂τ ln |S_{G,a_r}(τ)| and γ, as seen in Figure 15 (at the scale a_r = ω_0/ω_s; we have put α = γ/ς, where γ and ς are the two parameters of the Kubo lineshape defined in Eq. (22)), whereas α is related directly to ∂²/∂τ² ln |S_{G,a_r}(τ)|.
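For numerical experiments, a commonly used closed form for Kubo's Gaussian-Markovian damping function is exp(−(ς/γ)²(e^{−γt} + γt − 1)); this expression is assumed here (it is not spelled out above), and the sketch below only checks its two limiting behaviours:

```python
import math

def kubo_envelope(t, gamma, sigma):
    """Kubo damping function, assumed closed form:
    exp(-(sigma/gamma)**2 * (exp(-gamma*t) + gamma*t - 1))."""
    g = gamma * t
    return math.exp(-(sigma / gamma) ** 2 * (math.exp(-g) + g - 1.0))

# alpha = gamma/sigma >> 1 (fast modulation): Lorentzian-like decay with
# effective damping D = sigma**2 / gamma (the motional-narrowing limit).
gamma, sigma, t = 100.0, 5.0, 0.5
lorentz = math.exp(-(sigma ** 2 / gamma) * t)

# alpha << 1 (slow modulation): Gaussian-like decay exp(-sigma**2 t**2 / 2).
gamma2, t2 = 0.01, 0.5
gauss = math.exp(-(sigma ** 2) * t2 ** 2 / 2)
```

At t = 0 the envelope equals 1, and in the two parameter regimes it tracks the Lorentzian and Gaussian envelopes respectively, which is the interpolation property exploited in the text.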
Limitations of the Morlet wavelet transform
In the previous section, the Morlet WT showed its potential for analysing an MRS signal by means of its amplitude and phase, in addition to its time-frequency representation. However, these techniques can be applied to well-defined lineshapes only. Another limitation is the requirement of a proper ω_0 that should distinguish the signal from the solvent, but should not introduce noise in the result. In this section, we will look further at some more limitations that prevent the use of the Morlet WT to quantify MRS signals directly.
Edge effects
Errors in the wavelet analysis can occur at both ends of the spectrum due to the limited time series. The region of the wavelet spectrum in which edge effects become important increases linearly with the scale a, thus it has a conic shape at both ends, as already seen in Figure 1(a) (see also the Appendix). The size of the forbidden region, which is affected by the boundary effect, varies with the frequency ω_0 of the Morlet wavelet function and the ratio between the frequency of the signal (ω_s) and the sampling frequency (F_s). Figure 16 shows that the size becomes larger for a large ω_0 and a low ω_s/F_s. In practice, the working region is chosen so that the edge effects are negligible outside it, and the characterization of the MRS signals should be made inside this region, disregarding the presence of the macromolecular contamination.

Fig. 16. Lines showing the width (in number of sample points) of the forbidden regions where the boundary effect becomes important, as a function of ω_0 (rad/s) and the ratio between the signal frequency (ω_s) and the sampling frequency (F_s). From (Suvichakorn et al., 2009).
Interacting/overlapping frequencies
If two frequencies of the signal are close to each other, the wavelet can interact with both of them at the same time. This was already observed in Figure 2(a). Barache et al. (1997) suggested the use of a linear equation system to solve the problem. In the sequel, the simulated N-acetyl aspartate (NAA) is used to illustrate how the problem could be solved. The spectrum of the NAA, shown in Figure 17(a), is composed of two different regions, the high, single peak (NAA-acetyl part) and a group of overlapping frequencies (NAA-aspartate part). By using a high ω_0 to separate the overlapping frequencies, the Morlet WT reveals that there are eight frequency peaks in the group, as seen in Figure 17. From (Suvichakorn et al., 2009).
factor of the single peak. The size and frequency of the oscillation depend on the number of neighbours of each peak and the spectral distance to these neighbours. A proper damping factor can be achieved by averaging these oscillations in time.
Next, we will try to derive the amplitude of each peak. Let us consider an MRS signal composed of n Lorentzian lines, s(t) = e^{-Dt} ∑_n s_n(t), where s_n(t) = A_n e^{i(ω_n t + φ_n)} and n = 1, 2, . . . is an indexing number. Its Morlet WT gives local maxima close to the scales a_1 = ω_0/ω_1, a_2 = ω_0/ω_2, and so on. Therefore, we can establish a systematic relation between S_{a_r} and s_n(t) at each scale. The value of |C_mn| decreases when the resonating peaks are well resolved (no overlapping frequencies); in fact, it goes to zero when |ω_m − ω_n| increases, independently of D. Also, |C_mn| decreases when ω_m is high. If C_mn is not negligible (overlapping frequencies), solving the linear equations gives the information for each s_n(t).

Fig. 18. (a) Damping factor derived by Eq. (10); (b) Amplitudes of the NAA-aspartate part, derived by the linear equations (with zero phase). From (Suvichakorn et al., 2009).
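The flavour of this linear-equation approach can be illustrated with a small Python sketch. It uses the closed-form Morlet response to a complex exponential (for the wavelet g(t) = e^{iω_0 t} e^{−t²/(2σ²)} with L² normalisation); the two test frequencies and amplitudes are arbitrary illustrative choices, not the NAA values:

```python
import numpy as np

sigma, w0 = 1.0, 5.5                    # Morlet parameters (as in Fig. 2)
freqs = np.array([55.0, 60.0])          # two close frequencies (rad/s)
amps_true = np.array([1.0, 0.7])        # illustrative amplitudes

def morlet_response(a, w):
    # Analytic Morlet WT of exp(i*w*t) at scale a (L2 normalisation):
    # sqrt(a) * sigma * sqrt(2*pi) * exp(-sigma^2 * (a*w - w0)^2 / 2)
    return np.sqrt(a) * sigma * np.sqrt(2 * np.pi) \
        * np.exp(-0.5 * (sigma * (a * w - w0)) ** 2)

tau = 0.0
scales = w0 / freqs                     # ridge scales a_m = w0 / w_m
# Coefficients that would be measured at the two ridges (both peaks contribute)
S = np.array([sum(A * np.exp(1j * w * tau) * morlet_response(a, w)
                  for A, w in zip(amps_true, freqs)) for a in scales])
# Interaction matrix: response at ridge m to a unit-amplitude peak n
C = np.array([[np.exp(1j * w * tau) * morlet_response(a, w) for w in freqs]
              for a in scales])
amps_est = np.linalg.solve(C, S)        # solve the linear system
print(np.abs(amps_est))                 # recovers the true amplitudes
```

Because the off-diagonal entries of C are not negligible for such close frequencies, reading the ridge moduli directly would mix the two amplitudes; solving the system separates them.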
The damping parameter D for the equations can be derived by Eq. (10); the overlapping frequencies may cause oscillations in the solution, but these can be smoothed by averaging in time.
There can be a bias in the estimation, depending on the number and distribution of the overlapping frequencies, e.g. the distance between neighbouring frequencies and ω_0. For the NAA (ω = 3447 rad/s), the bias is approximately 1% of its amplitude (in the time domain) when ω_0 = 200 rad/s is used. Note that Lorentzian lineshapes are assumed in these linear equations; the result is presented in Figure 18(b). In the case of non-Lorentzian lineshapes, the arbitrary damping function should be determined and taken into account to solve the equation.
Arbitrary lineshape
Let us consider a signal with an arbitrary damping function D(t), namely,

s(t) = A D(t) e^{i(ω_s t + φ)}. (23)

Its Morlet WT is defined by an equation with constants C_1 = e^{i(ω_s τ + φ)} 2πσ√a and C_2 = (√(2π))^{-1} e^{i(ω_s x + φ)}. When implemented (thus discretized), the equation above can be seen as the product of two matrices, namely, S = C_2 D G, and the damping function can be solved from these equations, where S is the matrix of the scaled wavelet coefficients, G is derived from the Morlet WT and the frequency of interest ω_s, and A is the unknown amplitude of the signal. For a combination of frequencies with the same damping function, dividing by |D(t)| should make it possible to compare the amplitude at each peak relatively.
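A simpler numerical sketch of the same idea, under the assumption (ours) that D(t) varies slowly compared with the wavelet width at the ridge scale: the modulus of the WT along the ridge is then proportional to A·D(τ), so dividing by the known constant recovers the damping function up to the amplitude. The signal parameters below are illustrative, not from the chapter:

```python
import numpy as np

sigma, w0 = 1.0, 100.0
Fs = 10000.0
t = np.arange(0.0, 1.0, 1.0 / Fs)
ws = 3447.0                                # signal frequency (rad/s)
D = np.exp(-((t - 0.5) / 0.15) ** 2)       # arbitrary (Gaussian) damping
s = D * np.exp(1j * ws * t)                # A = 1

a = w0 / ws                                # ridge scale
u = np.arange(-4 * sigma * a, 4 * sigma * a, 1.0 / Fs)
g = np.exp(1j * w0 * u / a) * np.exp(-(u / a) ** 2 / (2 * sigma ** 2))
# Morlet WT along the ridge; since g(-x) = g*(x), a plain convolution works
S = np.convolve(s, g, mode='same') / (Fs * np.sqrt(a))
# |S| ~ sqrt(a) * sigma * sqrt(2*pi) * A * D(tau) on the ridge
D_est = np.abs(S) / (np.sqrt(a) * sigma * np.sqrt(2 * np.pi))
```

Away from the boundary regions, D_est follows D(t) closely; the residual smoothing is the wavelet's own time window acting on the envelope.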
Working in a real-life environment
By real-life environment, we mean genuinely acquired data, either in vitro or in vivo, rather than simulated data. In that case, the ideal Lorentzian lineshape of individual peaks gets distorted.
To give an example, we show in Figure 19 the analysis of an in vitro creatine signal. We see that intermittent noise appears, in the form of many disrupted, horizontal bands in the WT. Thus the noise occurs for a while at some particular frequencies and then disappears. Such characteristics differ from Gaussian white noise, which usually appears as vertical bands in the WT. It is also possible, however, that Gaussian white noise of the same intensity is present for that duration. The analysis of this in vitro creatine signal shows that the frequency distribution at each peak is broad, and the almost stationary Gaussian damping factor indicates that the acquired signal has a lineshape close to that of the Gaussian function. Nevertheless, deriving the amplitude using the Gaussian assumption may lead to an inaccurate estimation. All the results presented here have been obtained with the Morlet wavelet, but they can easily be generalized to any analysing wavelet whose Fourier transform has a single maximum at ω = ω_0, or even to the Short Time Fourier Transform (STFT) (Delprat et al., 1992). An important fact is the so-called reproduction property. Indeed it may be shown that the orthogonal projection P_g from L^2(R^2_+, da dτ/a^2) onto the closed subspace H_g (the space of wavelet transforms) is an integral operator with kernel K. In other words, a function f ∈ L^2(R^2_+, da dτ/a^2) is the WT of some signal if and only if it satisfies the reproduction identity (A.11). For this reason, K is called the reproducing kernel of g. It is also the autocorrelation function of g, and as such it plays an essential role in calibrating the CWT (Antoine, 1994). Now the relation (A.11) shows that the CWT is enormously redundant (the signal has been unfolded from one variable t to two variables (τ, a)). Thus it is not surprising that the whole information is already contained in a small subset of the values of S(τ, a). An example of such a subset is the so-called skeleton, that is, the set of ridges,
which are essentially the lines of maxima of the modulus of the WT (in the case of a monochromatic signal, the ridges become horizontal lines a = a_r, as we have seen in Section 2). Another example is obtained by taking an appropriate discrete subset Γ = {a_j, τ_k} of the half-plane R^2_+, as is necessary in any case for the numerical evaluation of the integrals. However, for most wavelets g, the resulting family {g_(a_j,τ_k)} is never an orthogonal basis (for the Morlet wavelet, for instance, the kernel K is a Gaussian, thus it never vanishes). At best, it is an overcomplete set of vectors, technically called a frame, provided Γ contains sufficiently many points (Daubechies, 1992).
A.2. Localization properties and interpretation
The main virtues of the CWT follow from the support properties of g. Assume g and G to be as well localized as possible (compatible with the Fourier uncertainty principle). More specifically, assume that g has an 'essential' support of width L, centered around 0, while G has an essential support of width Ω, centered around ω_0. Then the transformed wavelets g_(τ,a) and G_(τ,a) have, respectively, an essential support of width aL around τ and an essential support of width Ω/a around ω_0/a. This behavior is illustrated in Figure 21, which shows the Morlet wavelet in the time and frequency domains, for three successive scales a = 0.5, 1 and 2, from left to right. Notice that the product of the two widths is constant (we know it has to be bounded below by a fixed constant, by the (Fourier) uncertainty principle). Remember that 1/a behaves like a frequency. Therefore:
• if a ≫ 1, g_(τ,a) is a wide window, whereas G_(τ,a) is very peaked around a small frequency ω_0/a: this transform is most sensitive to low frequencies.
• if a ≪ 1, g_(τ,a) is a narrow window and G_(τ,a) is wide and centered around a high frequency ω_0/a: this wavelet has a good localization capability in the space domain and is mostly sensitive to high frequencies.
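The constant time-bandwidth product can be checked numerically. A sketch for the Morlet wavelet at the three scales of Figure 21 (the grid sizes are arbitrary choices):

```python
import numpy as np

sigma, w0 = 1.0, 5.5
t = np.linspace(-40.0, 40.0, 2 ** 17)

def widths(a):
    """RMS time and frequency widths of the scaled Morlet wavelet g_(0,a)."""
    g = np.exp(1j * w0 * t / a) * np.exp(-(t / a) ** 2 / (2 * sigma ** 2)) / np.sqrt(a)
    p = np.abs(g) ** 2
    p /= np.trapz(p, t)
    dt = np.sqrt(np.trapz(t ** 2 * p, t))               # envelope centred at t = 0
    G = np.fft.fftshift(np.fft.fft(g))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))
    q = np.abs(G) ** 2
    q /= np.trapz(q, w)
    wc = np.trapz(w * q, w)                             # spectral centre ~ w0 / a
    dw = np.sqrt(np.trapz((w - wc) ** 2 * q, w))
    return dt, dw

for a in (0.5, 1.0, 2.0):
    dt, dw = widths(a)
    print(a, round(dt * dw, 3))    # the product stays constant across scales
```

The time width grows like aσ/√2 while the frequency width shrinks like 1/(√2 aσ), so their product is scale-independent, as stated above.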
Combining now these localization properties with the zero-mean condition and the fact that g_(τ,a) acts like a filter (convolution), we see that the CWT performs a local filtering, both in time and in scale. The WT S(τ, a) is non-negligible only when the wavelet g_(τ,a) matches the signal s(t); that is, it filters the part of the signal, if any, that lives around the time τ and the scale a.
Taking all these properties together, one is naturally led to the interpretation of the CWT as a mathematical microscope, with optics g, position τ and global magnification 1/a. In addition, the analysis works at constant relative bandwidth (∆ω/ω = constant), so that it has a better resolution at high frequency, i.e., at small scales. This property makes it an ideal tool for detecting singularities (for instance, discontinuities in the signal or one of its derivatives), and also scale-dependent features, in particular for analysing fractals.
A.3. Implementation questions
Faced with this new tool, one must begin by learning the rules of the trade, that is, one must learn how to read and understand a CWT (Grossmann et al., 1990). The simplest way is to get some practice on very simple academic signals, such as a simple discontinuity in time or a monochromatic signal (pure sinusoid). We note that it is natural to use a logarithmic scale for the scale parameter a. The visual effect is that the lines τ/a = constant are not straight lines, but hyperbolic curves; at the same time, the horizon a = 0 recedes to infinity (see Figure 22). The analysing wavelet g is supposed to be complex, so that we may treat separately the modulus and the phase of the transform. The scale axis, in units of ln a, points downward, so that high frequencies (small a) correspond to the top of the plots, and low frequencies (large a) to the bottom. The results are presented by coding the height of the function by the density of points (12 levels of gray, from white to black). The phase is 2π-periodic. When it reaches 2π, it is wrapped around to the value 0. Thus the lines of constant phase with value 2kπ are lines of discontinuity, where the density of points drops abruptly from 1 (black) to 0 (white).

(i) A discontinuity in time

The simplest signal is a simple discontinuity in time, at t = t_0, modelled by s(t) = δ(t − t_0).
The WT is obtained immediately and reads

S(τ, a) = a^{-1/2} g(a^{-1}(t_0 − τ)). (A.12)

The following features may be read off Eq. (A.12):
• The phase of S(τ, a) is constant on the lines τ/a = constant, originating from the point τ = t_0 on the horizon. These lines point towards the position of the singularity, like a finger.
• On the same lines of constant phase, the modulus of S(τ, a) increases as a^{-1/2} when a → 0, so that the singularity is enhanced. The effect is even more pronounced if one uses the L^1 normalisation. This is illustrated in Figure 22, which presents the modulus and phase of the WT of a δ function, using a standard Morlet wavelet (but the result is independent of the choice of g).
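For the Morlet wavelet, the pointing behaviour can be verified directly from Eq. (A.12): the Gaussian envelope is real, so the phase of S(τ, a) is ω_0(t_0 − τ)/a, which is the same at every scale along any line (τ − t_0)/a = constant. A tiny sketch (t_0 and the line parameter c are arbitrary choices):

```python
import numpy as np

w0, t0 = 5.5, 2.0

def wt_phase(tau, a):
    # Phase of S(tau, a) = a**-0.5 * g((t0 - tau)/a) for the Morlet wavelet
    # g(t) = exp(1j*w0*t) * exp(-t**2/2): the envelope is real, so the
    # phase reduces to w0 * (t0 - tau) / a.
    return w0 * (t0 - tau) / a

c = 0.3                                            # the line (tau - t0)/a = c
phases = [wt_phase(t0 + c * a, a) for a in (0.25, 0.5, 1.0, 2.0)]
print(phases)                                      # identical at every scale
```

All four values coincide (at −ω_0 c), which is exactly the 'finger' of constant phase converging on τ = t_0.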
The interesting point is that this behavior is extremely robust. For instance, the 'finger' pointing to a δ-singularity remains clearly visible when the latter is superposed on a continuous signal (even if the amplitude of the δ function is too small to be visible on the signal itself), or even in the presence of substantial background noise. Similarly, the discontinuity corresponding to the abrupt onset of a signal is readily identified with the CWT. We refer to (Grossmann et al., 1990) for several spectacular examples. This is the origin of the edge or boundary effects that we have encountered in Section 4.1. The first notion is that of the cone of confidence or cone of influence. Let the wavelet g vanish outside the interval I_g = [t_min, t_max]. Then, given a point t_0 in the support of the signal, the region in which it influences the WT is the cone τ ∈ aI_g + t_0 = [a t_min + t_0, a t_max + t_0]. Thus the region of influence increases linearly with a. The effect is clearly seen in Figure 1: the cones of influence of the two endpoints of the spectrum are the regions where the phase of the WT differs from that of a pure sinusoid (see (ii) below). This is the region to be avoided, as discussed in Section 4.1.
(ii) A single monochromatic wave
A.4. The discrete wavelet transform
Notice that the discretized CWT which is used in practice, including in the present text, is totally different from the so-called discrete WT (DWT). Indeed, orthogonal bases of wavelets may be constructed, but from a completely different approach based on the notion of multiresolution analysis.
We emphasize that the DWT is totally different in spirit from the CWT, either truly continuous or discretized, and they have complementary ranges of applications:
• In the CWT, there is a lot of freedom in choosing the wavelet g, but one does not get an orthonormal basis, at best a frame. This is a tool for analysis and feature determination, as in MRS or other problems where the scaling properties of the signal are unknown a priori, for instance in fractal analysis.
• In the DWT, one insists on having an orthonormal basis, but the wavelet is derived from the multiresolution analysis. This is the preferred tool for data compression and signal synthesis, and the most popular one in the signal processing community.
More radically, one may even say that the kind of problems treated here can be solved only with the CWT; the DWT is simply not adapted to the underlying physics, although it has been proposed for MRS (Neue, 1996). For instance, the algorithm for detecting spectral lines, as well as the ridge concept, rests upon a stationary phase argument. Similarly, the determination of fractal exponents exploits the scaling behaviour of homogeneous functions or distributions and the covariance properties of the CWT. All these notions are foreign to the DWT, which is more of a signal processing tool.
Fig. 2. (a) The MWT of y(t) = exp(i55t) + exp(i60t) and (b) its instantaneous frequencies when using the iterative method. Here σ = 1, ω_0 = 5.5 rad/s, F_s = 800 s^-1, l = 1024 points. (c) Comparison of the instantaneous frequencies by the non-iterative and the iterative method. The symbol • indicates an initial value of a.
Fig. 6. (a) The Fourier transform of a 3447-rad/s Lorentzian signal with baseline. The latter is modelled by large Lorentzian damping factors; (b) its Morlet WT and the derived parameters: (c) damping factor and (d) amplitude. The actual parameters are 10 s^-1 and 1 a.u. for the damping factor and amplitude, respectively. (ω_0 = 100 rad/s, σ = 1). From Suvichakorn et al. (2009).
Fig. 7. The signal of baseline + residual water (a) in the time domain and (b) in the frequency domain.
Fig. 11. (a) The Fourier transform of a signal with different amplitudes and the spectrum extracted by the Morlet wavelet and (b) by a Hann window.
Fig. 13. The Gaussian damping factor derived from the pure Gaussian signal and the Voigt signal considered in Figure 12.
Positioning Improving of RSU Devices Used in V2I Communication in Intelligent Transportation System
—In this work we present solutions which aim at enhancing the localization precision of the road side unit (RSU) devices which will participate in vehicle-to-infrastructure (V2I) communication in future autonomous driving and intelligent transportation systems (ITS). Currently used localization techniques suffer from limited accuracy, which is due to various factors, including noise, delays caused by environmental conditions (e.g. temperature variation) and differences in elevation between devices communicating with each other in the road environment. In the case of ITS applications, these factors can be the source of significant discrepancies between the real positions of the RSUs and their estimated values provided by the V2I system. The proposed techniques, based on various approximation methods as well as linear and nonlinear filters, make it possible to improve the localization accuracy, reducing the positioning errors by more than 90%.
I. INTRODUCTION
An intelligent transportation system (ITS) is a relatively new concept which embraces solutions whose aim is to optimize transport based on modern technologies. They include those in the field of artificial intelligence (AI) as well as information and communication technologies (ICT).
Functionalities considered under the framework of the ITS can be classified into several general classes. One of them comprises solutions responsible for providing appropriate, up-to-date information for passengers of public transport and car drivers. Another group aims at increasing traffic flow in urban areas in order to eliminate or reduce traffic jams, which can cause a significant reduction in pollution levels in the cities. Finally, one of the main ITS development directions is the vehicles themselves. It is expected that the majority of vehicles in the near future will be equipped with advanced driver assistance systems (ADAS). The future development of the ITS may also be understood as the development of autonomous vehicles which will enable traveling without human intervention. The purpose of the development of such vehicles is to improve road safety and ecological aspects.
Taking into consideration the last group of solutions, several essential challenges can be indicated here. As the car moves through the city or suburban/highway areas, the surroundings of the vehicle change dynamically. One of the main problems is the high complexity of the environment seen by the vehicle. This makes the algorithms responsible for travel safety complex as well. The complexity means, for example, the number of objects in the range of the vehicle's sensors, both moving and stationary, their trajectories, the obscuring of some objects by others, etc. One possible solution to the described problem is intelligent road infrastructure providing direct support for vehicle movement, in the form of special devices mounted at fixed points of the road and urban environment.
The support can be provided by the so-called vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication (V2X in short). For example, the V2I system facilitates the operation of traffic sign recognition (TSR) functions. In case of low visibility of traffic signs (TSs), the information passed to the vehicle wirelessly by devices (RSUs, road side units) associated with particular TSs, or groups of them, can significantly improve the performance of such functions. V2I techniques can be used to inform the driver about accidents ahead, bad road conditions, etc. In its most advanced form, a framework of RSUs participates in building the so-called model (map) of the environment of the vehicle. In currently used ADAS systems, the map of the environment is created by the vehicles themselves, which use their own on-board sensors (camera, radar, recently LiDAR) and appropriate algorithms. The V2I system can be treated here as an additional sensor which provides additional data, for example from "behind the corner".
One of the main challenges in the described problem is the need for very accurate localization of the RSUs by the passing vehicle. This can be based on a real-time localization system (RTLS). Such systems are frequently used in indoor confined areas (buildings) for various purposes. They are usually composed of two types of devices: moving devices (active markers/tags) and devices mounted at fixed points of buildings (transponders/anchors). Trajectories of the markers are determined and recorded by the anchors on the basis of multiple distance measurements between particular markers and the framework of anchors, supported by trilateration computation techniques [1].
In indoor systems, the positions of the anchors are usually well known. In the urban/road environment, on the other hand, the situation is different. The RSUs can be viewed as counterparts of the anchors, while the markers are the devices associated with the moving vehicles. Contrary to typical indoor applications, in the ITS it is the device mounted in the vehicle which determines its own position. Another difference is the lack of a fixed framework of anchors, as the moving vehicle is within the range of different RSUs along its path. Due to these differences, the computation techniques used to determine the relative positions of the vehicles have to be different as well.
We assume that the moving vehicle is able to calculate its own trajectory, within a given accuracy, on the basis of its own GPS unit and on-board sensors (yaw rate and velocity sensors). Due to noise as well as the low precision of the applied sensors, the computed trajectory is not precise either. Constant communication with the RSU devices, which, as we assume, know their own positions in the global coordinate system (GCS), increases the precision of the calculated trajectory of the car. The main issue here is how to precisely determine the trajectory in relation to the framework of the RSUs. This is the topic of the presented work.
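The on-board trajectory estimate mentioned above can be sketched as a simple dead-reckoning integration of the speed and yaw-rate signals. The function below is our own illustration (forward Euler), not the authors' implementation; a real system would additionally fuse this with GPS, e.g. in a Kalman filter:

```python
import numpy as np

def dead_reckon(v, yaw_rate, dt, x0=0.0, y0=0.0, heading0=0.0):
    """Forward-Euler integration of speed and yaw-rate samples into a
    2-D trajectory: a sketch of the vehicle's own trajectory estimate."""
    heading = heading0 + np.cumsum(yaw_rate) * dt
    x = x0 + np.cumsum(v * np.cos(heading)) * dt
    y = y0 + np.cumsum(v * np.sin(heading)) * dt
    return x, y

# constant 10 m/s on a gentle left curve (yaw rate 0.05 rad/s) for 10 s
dt, n = 0.01, 1000
x, y = dead_reckon(np.full(n, 10.0), np.full(n, 0.05), dt)
# the endpoint lies on a circle of radius v / omega = 200 m
```

Because sensor noise accumulates in exactly this kind of integration, the estimated trajectory drifts over time, which is why the external RSU framework is needed as a reference.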
II. STATE-OF-THE-ART STUDY
In this part we briefly present a state-of-the-art study in the areas related to the presented topic. As an example of a system which can benefit from the proposed method, we consider the traffic sign recognition (TSR) system, which is becoming a standard in the automotive industry, including the European New Car Assessment Program (Euro NCAP) [2]. Currently used systems of this type rely only on the on-board sensors (cameras) of a vehicle. Traffic signs (TSs) are then identified and recognized using various signal processing and artificial intelligence (AI) methods. The computation scheme is in general similar in each system of this type. Firstly, the TSs are identified in the images taken from the camera; then they are cropped to smaller images and normalized in terms of size. On the basis of a series of frames obtained from a single camera, or on the basis of stereo-vision cameras, the positions of the TSs in the environment of the vehicle are determined using, among others, the trilateration methods. The cropped and normalized TSs are then provided to an AI algorithm for identification.
A. Towards automotive TSR systems of the 2nd generation
The TSR systems currently offered on the market recognize only selected traffic signs, usually those related to speed limits, stop signs, etc. In the future, with the development of fully autonomous cars, one can expect that these systems will be able to recognize almost all TSs, similarly to a human driver. This will strongly increase the complexity of the implemented algorithms. A problem which can be frequently observed on the roads is the non-standard appearance of road signs (damage, coverage, lack of full exposure, suspension at non-standard height, etc.), as shown in Fig. 1. Additionally, taking into account the frequently observed dense arrangement of TSs over a given area, this can lead to various safety issues if the vehicle relies only on its on-board sensors.
Various efforts which aim at solving the problem of road sign visibility can be found in the literature. One of the proposed solutions is the inclusion of the TSs in a future system supported by V2I communication. In practice, it means equipping the TSs with RSU devices capable of transmitting relevant information to passing vehicles. A proposal of such a solution is described, for example, in [3], [4] and [5]. These solutions can be regarded as the next important stage toward the development of active traffic signs.
One of the problems here is how to properly determine the positions of the RSUs associated with their respective TSs in a situation when many TSs are located in close proximity to each other. An exemplary situation of this type is shown in Fig. 2. If the positioning/localization of a TS is not perfect, it can lead to false assignments of the RSUs to the TSs seen by the vehicle and thus to false behavior of the active safety (AS) system of the vehicle.
Precise positioning of the RSU devices will be of great significance for applying them as support in creating a dynamic map of the environment of the moving vehicle.
B. Applications of the RTLSs - desired parameters
The RTLSs are being developed and their localization accuracy is being improved around the world. The majority of these investigations focus on indoor applications [6], [7].
The desired ranging precision (requirements) always depends on the application for which a given system is designed. In some medical applications, even millimeter precision is mandatory. Such a situation takes place in motion capture systems which aim at recording the motion pattern of a disabled person with very high accuracy [8], [9]. Another example is a warehouse application, for which a precision of 20-50 cm is acceptable. An RTLS of this type is offered by the Ubisense Company [10], with reported localization errors at the level of 15 cm. This system was designed to operate over relatively large areas. The reported localization errors in indoor conditions are usually below 10 cm [11]. This, however, is achievable for relatively small distances of below 10 m. The required precision of the ITS will be different, depending on the target application of the ADAS. The localization precision is not the only parameter to be considered and optimized. In the above-mentioned indoor human motion capture systems, the most important problems include improving the ranging precision toward the submillimeter range and miniaturizing the markers to make them comfortable for the examined persons. As these devices are wearable, the energy consumption is also a crucial factor. On the other hand, such systems are used in the indoor environment, in which the temperature is at an almost constant level. Therefore, the robustness to wide-range temperature variation is rather less important. Normally, in systems of this type, the RTLS tags are affixed to moving objects and tracked by the transponders installed at fixed points of the environment, with precisely determined (and stored in memory) positions. The net of transponders may thus create a precise frame of reference for the moving markers. The transponders have access to a power supply (or can be battery operated) and therefore strong miniaturization is not a critical factor. Additionally, the framework of anchors can take a sufficiently long time to calibrate precisely.
When the RTLS is applied in the ITS, the situation is substantially different from typical indoor conditions. In the road environment, the network of devices communicating with each other covers a much wider area than in the described indoor systems. Also, the behavior of such systems in cities can differ from that in suburban areas, mostly due to different device densities in these areas. In suburban areas, a rather sparse network of RSU devices is expected, with a small number of devices in the range of the vehicle sensors. This creates some problems in terms of the localization precision, but in some situations it can also simplify the communication scheme (less interference).
C. RTLS-based solutions in Intelligent Transportation Systems
The use of impulse-radio ultra-wideband (IR-UWB) localization technology in an outdoor environment has already been proposed [12], [13], [14], [15]. This technology provides high channel capacity (i.e. the data is transmitted at a high rate), which makes it suitable for use in V2I communication [12]. Also, propositions to employ the IR-UWB technology in the positioning of objects on the road have been reported [13].
In the sparse and open networks which are expected in suburban areas, there will be only two devices communicating with each other in the worst-case scenario. One of them will be an on-board unit (OBU) of a moving vehicle. The second one, the RSU, will be mounted in a fixed position of the road infrastructure. The trajectory of the moving vehicle can be determined, with a relatively good accuracy, on the basis of the vehicle velocity, acceleration, yaw rate and other kinematic parameters. A single communication session between the OBU and the RSU will be quick enough to assume that both devices are in still positions during this period, which can be verified using only basic kinematic equations. Let us consider an example: if the distance between the communicating devices is as large as 100 m and the velocity is as high as 100 km/h (about 27.8 m/s), then the distance traveled during a single communication session does not exceed 20-25 µm. This value is far below the safety margin and thus can be neglected.
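A quick check of this kinematic argument (pure time of flight; any protocol overhead would add to the session time):

```python
c = 3.0e8                 # propagation speed, m/s
r = 100.0                 # OBU-RSU distance, m
v = 100.0 / 3.6           # 100 km/h expressed in m/s (~27.8 m/s)

t_session = 2.0 * r / c   # round-trip time of flight of a single session
d_moved = v * t_session   # distance travelled by the vehicle meanwhile
print(d_moved * 1e6)      # under 20 micrometres
```

Even at highway speed, the vehicle is effectively stationary on the time scale of one ranging exchange.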
III. ENHANCEMENT OF THE RSU POSITIONING - PROPOSED SOLUTIONS
In the investigated method, the vehicle measures its distance to the RSU frequently (a time-of-flight (ToF) approach). On the basis of a single measurement session, only the distance r can be determined, while the azimuth, and thus the position of the RSU, is not known. As a result, a circle with radius r is obtained. The RSU is situated somewhere on it. The vehicle is located at the central point of the circle. Theoretically, on the basis of only two measurement sessions (for different car positions) and trilateration computation, it is possible to determine the position of the RSU in relation to the vehicle.
In practice, various negative factors can interfere with the measurements. Among them are: delays caused by the communicating devices, noise resulting from imperfections of the vehicle's on-board sensors, the unknown height above the road surface at which the RSU device is mounted, etc. All these factors result in the measurements being subject to errors. Depending on the source of these errors, they can be either systematic or random.
The problem is illustrated by an exemplary, simplified trajectory of the vehicle in Fig. 3. The left diagram shows selected positions of the vehicle for which distance measurements are performed. The results for different delays introduced by both of the communicating devices are shown. Large black circles illustrate an ideal case, in which the delays are known (nominal values). In this case, all obtained circles (marked as "real distances"), after a correction by a known factor, intersect at a single point, which is the real position (x_R, y_R) of the RSU device.
Theoretically, based on the trilateration method, the position of the RSU device can be determined from the results of only two measurements, as mentioned above. However, under real conditions, unknown signal delays cause the resultant circles (marked as "measured distances") to intersect at other points (seeming positions). The right diagram in Fig. 3 shows the results for delays deviating from the nominal value towards both positive and negative values. Both types of deviations are shown here for illustration only. In a real situation, only one type will occur. Therefore, in the following figures only single circles are shown for each distance measurement.
The obtained points of intersection (A, B, . . ., J) for given values of delay form the area of uncertainty, with the unknown position of the RSU within it, as shown in the right diagram of Fig. 3. An uncertainty at the level of 10 ns can cause a localization error of even ±1.5 m, assuming the speed of light v_c ≈ 30 cm/ns and two-way communication. Assuming that an error of this magnitude occurs in a suburban area and that the considered system recognizes TSs, the error could be neglected, as the distance between any two TSs is assumed to be significantly greater than the error. In contrast, such an error would be too big to be neglected in urban areas or when the RTLS is used to build a dynamic model of the environment.
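The quoted ±1.5 m follows directly from the delay uncertainty: with two-way ranging, only half of the round-trip timing error maps into range. A one-line check:

```python
v_c = 30.0                      # propagation speed, cm/ns
delay_uncertainty = 10.0        # unknown device delay, ns
# two-way ranging: the timing error is split over the round trip
range_error_m = delay_uncertainty * v_c / 2.0 / 100.0
print(range_error_m)            # 1.5 (metres)
```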
A. Steps of the proposed enhancement algorithm
The proposed method for estimating the RSU position consists of several stages. In practice, it is an iterative method, where particular steps are performed cyclically and alternately for every new measurement session. The obtained data sets can be kept in memory as a whole or realized as a delay line (a shift register), as for example in finite impulse response (FIR) filters.
The method described below was implemented in the Octave environment and verified for different values of particular factors and different trajectories of the vehicle. In order to better model the real conditions expected during driving, the measurement values were artificially disturbed by noise with different amplitudes A, as presented below.
Stage 1: Distance measurements → data set with seeming positions of the RSU

For every new, i-th, measurement and the new resultant circle with radius r_i, new intersection points are computed using the circle determined in the previous iteration.
The objective of this stage is to compute the seeming position of the RSU device (i.e. spatial x and y coordinates). Let us denote the seeming position of the RSU as SPR.x, y. The determined SPRs can be expressed in the global coordinate system (GCS), which simplifies the computations. However, any coordinate system used consistently can be applied as well. The vehicle position can also be expressed in the GCS.
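Stage 1 reduces to intersecting two range circles. A self-contained sketch of that step (the helper name and the test geometry are ours, not the paper's):

```python
import numpy as np

def circle_intersections(p1, r1, p2, r2):
    """Intersection points of two distance circles (one per measurement).

    p1, p2: vehicle positions (GCS); r1, r2: measured ranges to the RSU.
    Returns the two candidate seeming positions (SPRs), or None if the
    circles do not intersect (e.g. because of measurement noise)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.linalg.norm(p2 - p1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)     # offset along the centre line
    h = np.sqrt(max(r1 ** 2 - a ** 2, 0.0))        # perpendicular offset
    mid = p1 + a * (p2 - p1) / d
    perp = np.array([-(p2 - p1)[1], (p2 - p1)[0]]) / d
    return mid + h * perp, mid - h * perp

# hypothetical RSU at (30, 10): ranges measured from two vehicle positions
rsu = np.array([30.0, 10.0])
v1, v2 = np.array([0.0, 0.0]), np.array([20.0, 2.0])
pts = circle_intersections(v1, np.linalg.norm(rsu - v1),
                           v2, np.linalg.norm(rsu - v2))
# one of the two returned candidates coincides with the true RSU position
```

As the text notes, each call returns a pair of candidates; the ambiguity is resolved only by accumulating SPRs over a curved trajectory.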
It is worth noticing that this method returns a pair of intersection points, so it is necessary to specify which of them reflects the real position of the RSU. Based on only two measurements, this is not possible. However, for a larger number of measurements, and thus a larger data set of calculated SPRs, the RSU position can be deduced. If the trajectory of the vehicle is not a perfectly straight line, then the points which are the real equivalents of the RSU position are spatially more focused and form an area which reflects a regular circle (or arc), as shown in Fig. 3. On the other hand, if the vehicle's trajectory were a straight line, then even a larger number of measurements would not resolve the ambiguity problem. Such a situation may happen in urban areas, where straight-line movements of the vehicle are common. A certain amount of curvature in the vehicle's movement is therefore needed. The described ambiguity problem may be solved in other ways, for example through the content of messages provided by the V2I system to the vehicle. The vehicles may also intentionally focus only on TSs on one of the road sides. The seeming positions of the RSU (SPRs) form an area resembling a circle with the radius r_SC. What makes the approximation task more difficult is the fact that only a portion of the circle is obtained. How well the points match the circle/arc depends on several factors. A good-quality image of the circle is obtained in the absence of noise and with sufficiently dense measurements. As already described, the objective is to obtain a high density of the SPRs relative to the azimuth angle. If the RSU is far ahead of the vehicle and the vehicle is approaching it, then the azimuth angle α does not change substantially, and thus the measurements can be less frequent in time. On the other hand, if the vehicle is in closer proximity to the RSU mounted at the side of the road, then the azimuth varies faster for a given velocity of the vehicle. The investigation results show that a Δα at the level of 10° is sufficient.
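The geometric core of this stage, finding where the new range circle intersects the previous one, can be sketched as follows. This is a generic two-circle intersection routine in Python rather than the authors' Octave implementation; the function name and coordinate conventions are illustrative assumptions.

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    """Return the 0, 1 or 2 intersection points of two circles.

    Each circle is centred at a known vehicle position (x, y),
    with the measured distance to the RSU as its radius r.
    The two returned points are the candidate SPRs."""
    d = math.hypot(x1 - x0, y1 - y0)          # distance between centres
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                              # no intersection (or coincident centres)
    a = (r0**2 - r1**2 + d**2) / (2 * d)       # distance from centre 0 to the chord
    h = math.sqrt(max(r0**2 - a**2, 0.0))      # half-length of the chord
    xm = x0 + a * (x1 - x0) / d                # midpoint of the chord
    ym = y0 + a * (y1 - y0) / d
    p1 = (xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d)
    p2 = (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)
    return [p1, p2] if h > 0 else [p1]
```

For two measurements the routine returns both candidates; as the text notes, the ambiguity between them can only be resolved over a longer, curved trajectory.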
The impact of the selected frequency of the measurement on the accuracy of the computed positions of the SPRs is illustrated in Fig. 4 for a constant velocity of the vehicle. In this case the measurements are performed at equal time intervals. In Fig. 4(a) the measurement density is too small, causing the image of the circle to be distorted. This leads to larger final errors, as marked with the arrow. For a better illustration, we set the noise amplitude to zero in this case.
In real situations the noise is non-zero and the SPRs are spread across the edge of the circle. In this situation the parameters of the circle (its radius, as well as the x and y coordinates of its centre) have to be determined. This is performed in the second stage of the proposed method. The tests were performed with noise of different amplitudes. Noise samples were generated according to the uniform distribution with the extreme values between −A and A.
Stage 2: Seeming positions of the RSU → data set with estimated positions of the RSU

Stage 1 provides the data set with calculated SPRs. Stage 2 introduces an approach which relies on estimating the circle parameters based on three selected points, which are assumed to be located on this circle. In the case of a high angular density of the distance measurements, particular SPRs can be located very close to each other. To minimize the impact of the noise on the estimation results, the points are selected so that they are not too close to each other. For the purposes of the presented method, existing methods of this type were adapted and some modifications were introduced. In this stage various operations can be performed, including sorting, filtering, and truncating. Multiple tests (10,000+) allowed us to determine a suitable method and its settings.

Fig. 1. Exemplary real road situations illustrating: (a, b) road signs deviating in appearance from standards or invisible, (c, d) untypical elevation at which a TS can be mounted (own source).

Fig. 3. Determining the RSU position on the basis of multiple measurements - a general idea.

Stage 3: Filtering over the data set of the estimated positions of the RSU

Stage 2 provides a new data set with computed intermediate seeming circles, representing estimates (x_E, y_E) of the real position of the RSU (x_R, y_R). The size of this set can increase with each new distance measurement or remain constant, as in the delay line used in filters. Depending on the amplitude of the noise, the obtained intermediate circles can differ significantly from each other, in terms of both position and radius. The objective of this stage is to find an appropriately averaged circle, with a supposed real position of the RSU.
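Stages 2 and 3 can be sketched as follows: a standard circumcircle computation estimates the circle parameters from three well-separated SPRs, and a simple component-wise median over the resulting estimates stands in for the averaging/filtering step. The median is an illustrative choice on our part; the paper's actual filtering method and settings were tuned experimentally and are not reproduced here.

```python
import math
from statistics import median

def circle_from_3_points(p1, p2, p3):
    """Stage 2: centre (ux, uy) and radius of the circle through
    three points assumed to lie on it (circumcircle formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are (nearly) collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy, math.hypot(x1 - ux, y1 - uy)

def filter_estimates(estimates):
    """Stage 3: robustly average a set of (x_E, y_E, r) circle
    estimates with a component-wise median."""
    xs, ys, rs = zip(*estimates)
    return median(xs), median(ys), median(rs)
```

Selecting the three input points far apart along the arc, as the text recommends, keeps the circumcircle computation well-conditioned in the presence of noise.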
An Introduction to the Comparative Study of Indian and European Philosophy
This text is the introduction to Čedomil Veljačić’s (1915–1997) doctoral thesis defended at the University of Zagreb in 1962 under the title Komparativno proučavanje indijske i evropske filozofije (Comparative Investigation of Indian and European Philosophy), which was never published. The author, who was a pioneer of comparative philosophical research in the region of Southeast Europe, assesses three separate fields connected with conducting comparative philosophy: archaeology, language studies, and philosophy, whilst concentrating on methodology (methodological criteria for the comparative approach and doxographic methods). He argues towards a general revision of the criteria posited for the study of the history of philosophy, but the sine qua non within the stimuli will still be the discovery of immediate and initial values that comparative philosophizing and an applied comparative method can offer through the doxographic method, so that the author’s study remains within the frame of a preliminary critical work meant to encourage a systematic discussion on comparative philosophy seen as a specific discipline in keeping with Paul Masson-Oursel. The issue of the comparative method in his previously unpublished study was applied to the study of European philosophy in relation to Eastern traditions of thought.
The expansion of archeological studies that took place in recent centuries challenged Hellenic culture and the aristocratically autochthonous position it held in the eyes of European scholars. Archeology enabled us to reevaluate tenets of objective research and increase our clarity. Initially, the operating hypotheses for a reconstruction of antiquity were restricted by the need for historical authentication of gathered fragmentary texts. The method was based only on data that were connected to their origins both in time and space. But the turn of the century inaugurated a shift, and the German philosopher E. Zeller became its most prominent representative. He emerged within a field still delineated by the objective values that European cultural history had assigned to the classics. From such a standpoint it was almost impossible to conduct adequate research of a world that extends beyond the borders of Greece. Nonetheless, sources were available, gathered and even organized, the most significant being those that stem from the days of Alexander the Great and his immediate successors. A bibliography, here presented as an attachment, provides ample proof.
For the purpose of this study, however, these sources are offered in a somewhat perfunctory manner; translations are free and often abridged. (Editor's note: The bibliography is not attached to this text.) The entry on "India" in the Pauly-Wissowa Real-Encyclopädie, XI, 2, provides thorough documentation. Schliemann's nineteenth-century archeological findings and his discovery of Troy had an immediate influence, opening new horizons. By the end of the century the German philosopher Th. Gomperz was able to expound on a philosophy of historiography, as the discovery of Mohenjo-daro and Harappa further contributed to such efforts. Thus the need for a methodological approach to classical texts gained complexity. On the Indian side, the importance of ancient texts and the task of connecting and familiarizing oneself with their content in spite of the hardships involved was not so easily ignored, although difficulties in regard to their authenticity were acknowledged. "Stunning analogies" recorded by the early Enlightenment and Romanticism of Europe were hurried attempts to directly connect East to West without offering a critical-thinking approach to the matter. Today these attempts are gaining relevance by stepping into our focus without anyone even trying to give them structure or their form a frame. An enlarged platform from where we could observe such issues has not yet reached a balanced state conducive to serious research. As much as one can discuss prerequisites for such an endeavor, we can now assess that three separate fields are in play: archeology, language and philosophy. Until recently it was the philologists who found themselves obligated to do the heavy lifting (see: W. Ruben, Die Philosophen der Upanishaden, Bern, 1947). Due to the inevitable expanse of philosophical problematics, I attempted to include in this study the necessary assumptions that are posed by general history and are relevant to our major theme: the tradition of the cult of Dionysius and Heracles, here presented as a universal source and taken as a protohistorical marker, a place from where philosophically relevant positions converge and diverge. For the purpose of a research not satisfied with superficial shuffles and a lack of systematic goals that the broader approach inevitably requires, one that would be greater than the particulars and independent of the issues under consideration, it is necessary to concentrate on methodology. It is noteworthy that difficulties that arose from studies of Indian philosophy were due to a lack of basic criteria. These criteria, even when only implicit, do remain clear. They usually arose from emphatic opposing views of individual scholars. Today's cultural, social and political atmosphere forces us to encompass a wider logical scope when focusing on contemporary cultural studies and its traditions. The field was already delineated thanks to the "Oriental Enlightenment" that sprang in relation to Western Enlightenment and Romanticism. It is still casting a shadow over recent philosophical history. Today it is possible to assume that a useful and direct introductory research could provide the immediate example needed for focusing a thematic approach, albeit not extensive and still too superficial for the purpose of expressing ensuing concepts and producing extensive surveys. Such research assumes a collection of data gathered from three separate fields: archeology, language studies and philosophy. Specific problems appropriate for the comparative approach, and more specific to a philosophical standpoint, are discussed in the third and fourth part of this study.
They are presented in a formal manner with no need for references to meanings taken from the whole of a specific historical period, nor to an ad hoc gathered fragment, as was customary when presenting doxographic analogies. Neither did I intend to use these analogies for exemplifying an overall history of philosophy in Hegel's or Spengler's sense. The scope of a research not satisfied with the nitpicking of systematic analysis, nor with the application of an independent system that is wider than the specific issue, commands that we focus attention on methodology. Today, when considering Indian philosophy, we can no longer claim that a lack of basic criteria constitutes problems. The third part of this study discusses the fact that these criteria, even when implicit, are usually clearly expressed and emerge out of the different opinions that authors stress. (Editor's note: The author points to the chapters of his doctoral thesis, while we publish only its introduction.) All this made it necessary to touch upon the somewhat broader logical outlook of our contemporary cultural sciences. Historically speaking, the attitude toward the field of comparative studies, the theme of this study, as it evolved in the last 150 years, can be divided into three well-balanced phases. During the first phase there was a tendency toward the romantic outlook, typical for the romantic enthusiasm of that period. This was somewhat hastily brushed aside and introduced through a back door, as it were, as mere eyewitness stories of interactions recorded by representatives of the Hellenic and Indic age of antiquity. Such eyewitness documentation was not critically examined, although its authenticity was in great measure anticipated and often derived from secondary sources. Conclusions drawn from doxographic analogies were based on idealist, as well as realistic, chronological underpinnings. Even Schopenhauer, as we shall see in this study, represented an extremely uncritical position. His successor Paul Deussen, when judged according to methodological criteria, represents the other extreme. In the meantime, in the mid-nineteenth century, classical studies began creating a critical tool for the research of antiquity's historiography. On that score, Zeller's valuable input to the field of philosophical history is noteworthy. Deussen, heavily influenced by Zeller's authority, highly praised Indian philosophy, raising it almost in a physical sense to high heavens. His wish to save the philosophical value of doxographic analogies was based on the assumption that the development of Indian and Hellenic thought should be observed as if coming from two "different planets". This kind of stress on philosophical analogy was very convincing. It created the impression that the development of comparative philosophy could have great potential when viewed from the standpoint of European idealist philosophical awareness, particularly in the Germany of that time, and of neo-Hinduist aspirations that simultaneously flourished in India. However, on the European side, such a materialistic restriction imposed on comparative philosophy, encouraged by the decadent mood of the turn of the century, soon began losing value. Yearnings for fresh directions were pushed aside and the goal of a universal integration of Indian philosophy was not achieved. Nonetheless, we have seen that even as early as at the end of the nineteenth century, archeology unearthed new historical sources, igniting the field with a revolutionary fervor relevant for our thesis. This turn, however, did not apply directly to philosophy; rather, it was founded on philological research that demanded a further development of archeology. Undoubtedly more conducive circumstances for the study of the comparative themes present in the expanses of cultural history entered the work of Th. Gomperz (under the influence of Rohde). Thus it gained some momentum on the German side in the twenties. For this we can thank the works of Jaeger. In the last fifty years, however, specialized cultural-historical studies remained active mostly in France, gradually breaking ground with ever more expressive comparative themes that focus on Iranistics and Indology. Among those rare authors who approached this thematic whole during the two decades sandwiched between the two world wars figure P. Masson-Oursel and S. Radhakrishnan. Their work can be divided into two distinct phases, whereby they started out in the twenties by assuming that doxographic methods, if they were to be considered as more or less pure, should abstract from problems of direct influences and indirect connections. Both were deemed to be necessary documentation recorded in chronological order. In the thirties, however, the concentration falls on the less direct influences. In this manner two different methodological possibilities crystalized. They developed successively and separately from each other, and gradually gained an even and objective status. A confrontational attitude based on extreme opposites can no longer be the question; rather, both sides need to take their legitimate place that is systematically accorded to them within a comparative analysis of philosophical problematics. Today it is necessary to balance the input of given authors on such convergent aspects also within the framework of their life's work. It is clear that although the method of chronological documentation has gained importance, it still remains a tool if observed within the actual interest of comparative philosophy. The stimulation that it provides today, both in the East and the West, aims towards a general revision of the criteria posited for the study of the history of philosophy, but the sine qua non within the stimuli will still be the discovery of immediate and initial values that comparative philosophizing can offer through the doxographic method. It should be stressed that the thematic material discussed in the fourth part of this study required that my selection of texts be neither complete nor exhaustive. Solutions arrived at through this applied comparative method served this author only for schematizing and fulfilling formal obligations. The texts are mere examples illustrating how methodological criteria can be applied and derived from historical analysis. Due to the importance placed on methodological problems, the conclusion of this study consists of a summary of the methodological criteria that arose from the critique of previous developmental positions. Again, my aim was not to expound on a systematic methodology. In that sense this study remains within the frame of a preliminary critical work meant to encourage a systematic discussion on comparative philosophy seen as a specific discipline, the possibilities and needs of which were initially pointed out by Masson-Oursel.
Conclusion: On the problem of a comparative method
The problem of a comparative method applied to the study of ancient European philosophy in relation to Eastern thought traditions arose toward the end of the nineteenth century in opposition to two well-known criteria that had already gained a sound standing: 1. chronological documentation, its aim being to check the possibility of documenting thought analogies within the development of cognition and to link the two with historically direct and indirect ties; 2. doxographic interpretation, its aim being to find an analogy that need not recognize the possibility of historical influences.
Logical assumptions do not exclude the possibility that the two criteria can join to form a single methodological unit. Limiting philosophical interest to doxographic content need not exclude the importance of chronological data connected to the circumstances of their development. In concrete situations, however, such issues can be overwhelmed by historical and technical difficulties. Therefore, it is inevitable to take into account the factual existence of the two methods, whereby the tendency of exclusion can gradually diminish, even though the tendency for connectivity is not yet sufficiently visible. It remains then an implicit problem, and its existence is testified through the critical analysis performed by individual authors.
It is in view of the achieved results that I brought forth this problem as an issue primarily in the work of Masson-Oursel, and less so in Radhakrishnan.
The author would discuss a comparative issue from one aspect and then from the other without explaining the relation between the methodological criteria, nor warning about the different results that this could produce. It is obvious that such standpoints were not intentional, nor systematic. They arose due to the different material conditions that manifested thanks to a sudden expansion of documentary material within the historical field. This growth of general cultural-historical evidence simply overshadowed other criteria that seemed more prominent and better suited for research even as late as in the twenties.
On the other hand, as mentioned in the critique of Deussen's comparative philosophy, the exclusivism that doxographic materials encountered at the time, when methodologically viewed, created an imminent crisis due to the fact that comparative problematics became restricted by some materialist assumptions brought forth by specific philosophical currents. Deussen maintained that these currents of the new philosophy of consciousness coincide in great measure with their Indian analogies and are of central historical importance. European philosophy did not succeed in maintaining that position. Perennial philosophy was thus applied to our contemporary thought processes, their possibilities and interests, but it did not blossom as hoped, although it was an inevitable reflection of general interest in the comparative problems that the doxographic method had initially embraced. The major difficulty for a conducive and balanced development of a comparative method seems also linked to the accidentality of historical development. Data collection spread unexpectedly over three random fields: archeology, linguistics and philosophy. The first of these will remain a major shelter and hideout for unknown facts. For Deussen it did not even exist. Within the history of philosophy the problem of separating fictitious philosophical from pre-philosophical thought was limited to the narrow peripheries of the Ionian shores.
It is not unusual that methodological criteria of a new discipline in their initial developmental phase rely on the empirical circumstances of heterogeneous fields. Even methodologically established areas have to account for the revolutionizing problematics brought forth through changes that lead to an unexpected expansion of knowledge. Today these are the heterogeneous technical means, scientific discipline and the initially intended service.
Still, the principles of philosophical research and the philosophical aspects of its interests do form a specific thematic unit. This unit is subjected to chance and empirical change in the same manner as the peripheral disciplines. The merits of positivism, particularly its French school, lie in the fact that it takes technical development into maximal consideration. It attempts to place technology at the center of its philosophy in the hope of confirming the specifics of philosophical interests and eliciting the impossibility of reducing them to the mere recording of historical facts. Positivism underscores the need to develop rational methods applicable to the extensive empirical material, while sheltering them from the process of identification and from the possibility of being confused with historiographical methods. The major merit of French positivist rationalism of the twentieth century, then, is the identifying of the dangers posed by such equivocations.
In the same fashion we can attest that the doxographic method, when applied throughout our territories, will inevitably remain philosophical in its narrow scope, while chronological documentation will remain a tool that gained unique importance in the last thirty to forty years, when it was used as a means for accruing data. It is a tool that attempts to conform to specific philosophical interests. Historical and linguistic disciplines that start from archeological data cannot be used for direct philosophical purposes without adjusting methodological criteria in a way that points at the relation between the critique of known methods and the logical development of cultural sciences. From that aspect, the well-intentioned works of orientalists pose an ever-growing danger of confusion, and the danger of burying authentic philosophical problems under a barrage of heterogeneous facts unearthed by the archeological finds of hitherto unknown cultures. Various cultural-historical and sociologically interesting conclusions, often construed from secondary documents, can both cast a dark shadow and illuminate adequate philosophical problems. An even greater danger can be foreseen if these problems remain discontinued, abandoned on the garbage heaps of classifying logic. Torn to pieces, they would hinder instead of aid the interconnectedness of important elements relevant to historical or linguistic documentation.
In order to clarify these issues it may be useful to summarize a specific example as earlier discussed. From a doxographic perspective, even from a homologous development of concrete philosophical studies or disciplines, a chronological sequence has minimal importance. If we designate a historic basis within the limits of a specific circumstance for two analogous directions, such as Indian and European skepticism, nominalism and the science of epoché, or the attempt to "plagiarize" one side according to historical precedence, this could easily lead to a priori falsification. This is a serious danger that doxographic integration poses. Its immediate opposite, doxographic differentiation, however, is worse. From one aspect it is important to consider that elements of doxographic integration are not primarily chronological facts, although liminal circumstances of their chronological givens may well help in determining the existentially specific breadth of the area where their doxographic direction aims. On the other hand, even the circumstances of doxographic differentiation within the chronological development cannot be taken as a proof for a groundless existential analogy. The fact that Philo of Alexandria had already used skeptical argumentation to ground the apologetics of his mystical views did not present an obstacle for Zeller in his comparative determining process for finding a common source. An analogy can be provided by taking into consideration the existential connection of the ethical meaning of Buddha's and Pyrrhon's epoché. In both cases the determining of a chronological sequence of historical examples remains outside the pale of existential relations of an analogical method, and, therefore, we cannot conclude that factual coincidences do not give us the right to come up with some adequate answer, particularly when viewed within the limits of authenticity. If we acquaint ourselves with the coincidences, or learn about them later, their abstractions can no longer be considered. And therein lies the stimulative value of comparative philosophy, the value of not validating an exclusive search that is limited only by integrative or differential methods. Analogous analysis in its existential sense, when applied to philosophical tenets, is not limited, in principle, to delving within pre-established systematic limitations; it can unearth some unexpected register within the thought-modification process that possibly took place in some distant past, in quantitative measures which for us could remain irrelevant.
Here the problem of a comparative method brings us to the wider issues of comparative philosophy. As long as the tertium comparationis is limited to Hellenic philosophy the problems remain implicit. However, even the narrowed-down problem of a comparative method becomes impossible to discuss as a single whole without stepping over the boundary of the historical period of our specific example. Apart from this, we also saw that Masson-Oursel already in the title of his main opus identified the problem of methodology with the problem of a philosophical discipline that does not remain only methodological, but foresees a sui generis system of material insights. For neo-Hinduism the problem of method is implicit, analogous to Masson-Oursel, who did not treat it separately but used various methodological ad hoc tools. Finally, even Deussen's research expounds on the problematics of a comparative method that gained integrative value by becoming one of the basic theses of neo-Hinduist universalism. Taking all this into consideration, it is necessary to cast a final glance at our problematics of comparative philosophy. It is clear that it cannot be limited to the constituent question of a positivistic discipline, as it may seem when viewed from the point of a study which has hitherto been directed explicitly to such issues from the methodological aspect. Concurrently, it is imperative to pay special attention to the conscious universalist inclinations of a contemporary open-ended European philosophy. Neither the Western nor the neo-Hinduist universalism of today is exposed to the dangers of falsification. From the methodological point of view, it is characteristic for comparative philosophy, if considered as an independent discipline, to gain special value as it searches for "foreseen registers" both in the quantitative sense and in the historical. What poses the main danger is a lack of adequate critique both of the expounding and of the applying of methods. This could lead to syncretic historicity. A comparative universalism may, to a great extent, avoid such a danger by carefully testing the stimulative values of a research that centers on existential areas and allows chronology to take a secondary position. Under such scrutiny a tendency, be it major or minor, along with a critical sense for doxographic or chronological research of individual problematics, could bring forth the necessary formal differentiation of comparative philosophical standards. From the materialist side, we may assume that a development of such a comparative discipline may enrich the possibilities of finding the sources of systematic thinking while developing a scholarly method that eases our initial cognitive discernment.
Phytoremediation of Formaldehyde from Indoor Environment by Ornamental Plants: An Approach to Promote Occupants Health
Nowadays, indoor air pollution has become a major concern due to its known harmful effects on human health.[1] With the onset of the energy crisis and changes in building design owing to energy-efficient strategies, a confined space for houses and workplaces is provided, which reduces the air exchange rate (AER) and increases indoor air pollution.[2-6] The Environmental Protection Agency (EPA) of the United States has mentioned that indoor air pollutants can be found at higher concentrations than outdoor ones.[7] However, the monitoring and regulation of indoor air pollutants has been neglected relative to outdoor air pollutants.
Introduction
Nowadays, indoor air pollution has become a major concern due to its known harmful effects on human health. [1] With the onset of the energy crisis and changes in building design owing to energy-efficient strategies, a confined space for houses and workplaces is provided, which reduces the air exchange rate (AER) and increases indoor air pollution. [2-6] The Environmental Protection Agency (EPA) of the United States has mentioned that indoor air pollutants can be found at higher concentrations than outdoor ones. [7] However, the monitoring and regulation of indoor air pollutants has been neglected relative to outdoor air pollutants.
One of the major indoor air pollutants is formaldehyde, with the chemical formula HCHO. It is one of the most well-known volatile organic compounds (VOCs) associated with indoor air pollution and has attracted public attention worldwide due to its adverse health effects. [3,8] Formaldehyde is a colorless, water-soluble gas with a strong odor that can be suffocating at room temperature. [7] The main indoor sources of formaldehyde are furniture and materials widely used in interior construction, such as fiberboard and laminated wood, carpets, curtains, rubber, oil-based paint, adhesives, cosmetics, electronic devices, and paper products. [2,9,10] Furthermore, people are exposed to formaldehyde from combustion sources such as tobacco smoke, gas, petrol, and solid fuels. [11] Typically, formaldehyde levels in newly built or refurbished residences are much higher than in old buildings. [12,13] Formaldehyde levels generally decrease with product age, [2,11,14] but, according to Wolverton, 10 years is too long to breathe this carcinogenic chemical into the lungs. [15] The World Health Organization has reported that the health effects associated with acute exposure to indoor concentrations of formaldehyde include eye irritation, eye redness, frequent blinking, and irritation of the upper respiratory system. [16] Furthermore, it has been reported that formaldehyde can cause long-term effects such as cancer, leukemia in children, premature birth, low birth weight, congenital anomalies, genotoxicity, and Alzheimer's disease. The EPA considers formaldehyde a probable human carcinogen (Group B1). [2,7,17] It has been suggested that occupational exposure to formaldehyde may increase the risk of nasopharyngeal carcinoma; physicians working in operating theaters should therefore remain alert to formaldehyde hazards among health-care workers.
[18] At present, there are several techniques for eliminating formaldehyde from indoor air, such as biological methods, adsorption on activated carbon fibers, photocatalytic oxidation, and biofiltration; nevertheless, none of them is fully satisfactory, owing to the low concentrations and volatile character of this chemical. [19][20][21][22] Besides this, increasing the ventilation rate is difficult and not economical for the public. Phytoremediation has attracted much attention in recent decades, probably due to its environmental, economic, and social benefits. In addition, it has the potential to help achieve zero emissions in both traditional and new buildings. [23,24] Numerous plants can remove formaldehyde from indoor air. [2,25,26] Plant leaves take up formaldehyde through the stomata and the cuticle, and younger leaves readily absorb formaldehyde vapors. [3,27] Besides, some studies have shown that soil microorganisms are capable of degrading pollutants, and this degradation is suggested to be encouraged by root exudates. [28][29][30] When formaldehyde is absorbed, one part of it is oxidized into carbon dioxide in the Calvin cycle, while the other is incorporated into the organism as amino acids, lipids, free sugars, organic acids, and cell-wall components. [3,10] This study was conducted to determine the formaldehyde removal efficiency from indoor air of a potted plant, using a pilot-scale chamber made of Plexiglas. For this purpose, the Nephrolepis obliterata plant (sword fern) from the Lomariopsidaceae family was used. This plant is widely available throughout Iran and can be acclimatized to the indoor environment. In this work, formaldehyde was used as a common indoor VOC contaminant, but these methods can be applied to other VOCs. [31]
Test chamber and experimental setup
Experiments were conducted in a Plexiglas chamber with a volume of 375 L (84 cm length × 62 cm width × 72 cm height), which was made perfectly air-tight. A door was provided at the front of the chamber, sealed with adhesive foam-rubber insulation tape and adjustable metal clips. Two PC fans (Model: 350 XA, 2.03P4) were fixed inside the chamber to provide complete mixing of the fumigated air.
The temperature and relative humidity (RH) of the chamber were monitored with a digital thermometer [Figure 1]. The light intensity was intended to match that of a natural indoor environment and was measured around the chamber in five directions (west, east, north, south, and above the chamber) four times a day over the experimental period, using a YF-170 digital light meter (Tenmars Electronics Co., Ltd, Taiwan). Figure 1 shows the experimental setup for this study. The system consisted of three main parts: (I) the chamber, in which the plants were placed in contact with the air stream containing formaldehyde; (II) an air pump connected to a flow meter and an impinger system, which supplied a mixture of air, water vapor, and formaldehyde gas at the desired concentration; and (III) a sampling system at the chamber inlet and outlet for analysis of the formaldehyde concentration, comprising a vacuum pump, a flow meter, and dual impingers containing liquid absorbent. Stainless steel and silicone tubing were used to connect the system compartments.
Formaldehyde measurement
Formaldehyde vapor was introduced into the chamber by a gas bubbler containing 37% formaldehyde solution. [2] Air was provided by a vacuum pump (Model: ACO-5504, 5 W), and the air flows were measured with a needle-valve glass flow meter (CT Platon, France). In addition, the air stream was passed through an activated carbon column to adsorb any potential contaminants. The formaldehyde concentration was measured according to the NIOSH-3500 method, a visible absorption spectrometry technique, using a DR5000 spectrophotometer (DOC022.53.00654-HACH Lange, Co. USA). This is the most sensitive formaldehyde analysis method, capable of detecting concentrations as low as 0.1 ppm, and is best suited for the determination of formaldehyde in environmental samples. [32]
Plant materials
In this research, one of the fern species from the Lomariopsidaceae family, the Kimberly Queen fern (N. obliterata), was used. This species was selected because it is one of the common indoor plants used in Iran and is economical and easily accessible. Pots of the plants were bought from commercial distributors (flower
Experimental procedures
To investigate the formaldehyde removal potential of the plants, experimental procedures were designed and carried out in four stages: (I) "empty chamber tests" without potted plants, with a known amount of formaldehyde inlet, to determine any combined chamber losses (e.g., due to leakage, absorption, and chemical reactions); (II) "whole plant absorption tests," including the soil and the aerial part of the plant, by introducing different formaldehyde concentrations into the chamber; (III) a "darkness test" to distinguish the effects of light intensity on the formaldehyde removal efficiency of the plants; and (IV) a soil absorption test (including the roots).
Empty chamber losses were assessed before the other above-mentioned experiments. The chamber's combined loss was tested with inlet formaldehyde concentrations of 4.5-7 mg/m3 under two different RH levels of 40% and 80% for 6 days. Then, two pots of the plants, with an average height of 48.4 cm for the aerial part and 17 cm for the root part (pot and soil), were placed inside the chamber to provide sufficient leaf area for optimum air purification. The plants were continuously exposed to formaldehyde vapors with inlet concentrations ranging from 0.5 to 12.0 mg/m3. [3] The tests for each concentration were carried out over 2 days. During the exposure periods, sampling from the inlet and outlet of the chamber was performed every early morning and late evening (4 times for each inlet concentration), and the averages were reported. It should be noted that the plants rested for 24 h before the next concentration test started.
Darkness tests were carried out by covering the whole chamber (with the entire plants [EPs] inside it) with a black cloth. This test was also carried out for 2 days, but only for one inlet concentration, which lay near the median of the tested concentration range (4.7 mg/m3). Thereafter, the aboveground parts of two other plants, with the same pot and aerial sizes as those used in the previous tests, were surgically removed; the pots containing only soil and roots were put back into the chamber, and the experiments were repeated at an inlet concentration of 5.23 mg/m3 for 2 days. [3,27] A new set of plants was used in this stage of the experiments to avoid confounding errors resulting from prior formaldehyde exposure.
Plant morphology and physiology
Key characteristics of the plants, including morphology and physiology (plant height, leaf area, dry weight, fresh wet weight, chlorophyll content, and carotenoids), were evaluated before and after fumigation to assess the effects of formaldehyde on these plant growth indices. Chlorophyll and carotenoid contents were determined according to the method of Lichtenthaler and Wellburn. [33] For determination of individual leaf area, the leaves were counted and categorized as large, medium, or small. Six samples were taken from each category, and their area was measured with a leaf area meter (ΔT Area Meter MK2). The average surface area of each category was multiplied by the number of leaves counted in that category, and the total surface area of each plant was calculated. [2] Furthermore, plant height was measured before and at the end of the experiments. The fresh wet weight of the leaves was determined with an analytical scale and reported in mg/cm2 of leaf area. Thereafter, the leaf dry weight was measured by drying the leaves in an oven at 80°C for 24 h, weighing them on an analytical scale, and reporting the result in mg/cm2 of leaf area.
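The category-based leaf-area estimate described above (count the leaves per size class, measure six sampled leaves per class, then scale the mean sampled area by the class count) can be sketched as follows. The sample areas and leaf counts below are illustrative placeholders, not the paper's measurements.

```python
# Hypothetical sketch of the category-based leaf-area estimate; the
# sampled areas (cm^2) and leaf counts below are illustrative only.

def total_leaf_area(categories):
    """categories maps a size label to (sampled leaf areas in cm^2, leaf count).
    The mean sampled area of each category is scaled by its leaf count."""
    total = 0.0
    for samples, count in categories.values():
        mean_area = sum(samples) / len(samples)  # mean of the sampled leaves
        total += mean_area * count
    return total

plant = {
    "large":  ([120, 115, 130, 125, 118, 122], 40),   # 6 sampled leaves each
    "medium": ([60, 62, 58, 61, 59, 60], 85),
    "small":  ([20, 22, 19, 21, 20, 18], 110),
}
area_cm2 = total_leaf_area(plant)
```

Summing the per-category products gives the whole-plant leaf area used later as S_L (after converting cm² to m²).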
Data analysis
The concentration of formaldehyde in the air flowing into the chamber (C_T) was calculated using the following formula:

C_T = (C_2 × Q_2) / (Q_1 + Q_2)

where C_2 is the formaldehyde concentration in the air bubbled from the impinger containing the formaldehyde solution, Q_1 is the air flow needed for dilution, and Q_2 is the air flow passing through the formaldehyde solution [Figure 1]. The removal efficiency was calculated from the formaldehyde concentrations entering and leaving the chamber as follows:

Removal efficiency (%) = (C_in − C_out) / C_in × 100

The elimination capacity (EC), the amount of formaldehyde vapor removed per unit surface area of plant leaf (mg/m^2/h), was calculated as follows:

EC = Q × (C_in − C_out) / S_L

where C_in and C_out are the inlet and outlet concentrations of formaldehyde (mg/m^3), respectively, Q is the polluted air flow into the chamber (m^3/h), and S_L is the total leaf area (m^2). Finally, the statistical analyses and the drawing of graphs and tables were carried out in Excel.
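The three calculations described here (inlet dilution, removal efficiency, and elimination capacity) can be sketched in Python. The numeric values in the assertions and comments are illustrative, not measurements from the study; the inlet-dilution expression assumes a simple mass balance over the two mixed air streams.

```python
# Minimal sketch of the chamber mass-balance calculations; all numeric
# values used for illustration are hypothetical, not the study's data.

def inlet_concentration(c2, q1, q2):
    """C_T after diluting the bubbler stream (simple mass balance).
    c2: formaldehyde concentration in the bubbled air (mg/m^3);
    q1: dilution air flow; q2: flow through the solution (same units)."""
    return c2 * q2 / (q1 + q2)

def removal_efficiency(c_in, c_out):
    """Percent of formaldehyde removed between chamber inlet and outlet."""
    return (c_in - c_out) / c_in * 100.0

def elimination_capacity(q, c_in, c_out, leaf_area):
    """EC in mg per m^2 of leaf per hour: q in m^3/h, concentrations
    in mg/m^3, leaf_area in m^2."""
    return q * (c_in - c_out) / leaf_area
```

For the 375 L chamber at an AER near 1/h, Q would be about 0.375 m³/h, so an inlet of 5 mg/m³ reduced to 0.5 mg/m³ over 1.5 m² of leaf corresponds to an EC of roughly 1.1 mg/m²/h.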
Results
The average temperatures inside and outside the reactor during the experiments were 26.99°C ± 0.84°C and 26.84°C ± 0.81°C, and the corresponding RH values were 78.94% ± 2.25% and 18.885% ± 1.54%, respectively. Background light reaching the chamber during the daytime was measured, and the averages at the measuring times and over the whole study period were calculated. The light intensity was 1795.56 ± 259.29 lux. There was a difference between the RH inside and outside the chamber; for temperature and light intensity, the difference was negligible. Table 1 presents the average outlet (C_out) formaldehyde concentrations obtained during the experiments with the N. obliterata plant under various inlet formaldehyde concentrations (C_in). The total reduction of formaldehyde by the EP, and by the roots and soil, with and without considering the chamber's combined losses, was also examined. The empty chamber's combined losses, tested with inlet formaldehyde concentrations of 5.01-6.11 mg/m3 under two different RH levels of 40% and 80%, were 5.11% and 14.04%, respectively. Thus, in addition to the gross removal efficiency, results with the 14% loss deducted were reported as the net removal efficiency. Because of the flow-rate limitation required to keep the chamber AER near 1 air change per hour (1 n/h), it was impossible to reduce the chamber RH down to 75%, so the experiments were carried out at an RH of 80% ± 5%. The EP removal efficiency was examined with ascending inlet formaldehyde concentrations ranging from 0.6 to 11.2 mg/m3, each for 2 days, and the averages of the results were reported [Table 1].
The formaldehyde removal efficiency of the potted N. obliterata plant-soil system, with and without the chamber's combined losses, as affected by different inlet formaldehyde concentrations, is shown in Figure 2a. About 81%-100% of the formaldehyde was removed from the polluted air flowing into the chamber. The EP net removal efficiencies were calculated by subtracting the chamber's combined losses from the gross removal percentages. Figure 2b shows the formaldehyde EC of the EP with and without considering the chamber losses.
Additional experiments were conducted under a thoroughly dark environment to compare the removal efficiency under light versus dark conditions. The plant was exposed to a formaldehyde concentration of 4.7 mg/m 3 for 2 days in a dark environment. The effluent concentrations were measured in the morning and evening of each day, and the average of the results was reported in Table 1.
The plant growth characteristics and their percentage changes after contact with the pollutant are presented in Table 2. The most important effects of formaldehyde on the plant were reductions in wet weight and water content, of 27% and 5%, respectively. However, the tested concentrations of formaldehyde did not halt plant growth; the chlorophyll content, carotenoid level, and average plant height increased by 9.58%, 21.79%, and 6.46%, respectively, during the fumigation.
Discussion
The results of this study showed that the N. obliterata plant-soil system considerably removed formaldehyde vapors from polluted air during continuous long-term fumigation. As shown in Figure 2, about 90%-100% of the formaldehyde was removed from the polluted air flowing into the chamber at inlet concentrations of 0.63-9.73 mg/m3. However, when the inlet concentration was increased to 11.09 mg/m3 within 48 h, the removal efficiency decreased. This shows that the plant could not tolerate concentrations higher than about 10 mg/m3. The EC increased with increasing inlet formaldehyde concentration and with extended exposure time. This increase in the elimination rate may be attributable to the plant and soil surfaces, the roots, degradation by adapted microorganisms, and uptake through the stomata of the plant. [27,33,34] It has been suggested that formaldehyde entering the plant through the leaves is first detoxified by oxidation and then transformed into CO2 and built into plant material via the Calvin cycle. [35] Depletion of formaldehyde, like other VOCs, in the chamber, which results in a slower diffusion rate into the plant, is likely to occur. [27] However, a breakpoint was reached at an inlet concentration of 9.7 mg/m3: in the experiments with an inlet concentration of 11.09 mg/m3, the EC did not increase further.
In a similar study but with different plants, Xu et al. reported formaldehyde removal efficiencies of about 95% for a spider plant-soil system, 53% for an Aloe vera-soil system, and 84% for a golden pothos-soil system, with inlet concentrations of 1-11 mg/m3 and a daytime light intensity of 240 µmol/m2/s. [3] It has been reported that the EC for formaldehyde removal increases on repeated exposure, which is in accordance with our results. [6,21,33] According to Table 1, the contribution of the potted soil along with the roots to formaldehyde removal accounted for 26.39% of the total removal by the EP. The capacity of the potted soil for formaldehyde removal in the present study was similar to that already reported in the literature. [3,34,36] This may be attributed to abundant soil microbial activity stimulated by root exudates, which act as nutrients for the soil microorganisms. [37] Furthermore, the formaldehyde removal capacity increases with the exposed surface of the potted plant. [3] The results also showed that, similar to other studies with the same inlet formaldehyde concentration, the removal efficiency under natural daylight was higher than in the dark environment. Furthermore, under the same conditions, formaldehyde removal was higher on days with higher light intensity. [3,27] Both the stomata and the cuticle of plant leaves can be pathways for VOC removal; however, this probably depends on the properties of the VOC. Formaldehyde is a hydrophilic VOC and therefore cannot easily diffuse through the cuticle, which consists of lipids. [38,39] It was therefore concluded that formaldehyde was taken up through the stomata, as stomata are open in the light and closed in darkness. [40,41] Another explanation is that the increased photosynthesis and metabolism rates in the daytime lead to more formaldehyde removal than at night.
[42,43] It has been reported in some studies that the removal rate is higher in the first hours and decreases with the passage of time. [44] However, this holds only for batch systems, not for the continuous-flow system applied in our study. This could explain the lower removal efficiency of the EP at higher inlet concentrations after prolonged exposure in our study, compared with studies that showed good formaldehyde removal efficiency at high inlet concentrations for this plant-soil system after a shorter exposure period. [10] The increase in plant growth characteristics in our study indicates that formaldehyde at inlet concentrations up to 11 mg/m3 did not stop plant growth during the fumigation tests. This is likely ascribable to the plant's high resistance to formaldehyde.
Conclusions
Formaldehyde is mainly released into the indoor environment from building materials, home furnishings, and tobacco smoking. The potted N. obliterata plant-soil system examined in this study was able to remove formaldehyde from polluted air during long-term exposure.
Although the EP made the larger contribution to formaldehyde removal, the influence of the potted soil and roots was considerable, which can be attributed to pollutant absorption and metabolism by the microorganisms in the soil. The formaldehyde EC of the plant increased with increasing inlet concentration and reached a plateau at concentrations above 11 mg/m3. The EP showed more removal in the daytime than at night or in darkness. Examination of the plant's morphology and physiology showed that N. obliterata is very resistant to formaldehyde, as long-term exposure did not stop the plant's growth. It is evident from our results that phytoremediation is an effective, economical, and environmentally friendly indoor air purification method that can help improve physical and psychological health.
|
2018-09-15T22:43:47.302Z
|
2018-08-14T00:00:00.000
|
{
"year": 2018,
"sha1": "047ee94600ef048094c620a5f2a36383c741ee0a",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijpvm.ijpvm_269_16",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f9f3582d89a18a5dd564f109494a25a6c21c4da3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
271387803
|
pes2o/s2orc
|
v3-fos-license
|
Global, regional, and national analyses of the burden of colorectal cancer attributable to diet low in milk from 1990 to 2019: longitudinal observational study
Background Globally, diet low in milk is the third greatest risk factor for colorectal cancer (CRC). However, there has been a lack of detailed worldwide analysis of the burden and trends of CRC attributable to diet low in milk. Objective We aim to assess the spatiotemporal trends of CRC-related mortality and disability-adjusted life-years (DALYs) attributable to diet low in milk at the global, regional, and national levels from 1990 to 2019. Methods Data on mortality, DALYs, age-standardized mortality rate (ASMR), and age-standardized DALY rate (ASDR) of CRC attributable to diet low in milk were extracted from the Global Burden of Disease (GBD) 2019 study. The burden of CRC attributable to diet low in milk was estimated using the ASMR and ASDR, while accounting for sex, age, country, and socio-demographic index (SDI). From 1990 to 2019, the estimated annual percentage change (EAPC) was calculated to clarify the temporal trends in the ASMR and ASDR attributable to diet low in milk. Results In 2019, there were 166,456 (95% UI = 107,221–226,027) deaths and 3,799,297 (95% UI = 2,457,768–5,124,453) DALYs attributable to diet low in milk, accounting for 15.3 and 15.6% of CRC-related deaths and DALYs in 2019. CRC-related deaths and DALYs attributed to diet low in milk increased by 130.5 and 115.4% from 1990 to 2019. The burden of CRC attributable to diet low in milk varied notably among regions and nations. High-middle SDI regions had the highest ASDR and ASMR of CRC linked to diet low in milk, while there was a slight downward trend in high SDI regions. Among geographical regions, East Asia had the highest number of CRC-related deaths and DALYs attributable to diet low in milk. Notably, the burden of CRC was highest in males and the elderly. With coefficients of −0.36 and −0.36, the EAPC in ASMR and ASDR was significantly inversely correlated with the Human Development Index in 2019.
Conclusion Globally, the number of CRC deaths attributable to diet low in milk has continued to increase over the last 30 years. Therefore, government and authorities should conduct education campaigns to encourage individuals to increase daily milk intake.
Introduction
As the second most common malignancy and the third most common cause of cancer-related death globally, colorectal cancer (CRC) accounted for more than 2.1 million (7%) of all new cancer cases and 1.0 million (11%) of all cancer-related deaths worldwide in 2019 (1). There are considerable geographic variations in the burden of CRC, which is strongly associated with socioeconomic status (2). Traditionally, the incidence and mortality rates of CRC are highest in Europe, Oceania, and North America (3,4). However, the incidence rates are rising in high-middle socio-demographic index (SDI) regions, notably in East Asia, Eastern Europe, Asia, and South America, as a result of economic improvements and shifts in dietary patterns and lifestyles (5,6). Consequently, CRC remains a significant economic and medical challenge globally.
Multiple risk factors promote the malignant development of CRC, including genetic, age-related, environmental, lifestyle (dietary habits and activity), and metabolic risks (7). However, the exact cause of CRC is still unknown, and these risk factors may introduce potential confounders that could lead to spurious relationships in the observed data. In the GBD study, the three risk factors with the highest attributable proportions for CRC in 2019 were a diet low in whole grains (15.8%), a diet low in milk (15.3%), and smoking (12.9%) (8). The risk attributable to a diet low in milk has exceeded the risk attributable to a diet high in processed meat. Meanwhile, the proportion of CRC risk attributable to a diet high in processed meat has decreased (9). Milk, a cheap and commonly consumed food worldwide, provides a variety of macronutrients, micronutrients, and bioactive components that are crucial to growth and development (10). In addition, milk conveys various potential health advantages, including anti-cancer, anti-inflammatory, antioxidant, anti-fat, anti-hypertension, anti-hyperglycemia, and anti-osteoporosis activities (11). A recent meta-analysis of 15 cohort studies involving 11,733 individuals found that higher consumption of total dairy products and milk may be associated with a decreased risk of CRC (12). Moreover, the American Institute for Cancer Research and the World Cancer Research Fund found that drinking milk may reduce the incidence of CRC (13). However, the majority of the population does not reach the recommended daily milk intake, and there is a lack of updated epidemiological research analyzing colorectal cancer attributable to a diet low in milk from a global perspective. Therefore, it is necessary to systematically explore the burden of colorectal cancer attributable to a diet low in milk.
The Global Burden of Disease (GBD) 2019 study provides comprehensive and up-to-date data on 369 diseases and injuries, as well as 87 risk factors, from more than 204 countries and territories worldwide (14). Several recent studies have utilized the GBD database to investigate the global, regional, and national burdens of CRC and identify associated risk factors. These studies revealed that the primary risk factors for disability-adjusted life years (DALYs) associated with CRC across all countries and regions were a diet low in milk, smoking, low-calcium diets, and alcohol intake (7)(8)(9). A prior study (15) analyzed the burden and trend of CRC attributable to a diet low in milk in China from 1990 to 2017, but that investigation focused on a single country. To the best of our knowledge, there has not yet been any detailed worldwide analysis of the burden and trends of CRC attributable to a diet low in milk.
Therefore, the aim of this study was to quantify the global, regional, and national disease burden of CRC attributable to a diet low in milk, with reference to the most recent data from the GBD 2019 study. In addition, potential correlations among the human development index (HDI), the socio-demographic index (SDI), and the CRC burden were assessed. Moreover, the estimated annual percentage change (EAPC) was calculated in order to quantify the trend of the age-standardized rate (ASR) over time. The results of this study will help to better understand the impact of a diet low in milk in order to decrease the incidence and burden of CRC.
Data source
All data were derived from the GBD 2019 study, a cooperative international research project estimating the burden of 286 causes of death, 369 diseases and injuries, and 87 risk factors in 204 countries or territories, 21 regions, and 5 SDI regions from 1990 to 2019 (16). The GBD 2019 research team collected raw data from civil registration, vital statistics, hospital records, and household surveys in each country, providing reliable estimates of the burden of colorectal cancer. DisMod-MR version 2.1 was used to adjust for bias in the raw data to provide internally consistent estimates of prevalence by age, gender, location, and year (17). Using the Global Health Data Exchange website, 1 the number of CRC deaths and DALYs attributable to a diet low in milk, as well as the corresponding age-standardized mortality rates (ASMRs) and age-standardized DALY rates (ASDRs) for 204 countries and territories between January 1, 1990 and December 31, 2019, were obtained from the GBD 2019 study and stratified by sex, age, GBD region, and SDI quintile. The detailed methods for data input, mortality estimation, and modelling in the GBD 2019 study are described in earlier published articles (18). The search terms included "colorectal cancer," "diet low in milk," "death," and "DALYs," in addition to the years "1990-2010," "1990-2019," and "2010-2019," as well as the metrics "number," "percent," and "rate." The HDI values were retrieved from the United Nations Development Programme database.
Definitions
The GBD 2019 study defined a diet low in milk as an average daily intake of less than 360-500 g of whole, skim, and semi-skim milk, excluding soy milk and other plant-based products (7). The DALY is the sum of all healthy life-years lost between the onset of disease and mortality. The ASR was calculated based on the age groups of the standard population. Because the total population mortality and DALY rates are influenced not only by the level of mortality and the DALY rate of each age group but also by the age composition of the population, the ASR eliminates the effect of population age composition and thus enables more accurate comparisons of total mortality and DALY rates across regions and time periods.
In addition, the correlation between the burden of disease and the SDI was analyzed. The SDI is a composite indicator of the average per capita income, fertility, and educational level of each country and region. The 204 countries and regions were divided into five categories based on the SDI: low (<0.45), low-middle (≥0.45 and <0.61), middle (≥0.61 and <0.69), high-middle (≥0.69 and <0.80), and high (≥0.80).
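The SDI quintile cut-offs listed above can be encoded in a small helper, for example when grouping country-level estimates:

```python
# Helper encoding the GBD 2019 SDI quintile cut-offs described above.

def sdi_quintile(sdi):
    """Map an SDI value (0-1) to its GBD quintile label."""
    if sdi < 0.45:
        return "low"
    if sdi < 0.61:
        return "low-middle"
    if sdi < 0.69:
        return "middle"
    if sdi < 0.80:
        return "high-middle"
    return "high"
```

Note that each lower bound is inclusive and each upper bound exclusive, matching the ranges in the text (e.g., an SDI of exactly 0.80 is "high").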
Statistical analysis
The burden of CRC attributable to a diet low in milk was assessed by SDI, region, country, sex, and age group based on the number of deaths, DALYs, ASDR, and ASMR. The ASR (per 100,000 population) was calculated as the weighted sum of the age-specific rates, with weights taken from the standard population:

ASR = Σ(a_i × w_i) / Σ(w_i) × 100,000

where a_i is the age-specific rate in age group i and w_i is the weight of the corresponding age group in the standard population.
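The ASR and EAPC calculations can be sketched as follows. The EAPC is conventionally estimated by regressing ln(ASR) on calendar year and converting the slope to a percentage; this is a minimal sketch of that convention, not the GBD production code, and the numbers in the usage are illustrative.

```python
import math

def age_standardized_rate(age_rates, std_weights):
    """ASR per 100,000: sum(a_i * w_i) / sum(w_i) * 100,000, where a_i are
    age-specific rates (per person) and w_i are standard-population weights."""
    num = sum(a * w for a, w in zip(age_rates, std_weights))
    return num / sum(std_weights) * 100_000

def eapc(years, asrs):
    """EAPC: fit ln(ASR) = a + b * year by ordinary least squares,
    then EAPC = 100 * (exp(b) - 1)."""
    n = len(years)
    logs = [math.log(r) for r in asrs]
    mx = sum(years) / n
    my = sum(logs) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(years, logs))
         / sum((x - mx) ** 2 for x in years))
    return 100.0 * (math.exp(b) - 1.0)
```

With this convention, an ASR series growing by a constant 2% per year yields an EAPC of 2.0; a negative EAPC (as reported for several high SDI regions) indicates a declining age-standardized rate.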
Global CRC burden attributable to diet low in milk by sex and age
Globally, sex inequality continues to influence the burden of CRC attributable to a diet low in milk, with a greater impact on males that increased with age. In 2019, CRC-related deaths and DALYs were more common in males than females [92,097 (95% UI = 59,298-125,756) vs. 74,360 (95% UI = 46,931-99,957) and 2,201,663 (95% UI = 1,426,199-3,006,596) vs. 1,597,635 (95% UI = 1,013,555-2,129,091), respectively] (Table 1). In 2019, the ASMR of CRC attributable to a diet low in milk increased with age, with the largest number of deaths occurring in those aged 70-74 years (Figure 4A). Meanwhile, the ASDR decreased progressively after peaking in the group aged 90-94 years, with the largest number of DALYs in the group aged 65-69 years (Figure 4B). From 1990 to 2019, the ASMR increased globally in almost all age groups, except for decreases in the groups aged 25-29 and 35-39 years. The group aged >95 years experienced the greatest growth, while the group aged 25-29 years had the greatest decline. The ASMR increased in low, low-middle, middle, and high-middle SDI regions from 1990 to 2019, with higher EAPCs in mortality in low-middle SDI regions (Figure 5A). The results for the age-standardized DALY rates were similar in terms of EAPCs and ASMR (Figure 5B). The distribution of deaths and DALYs is shown in Supplementary Figure S1. Cluster analysis revealed that 121 countries or territories, most notably North Macedonia, Benin, and Burkina Faso, were classified as "remained stable," while 20 countries or territories, including Singapore, Israel, and Portugal, were classified as "minor increase." Additionally, 56 countries or territories, including Paraguay, the Dominican Republic, and Mozambique, were classified as "increase." Albania was the only country classified as "significant decrease." The remaining six nations or territories, including Finland, Kazakhstan, and Australia, were classified as "decrease" (Supplementary Figure S2).
Factors associated with the burden of CRC burden attributable to diet low in milk
Overall, there was a nonlinear "S"-shaped association between the overall ASMR and the SDI, with the ASMR rapidly increasing when the SDI was greater than 0.75 and progressively decreasing below 0.45. Among the different regions, the highest ASMRs related to a diet low in milk were observed in Southern Latin America, followed by high-income Asia Pacific and Central Europe, while the lowest ASMR and ASDR related to a diet low in milk occurred in Australasia and Central Asia (Figure 6A). A similar association was found between the ASDR and the SDI (Figure 6B). In 2019, across 204 countries and territories globally, the ASMR and ASDR attributable to CRC initially increased and then decreased with increasing SDI (Figures 7A,B).
Discussion
This analysis comprehensively summarized the global epidemiological trends of CRC attributable to a diet low in milk. The findings showed that the proportions of CRC-related deaths and DALYs attributable to a diet low in milk were 15.3% and 18.6%, respectively. On a global scale, the number of CRC-related deaths and DALYs attributable to a diet low in milk increased by 130.5% and 115.4%, respectively, from 1990 to 2019. The spatial distribution of the burden of CRC varied significantly among different countries and regions. The ASMR and ASDR of CRC attributable to a diet low in milk were higher in high-middle SDI regions. East Asia, especially China, had the highest number of CRC-related deaths and DALYs attributable to a diet low in milk. The burden of CRC was higher in males than in females and in the elderly than in younger populations. These findings fill the gap in knowledge of the global burden of CRC attributable to a diet low in milk, help raise awareness of the importance of increasing milk intake, and provide evidence for policy makers to adopt appropriate dietary strategies to better manage CRC patients. A diet low in milk is the leading risk factor for the burden of CRC worldwide. A prior study indicated that individuals who consumed ≥250 g of milk per day had a 15% lower risk of CRC compared with those who consumed <70 g, and each increase of 500 g per day in milk intake reduced the risk of CRC by 12% (19). However, the mechanism underlying the reduced risk of CRC associated with increased milk consumption remains unclear. One possible explanation is that milk contains a large amount of calcium, which may be responsible for the reported inverse association between milk intake and CRC. Through colonic sequestration of secondary bile acids, including deoxycholic acid and phospholipids, calcium could provide protection against CRC (20,21). Moreover, additional nutrients or bioactive substances found in milk, such as vitamin D, lactoferrin, and the short-chain fatty acid butyrate, might also act to prevent CRC
(22,23).Alarmingly, the number of individuals who consume an insufficient amount of milk has increased significantly (24).Globally, there was a substantial disparity between the current and optimal milk intake in 2017, with an average optimal consumption of 16% (25).The 2020-2025 Dietary Guidelines for Americans recommend 3 cup equivalent servings of skim or semiskim milk daily for all adults.However, adults aged ≥20 years consume only 1.5 cup-equivalents of dairy products daily (26).The major reasons for this are rapid economic development and changing dietary patterns, as milk consumption has decreased in favor of sweetened beverages and fruit juices.Sweetened beverages include regular sweetened carbonated soda, sports drinks, energy drinks, and non-pure fruit drinks.Sugar beverages and fruit juices often contain high levels of added sugars, which could potentially contribute to weight gain, inflammation, and metabolic dysregulation, all of which are implicated in CRC development (27).Due to the complexity of diet, these substitutes may confound or interact with the relationship between milk consumption and colorectal cancer risk.It is necessary to conduct further in-depth research on the complex interactions of dietary risks in the future.In addition, education and food security have also been associated with insufficient milk intake.So, education and awareness campaigns should be launched to encourage daily consumption of milk and other dairy products.Variable correlations may exist between the risk of CRC and the consumption of whole, skim, and semiskim milk and related fat components (28).The consumption of whole milk was positively linked with CRC mortality, while consumption of skim milk was negatively associated with CRC mortality, possibly due to fat-induced inhibition of other bioactive components in skim milk (29, 30).However, the GBD database did not further classify milk types, so the impact of different types of milk deficiency on the burden of CRC 
remains unclear.
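The dose-response estimate quoted above (19) implies a simple multiplicative model: if each extra 500 g/day of milk multiplies CRC risk by 0.88, relative risk at other intakes follows by exponentiation. The helper below is purely illustrative arithmetic under that log-linear assumption; it is not a method used in the GBD analysis.

```python
# Illustrative sketch (our own assumption): a log-linear dose-response model
# in which each additional 500 g/day of milk multiplies CRC risk by 0.88,
# i.e. the 12% reduction per 500 g/day reported in reference (19).
RR_PER_500G = 0.88  # relative risk per 500 g/day increment of milk intake

def relative_risk(extra_milk_g_per_day: float) -> float:
    """Relative CRC risk vs. baseline for a given extra daily milk intake."""
    return RR_PER_500G ** (extra_milk_g_per_day / 500.0)

if __name__ == "__main__":
    for grams in (250, 500, 1000):
        print(f"+{grams} g/day -> RR = {relative_risk(grams):.3f}")
```

Under this assumption, an extra litre of milk per day would correspond to a relative risk of roughly 0.77 (0.88 squared), broadly consistent with the cited cohort estimates.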
This study investigated sex differences in the burden of CRC attributable to diet low in milk. Since 1990, the increases in ASDR and ASMR have been more pronounced in males than in females, with a notable difference in the contribution of diet low in milk to CRC (11.1% vs. 4.6%, respectively) (31). As a potential explanation for this finding, women are more likely than men to receive recommendations to increase milk intake because of their greater risk of osteoporosis (32,33). Furthermore, women tend to be more concerned about and conscious of their health status. Sex hormones have also been recognized as a factor in sex differences in the incidence and mortality of CRC (34). In addition, risk behaviors, including drinking alcohol and smoking, are more common in men (35). Milk consumption decreases in the elderly due to various factors, such as decreased concern for personal health, higher rates of milk intolerance and digestion problems, changes in dietary preferences attributable to taste and palatability, and efforts to reduce fat intake (36). Therefore, initiatives are needed to increase awareness of milk intake to reduce disparities and decrease the incidence of preventable cancers.

There was an "S"-shaped correlation between the ASMR or ASDR attributable to diet low in milk and SDI worldwide. The study revealed that diet low in milk remained common in Central sub-Saharan Africa, Southeast Asia, and South Asia. Conversely, milk intake has dramatically increased in North America, Central Asia, and Australasia, especially Australia (7). The primary factors contributing to these geographical disparities include lifestyle- and diet-related behaviors, levels of socioeconomic development, and local medical conditions (37,38). With regional development, increased household income, and improvements in education, the consumption frequency and daily intake of milk have increased (39). For example, daily milk intake ranged from <200 g to >600 g in Western populations and from <42.4 g to >82.6 g in Asian populations (40). Notably, lactose intolerance is the major cause of milk restriction globally (41). The prevalence of lactose intolerance varies by ethnicity, from 64% in Asia to as low as 28% in Northern Europe (42). Even though it is difficult to encourage immediate changes to dietary habits, education and awareness campaigns are recommended to gradually increase daily milk intake.
China is the most populous nation globally and also has the highest number of CRC-related deaths and DALYs attributable to diet low in milk (43). Consumption of milk and dairy products continues to increase in China but remains relatively low due to traditional dietary habits (44). A comprehensive prospective study reported that the average daily intake of milk and dairy products in China increased from 2.06 g in 1989 to 26.47 g in 2011, which is still significantly lower than in European and North American countries (39). The recommended intake of milk and dairy products was revised in the latest version of the Chinese Dietary Guidelines for Residents (2022), from no less than 300 g per day to 300-500 g per day.
Colonoscopy is considered the gold standard for CRC screening. However, due to the lack of a comprehensive national screening program and restrictions on health resources, population-based screening for CRC has not yet been implemented in China (45).
This was a longitudinal observational study; the modeling approach estimates risk from the available dietary risk data but does not establish causation, so the results should be regarded as approximations of risk (46). The GBD study also has some limitations. First, underdeveloped countries lack adequate cancer registries, so estimates were based on predictive covariates or trends in neighboring countries, which may have biased the results. Second, the impact of different types of milk deficiency on the burden of CRC remains unclear because of a lack of relevant data. Furthermore, interrelationships between dietary factors may have influenced the estimated burden of CRC attributable to diet low in milk. Although many of these dietary relative risks have been adjusted for major confounders (e.g., age, sex), the possibility of residual confounding cannot be excluded.

This study comprehensively analyzed the association between the global burden of CRC and diet low in milk. The results indicated that the numbers of CRC-related deaths and DALYs due to diet low in milk continued to increase globally. There were significant differences in the burden of CRC linked to diet low in milk among countries and regions. Notably, high-middle SDI regions had the highest ASDR and ASMR of CRC attributable to diet low in milk. East Asia, especially China, had the highest numbers of CRC-related deaths and DALYs attributable to diet low in milk. In addition, diet low in milk was associated with a greater burden of CRC in males and older individuals. These findings provide critical temporal and geographic data to assist policymakers in developing targeted dietary strategies for CRC patients, as well as to raise public awareness of the necessity of increasing milk intake. Because interrelationships between diets and potential confounding factors may affect the estimated burden of CRC attributed to diet low in milk, further research is needed to understand the complexity of dietary alternatives.
deaths and DALYs linked to low milk consumption in 1990, while Eastern Europe, Australasia, Central Asia, and high-income North America had the lowest percentages. This disparity was roughly three times larger in 1990. Similar trends were observed in the proportion attributable to diet low in milk in 2019 (Figure 1).
FIGURE 1
FIGURE 1 Proportion of colorectal cancer deaths and DALYs attributable to diet low in milk globally and in 26 GBD regions in 1990 and 2019. DALYs, disability-adjusted life-years; GBD, Global Burden of Disease Study.
FIGURE 2
FIGURE 2 Number and rate of colorectal cancer deaths (A) and DALYs (B) attributable to diet low in milk from 1990 to 2019 by SDI level. The bars represent the number of colorectal cancer deaths (A) and DALYs (B) attributable to diet low in milk from 1990 to 2019, colored by SDI level. The line represents the mean ASMR (A) and ASDR (B) (per 100,000) attributable to diet low in milk at the global level. The shaded area represents the 95% UI for the mean rate. ASMR, age-standardized mortality rate; ASDR, age-standardized DALY rate; DALYs, disability-adjusted life-years; SDI, socio-demographic index; UI, uncertainty interval.
FIGURE 3
FIGURE 3 The spatial distribution of the colorectal cancer ASMR (A) and ASDR (B) attributable to diet low in milk in 2019, and the EAPC in colorectal cancer ASMR (C) and ASDR (D) attributable to diet low in milk. ASMR, age-standardized mortality rate; ASDR, age-standardized DALY rate; EAPC, estimated annual percentage change.
FIGURE 4
FIGURE 4 Number and rate of colorectal cancer deaths (A) and DALYs (B) attributable to diet low in milk by age group and SDI level in 2019. The bars represent the number of colorectal cancer deaths (A) and DALYs (B) attributable to diet low in milk, colored by SDI level. The line represents the mean ASMR (A) and ASDR (B) (per 100,000) attributable to diet low in milk at the global level. The shaded area represents the 95% UI for the mean rate. DALYs, disability-adjusted life-years; ASMR, age-standardized mortality rate; ASDR, age-standardized DALY rate; UI, uncertainty interval; SDI, socio-demographic index.
FIGURE 5
FIGURE 5 Estimated annual percentage change in deaths (A) and DALYs (B) between 1990 and 2019 by age group and region. EAPC, estimated annual percentage change; SDI, socio-demographic index; DALYs, disability-adjusted life-years.
FIGURE 6
FIGURE 6 Correlation between diet low in milk-attributable colorectal cancer ASMR (A) or ASDR (B) and SDI globally and in 21 GBD regions between 1990 and 2019. ASMR, age-standardized mortality rate; ASDR, age-standardized DALY rate; GBD, Global Burden of Disease Study.
TABLE 1
Global burden of colon and rectum cancer in 1990 and 2019 for both sexes and all locations, with EAPC.
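Table 1 reports EAPC values. Conventionally, the EAPC is obtained by fitting a linear regression to the natural log of the age-standardized rate against calendar year and transforming the slope. The snippet below is a minimal re-implementation of that standard formula on synthetic rates (the numbers are invented for illustration and are not GBD data).

```python
import math

def eapc(years, rates):
    """Estimated annual percentage change: fit ln(rate) = a + b*year by
    ordinary least squares, then EAPC = (exp(b) - 1) * 100."""
    n = len(years)
    log_rates = [math.log(r) for r in rates]
    mean_x = sum(years) / n
    mean_y = sum(log_rates) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_rates))
             / sum((x - mean_x) ** 2 for x in years))
    return (math.exp(slope) - 1.0) * 100.0

# Synthetic example: a rate growing exactly 2% per year yields EAPC = 2.
years = list(range(1990, 2020))
rates = [1.5 * 1.02 ** (y - 1990) for y in years]
print(round(eapc(years, rates), 2))  # 2.0
```

In practice, GBD analyses also report a 95% CI for the EAPC derived from the standard error of the regression slope; that step is omitted here for brevity.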
Shoulder Joint Infections with Negative Culture Results: Clinical Characteristics and Treatment Outcomes
Background The incidence of septic arthritis of the shoulder joint is increasing as the population ages. The prevalence of shoulder infection is also increasing because of the growing use of arthroscopy and expansion of procedures in the shoulder. However, cultures do not always identify all microorganisms, even in symptomatic patients. The incidence of negative cultures ranges from 0% to 25%. Few studies have reported clinical features and treatment outcomes of culture-negative shoulder infections. This cohort study addresses culture-negative shoulder joint infections in nonarthroplasty patients. This study aimed to compare clinical characteristics and treatment outcomes of patients with culture-negative results to those with culture-positive results. Our hypothesis was that culture-negative infections would have more favorable outcomes than culture-positive infections. Methods We retrospectively reviewed data of 36 patients (17 culture-negative and 19 culture-positive) with shoulder infections between June 2004 and March 2015. The minimum follow-up duration was 1.2 years (mean, 5 ± 3.8 years; range, 1.2-11 years). We assessed preoperative demographic data and characteristics, laboratory markers, imaging and functional scores, intraoperative findings, and postoperative findings of both groups. Results Culture-negative patients (17/36, 47.2%) underwent significantly fewer repeat surgical debridements (culture-negative vs. culture-positive: 1.2 ± 0.4 vs. 2.4 ± 1.7, p = 0.002) and had no cases of osteomyelitis. In the multiple logistic regression analysis, the presence of osteomyelitis [odds ratio (OR) = 9.7, 95% confidence interval (CI): 1.0-91.8, p = 0.04] and the number of surgical debridements (OR = 5.3, 95% CI: 1.3-21.6, p = 0.02) were significantly associated with culture-positive infections. Conclusions Culture-negative infections without osteomyelitis are less severe than culture-positive infections.
Culture-negative infections can be controlled more easily and are not necessarily a negative prognostic factor for shoulder joint infections.
Introduction
The prevalence of shoulder infections has increased recently due to the frequent use of arthroscopy and the aging population [1]. Currently, primary shoulder joint infections account for 10%-15% of all joint infections [2]. Although septic arthritis of the shoulder is rare in young and immunocompetent people, it is frequently found in the elderly [3]. Most patients who develop infections have chronic, systemic, and immunocompromising conditions, such as diabetes mellitus, blood dyscrasia, renal failure, malignancy, malnutrition, and rheumatic arthritis with a long history of corticosteroid use [1,[4][5][6]. The prognosis for septic arthritis of the shoulder joint is highly dependent on prompt diagnosis, cause of infection, and patients' immune system. Septic arthritis can lead to irreversible bone destruction and joint dysfunction and is occasionally a life-threatening condition, particularly in debilitated patients, making accurate diagnosis critical [7][8][9]. Although differential diagnosis is broad, the most serious potential cause of septic arthritis is bacterial infection [10]. Withholding antibiotic administration before culture is important to identify the causative organism from joint fluid aspirates and tissue biopsies. Despite extensive and adequate clinical, radiographic, and surgical suspicion for joint infection, the incidence of negative culture results ranges from 0% to 25%, and management with tailored antibiotics is difficult [11][12][13][14][15][16]. This cohort study aimed to assess the clinical characteristics and treatment outcomes of patients who contracted culture-negative infections after nonarthroplasty shoulder surgery. Our hypothesis was that culture-negative infections would have more favorable outcomes.
Materials and Methods
All patients provided written informed consent prior to the initiation of this study. The inclusion criterion was presenting with at least three of the following classic joint infection symptoms: pain, redness, swelling, heat, and impaired range of motion. After nonarthroplasty shoulder surgery, magnetic resonance imaging (MRI) scans were performed for all patients to exclude potential structural causes of their symptoms. Synovial biopsies were harvested using punch forceps inserted through an arthroscopic cannula from representative areas of the shoulder (rotator interval, anterior capsule, and posterior capsule) to ensure equal geographic distribution. Three samples were placed in each sterile specimen container, for a total of nine specimens (3 × 3 samples per container), with removal of any foreign bodies from previous surgeries. The specimens were transported immediately at room temperature to the microbiology department, and routine culture was carried out under aseptic conditions inside a class II laminar flow biological safety cabinet to prevent aerosol contamination.
All specimens were inoculated on blood agar, MacConkey agar, and chocolate agar (Synergy Innovation, Seongnam, Korea) and were incubated in 5% CO2 at 35 °C for 48 hours. Brucella agar, phenylethanol agar (ASAN Pharmaceutical, Hwaseong, Korea), and thioglycolate broth (Becton, Dickinson and Company, Sparks, MD, USA) were used for anaerobic cultures. The thioglycolate broth cultures were examined for turbidity daily for 14 days after inoculation. Culture plates were examined at 24 and 48 hours. The identification of microbial isolates was performed using the Vitek 2 phenotypic identification system (bioMerieux, Durham, NC, USA) and Microscan (Dade Behring, West Sacramento, CA, USA) from 2004 to 2013, and matrix-assisted laser desorption ionization-time of flight mass spectrometry (Bruker, Billerica, MA, USA) from 2014 to 2015.
We evaluated demographic data, patient characteristics, preoperative standard radiographs, antibiotic administration, functional shoulder scores, arthroscopic evaluations for articular cartilage destruction, bone destruction, rotator cuff tendon degeneration, foreign suture material removal, previously used anchors, and postoperative functional scores, including the American Shoulder and Elbow Surgeons (ASES) and constant shoulder scores [10]. Patients were considered to have an infection when one of the following criteria was met: (1) microorganism growth from two separate joint tissue biopsies or joint fluid specimens, (2) presence of a communicating sinus tract with the joint, (3) histopathologic evidence of acute inflammation consistent with infection (Figure 1), or (4) when four of the following six criteria were reported: erythrocyte sedimentation rate (ESR) ≥30 mm/h, C-reactive protein (CRP) ≥10 mg/L, synovial leucocyte (WBC) count ≥2000/µL, synovial neutrophil percentage (PMN%) ≥65%, presence of purulent fluid in the affected joint, microorganism isolation from a single culture of tissue or fluid, or histopathologic examination showing more than five neutrophils per high-power field [17][18][19][20][21]. Quantification of biomarkers in the blood (CRP, uric acid) and in the synovial fluid (lactate, uric acid) was performed to exclude gouty arthritis, which may resemble septic arthritis clinically. Synovial lactate levels above 10 mmol/L are strongly suggestive of septic arthritis, while lactate levels lower than 4.3 mmol/L make it very unlikely [22,23]. Rheumatoid factor was also measured to rule out rheumatoid arthritis.
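The "four of the following six criteria" rule above is mechanical enough to express as a small decision helper. The function below is our own hypothetical sketch (argument names are invented) covering six of the listed criteria; it is not part of the study protocol.

```python
def meets_minor_criteria(esr_mm_h, crp_mg_l, syn_wbc_per_ul, pmn_pct,
                         purulent_fluid, single_positive_culture):
    """Return True if at least 4 of these 6 minor infection criteria are met.

    Thresholds follow the text above: ESR >= 30 mm/h, CRP >= 10 mg/L,
    synovial WBC >= 2000/uL, PMN% >= 65, purulent fluid present, and a
    single positive tissue/fluid culture.
    """
    criteria = [
        esr_mm_h >= 30,
        crp_mg_l >= 10,
        syn_wbc_per_ul >= 2000,
        pmn_pct >= 65,
        bool(purulent_fluid),
        bool(single_positive_culture),
    ]
    return sum(criteria) >= 4

# Example: elevated ESR, CRP, synovial WBC and PMN% -> 4/6 criteria met.
print(meets_minor_criteria(45, 22, 3500, 80, False, False))  # True
```

In the study, this rule is only one of four alternative routes to the infection diagnosis (two positive cultures, a sinus tract, or histopathologic evidence also suffice).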
Cultures were considered positive if organisms grew on solid media within two weeks. Growth in liquid media only was not considered consistent with infection. Joint infections were considered culture-negative if cultures obtained intraoperatively failed to grow within two weeks.
Patients were also screened for osteomyelitis, a serious disease with a variety of clinically and microbiologically distinct subsets, characterized by an infection of the bone and bone marrow. We identified osteomyelitis based on the following diagnostic criteria: typical radiological findings (abnormality of the bone marrow, deep soft-tissue swelling, and/or periosteal reaction, and/or bony destruction) using standard X-rays or MRI, and pus in the bone and/or joint space [24] (Figures 2 and 3).
Prescribed antibiotics were suspended for all patients when shoulder joint infection was suspected, and arthroscopic debridement with synovectomy was performed within one week (Figure 4). Postoperative broad-spectrum antibiotics were given empirically, according to the recommendation of a microbiologist. Adults were given a first-generation cephalosporin (2 g cefazolin by IV every 8 hours) until culture results were available. Vancomycin (15 mg/kg by IV twice daily) was used as an alternative therapy for patients allergic to cephalosporins. Repeated arthroscopic debridement was used when uncontrolled infection (e.g., persistent fever, painful effusion, laboratory signs of systemic inflammation, or positive drainage fluids) was evident, followed by concomitant antibiotic administration for at least six weeks. Fever was defined as a single oral temperature >37.8 °C, repeated oral temperatures >37.2 °C, or an increase in temperature of >1.1 °C above the baseline temperature [25]. Treatment courses were documented for each patient. Criteria for infection improvement were lack of pain, swelling, and wound drainage; normal serology (ESR <20 mm/h, CRP level <0.5 mg/dL); synovial leukocyte differential counts of <65% neutrophils or a leukocyte count <1.7 × 10³/µL; and fewer radiographic characteristics of osteomyelitis [26].
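The fever definition used to trigger repeat debridement combines three thresholds. A minimal sketch of that rule, with our own (hypothetical) helper and argument names:

```python
def is_febrile(single_oral_c=None, repeated_oral_c=(), baseline_c=None,
               current_c=None):
    """Fever per the criteria above: a single oral temperature > 37.8 C,
    repeated oral temperatures > 37.2 C, or a rise of > 1.1 C over baseline."""
    if single_oral_c is not None and single_oral_c > 37.8:
        return True
    if len(repeated_oral_c) >= 2 and all(t > 37.2 for t in repeated_oral_c):
        return True
    if baseline_c is not None and current_c is not None \
            and current_c - baseline_c > 1.1:
        return True
    return False

print(is_febrile(single_oral_c=38.1))                  # True
print(is_febrile(repeated_oral_c=(37.3, 37.4, 37.5)))  # True
print(is_febrile(baseline_c=36.5, current_c=37.0))     # False
```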
Statistical Analysis.
All statistical analyses were performed using SPSS Version 21.0 (SPSS Inc./IBM, Chicago, IL, USA). Chi-square and Fisher's exact tests were used to determine differences in proportions for each variable. The Shapiro-Wilk test was used to check for normal distribution of data. The independent-samples t-test was used to compare the means of continuous variables between the two groups. The Mann-Whitney U test was used for continuous variables that did not satisfy parametric assumptions. The Wilcoxon signed-rank test was used for related groups of quantitative variables that were not normally distributed. Multivariate logistic regression analyses were performed to identify predictors of culture-negative joint infections. Two-tailed P values <0.05 were considered statistically significant.
Results
In the culture-negative group, there were nine males and eight females, with a mean age of 63.7 years (range, 50-77 years). In the culture-positive group, there were nine males and 10 females, with a mean age of 63 years (range, 38-82 years). All infections were successfully cured, regardless of culture status. Methicillin-resistant Staphylococcus aureus (MRSA) (9/19, 47.3%) was the most common cause of culture-positive infections, followed by methicillin-susceptible S. aureus (MSSA) and pelliculosa (1/19, 5.2%) (Table 1). Eleven patients had a history of ultrasound-guided injection at the shoulder joint, eight of whom were diagnosed with culture-positive infections and three with culture-negative infections. Rotator cuff tears were found, but their exact cause was unclear; they may have been preexisting or a tenolysis effect of the infection. There was no significant difference in age, gender, host conditions, initial diagnosis, preoperative physical symptoms, previous antibiotic treatment, synovial lactate concentration, or other laboratory data between the two groups (Table 2).
There was a significant difference between pre- and postoperative ASES and constant shoulder scores in the culture-negative group (P=0.04), indicating that the culture-negative group showed significantly improved shoulder function postoperatively. There was no significant difference between the culture-negative and culture-positive groups, or between pre- and postoperative ASES and constant shoulder scores in the culture-positive group (Table 3).
In terms of intraoperative findings, no osteomyelitis was observed in the culture-negative group. No significant difference in the presence or absence of rotator cuff tears or foreign bodies, such as anchors (Figure 5) used in previous operations, was found between the two groups. Arthroscopic debridement was effective in 29 patients. Open surgery was performed in five cases due to persistent infection.
In terms of treatment results, the culture-negative group required a significantly lower number of repeated surgical debridements compared to the culture-positive group (culture-negative vs. culture-positive: 1.2 ± 0.4 vs. 2.4 ± 1.7; p = 0.002) (Table 4).
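Given only the summary statistics reported here (1.2 ± 0.4 debridements for the 17 culture-negative patients vs. 2.4 ± 1.7 for the 19 culture-positive patients), the group difference can be approximately re-derived with Welch's two-sample t statistic. This is our own back-of-the-envelope check from the published summaries, not the authors' SPSS computation.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic computed from summary statistics."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / standard_error

# Culture-negative (n=17) vs. culture-positive (n=19) debridement counts.
t = welch_t(1.2, 0.4, 17, 2.4, 1.7, 19)
print(round(t, 2))  # -2.99
```

A t statistic of roughly -3 with about 20 effective degrees of freedom is consistent with the significant p value the authors report, keeping in mind that count data this skewed would normally be handled with the Mann-Whitney U test described in the methods.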
Discussion
Although determining the causes of joint infection is the standard procedure for diagnosis, lack of growth in routine aerobic and anaerobic cultures is frequently encountered. Negative culture results have been reported in many joint infection series, but the clinical characteristics of such infections have not been established. Therefore, this study compared the clinical characteristics and treatment results of shoulder joint infections with positive and negative culture results.

Figure 5: Failed anchors previously used for rotator cuff repair in some cases.
In the current study, the incidence of negative culture results was 47.2% (17/36), which is relatively high compared to the incidence rates reported for other joints [11,14]. Such differences might be because the clinical microbiology laboratory archives tissue samples for only a short time, although this time does allow clinicians to request fungal and mycobacterial cultures if aerobic and anaerobic cultures fail to identify a pathogen [27].
All cultures in this study were monitored for 14 days to allow sufficient time for the majority of infectious organisms to grow. However, low-grade infections, such as Propionibacterium acnes, that are present more frequently in the shoulder joint may need prolonged incubation times of more than two weeks to yield positive results [28][29][30][31][32][33].
It is possible that the culture-negative results in this study resulted from biofilm-producing microorganisms. It is known that these microorganisms are difficult to grow under standard culture conditions [27,34]. Recent studies reported that the demographics and outcomes of culture-negative and culture-positive patients were similar, leading to the presumption that these infections were caused by similar microorganisms [24,25].
In the present study, there was no significant difference in age between culture-negative and culture-positive patients. Interestingly, Khan et al. found a reduced risk of infection with increasing age [35].
There was also no significant difference in sex, host conditions, infection cause, clinical symptoms, laboratory findings, or illness duration, between the two groups in this study.
Both groups had a similar history of previous antibiotic use, which was consistent with previous reports [11,36]. It has been reported that prior antimicrobial use can reduce the sensitivity of tissue cultures [34].
The most intriguing finding of this study was that the culture-negative group showed a significantly lower need for surgical debridement, as well as a lower frequency of osteomyelitis, compared to the culture-positive group. After medical and surgical management, conducted under the assumption that the culture-negative infection was due to typical bacterial pathogens, patients with culture-negative infections were more easily cured than those with culture-positive infections. These results suggest that culture-negative infections may not invade the neighboring bone, making them easier to cure.
Choi et al. also reported favorable treatment results for culture-negative patients [37]. They suggested that high vancomycin use contributed to the favorable outcome of culture-negative infections. We found that patients treated with first-generation cephalosporin had better outcomes than those treated with broad spectrum antimicrobial agents. This discrepancy indicates that the optimum therapy for culture-negative joint infections remains unknown. However, this study provides important information for patients and physicians when they encounter culture-negative results.
This study had several limitations. Due to its retrospective design and limited patient numbers, we were unable to analyze data stratified by infecting organisms. It should be noted that empirical antibiotic regimens should be evaluated based on which microorganisms are frequently causing infections. Nonetheless, the present study suggested that there was no significant difference in clinical characteristics between culture-negative and culture-positive groups. Another limitation was the culture period, which was insufficient for isolation of Propionibacterium acnes and other slow-growing organisms; we recommend increased incubation times of more than two weeks to isolate slow-growing organisms. Following the recommendations of our microbiologist, we did not use any local antibiotics, which could have impacted the progression of the infection and subsequent treatment.
Conclusions
Culture-negative shoulder joint infections are not necessarily a negative prognostic factor. They can be controlled more easily than culture-positive infections. Further prospective studies are required to gain additional insights into clinical characteristics and treatment outcomes of patients with culture-negative results.
Data Availability
Datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest related to this paper.
Storage of Food Waste: Variations of Physical–Chemical Characteristics and Consequences on Biomethane Potential
Food waste (FW) storage influences its physical–chemical characteristics and anaerobic digestion (AD) performance. In this work we present the results of a two-week-long experiment where two types of FW were stored in dedicated cells (10 L and 300 L). Air was evenly flushed over the top surface of the substrates and then analyzed to identify and quantify possible gaseous emissions. Solid and liquid fractions were also periodically sampled and analyzed for total solid, volatile solid, ammonia and VFA contents. Results showed that storage initiated a hydrolysis process that modified the physical structure of FW, leading to the production of gases (CH4, CO2 and ethanol) and a partly liquefied FW. Depending on experimental conditions, a fraction between 61 and 70% of the initial substrate remained solid at the end of the storage period. In the liquid phase, a large proportion of lactic acid was measured, with maximum contents of 5.9 and 14.8 g/kg VS for the small-scale experiments with two different FW types and 3.0 g/kg VS for the large-scale experiment, leading to inhibition of the biomethane potential (BMP) tests. In conclusion, this work showed that when storage of FW is needed before AD, the optimal time recommended to keep a high methane yield is one week.
Introduction
In 2015, about 45% of municipal solid waste (MSW) generated in the European Union (EU) was recycled. By 2030, the objective is to reach 65% of MSW recycling and preparation for reuse [1]. Depending on the country, the amount of food waste (FW) generated by households represents 22 to 33% of the MSW produced [2] i.e. about 47 million tons [3]. Because it is an important source of organic matter and nutrients, this biomass can be recycled through many processes to produce added-value compounds such as bioethanol, enzymes, compost, biodegradable plastics or biogas through anaerobic digestion (AD) [4][5][6].
AD is a particularly interesting technology as it shows better environmental performance compared to the main current biowaste management i.e. landfilling, incineration, or even compared to composting, especially when focusing on global warming potential [7][8][9]. Indeed, this technology allows renewable energy production while greenhouse gas emissions are largely avoided. Taking advantage of these characteristics, several companies developed large scale centralized AD processes able to valorize the organic fraction of MSW [10][11][12]. More recently, micro-scale AD processes were developed and are being tested to locally valorize urban biowaste, including food waste [13,14].
However, regardless of the AD technology used, FW has first to be collected from production sites before being valorized. In Europe, three main FW collection strategies exist: (i) the door-to-door model, corresponding to all systems with collection devices such as buckets, bins, bags or containers belonging to one single building or house; (ii) the bring point model, equivalent to the door-to-door model in using the same kinds of collection devices but accessible to several buildings and houses; and (iii) the civic amenity sites model, where citizens bring the waste to enclosed sites operated by qualified staff [15]. The latter is quite rare for the disposal of FW, but a few cases were identified in Belgium [15]. The collection frequency of these systems ranges between once a day and twice a month depending on the city considered [15,16], implying an unavoidable storage period of the FW. After being delivered to the treatment plant, FW is usually stored on site before AD for a time ranging between a few hours and ten days [17]. Therefore, several storage steps occur from FW generation to valorization, and the spontaneous degradation of the initial waste [5] may affect the AD process.
Recent studies investigated the effect of storage on AD performance. Nilsson Påledal et al. [16] tested the effect of temperature (6 or 22 °C), collection method (use of plastic or paper bags) and storage time (0-21 days) on the biomethane potential (BMP) of source-separated household FW. They found that after 21 days of storage, all conditions led to a slight decrease of BMP values. The worst situation was observed with the use of paper bags at a storage temperature of 22 °C. In this case, the BMP value decreased from 135 to 75 NmL CH4/g of fresh FW, a trend that was linked to the loss of water and volatile solid contents during storage. Lü et al. [17] also studied the effect of storage time (0-12 days) on BMP performance with FW stored at 35 °C in small polystyrene centrifuge tubes (50 mL), without bags or lids, but covered by perforated Parafilm to mimic anoxic conditions. In this study, a significant increase in BMP values was observed after 12 days of storage (from 285 NmL CH4/g-VS added for unstored FW to 639 NmL CH4/g-VS added). However, results showed that during the first week of storage BMP values were very unstable. Another work, performed by Fisgativa et al. [18], studied the effect of FW storage on the performance of a pilot-scale solid-state anaerobic digester (10 L) with leachate recirculation. Results showed that when using FW stored for 2 days before AD, the process was completely inhibited after 4 days of digestion. After 2 weeks of inhibition, anaerobic digestion started over again without any external intervention.
These results demonstrate that FW storage influences AD performance. However, no clear correlation between physical-chemical characteristics of the stored FW and methane production were highlighted. The aim of this work was thus to study the variations of physical structure (i.e. density) and chemical characteristics of FW during storage to better understand the degradation mechanisms that could influence the performance of FW anaerobic digestion afterwards.
Materials and Methods
The variation of food waste characteristics during storage was studied over two weeks at two different scales to mimic the storage practices of different FW producers. For practical reasons, each scale was studied independently and two different trials were set up.
Trial 1
The first trial aimed at reproducing storage conditions at household scale (small scale), with a door-to-door collection system.
The experimental device, developed by Portejoie et al. [19], consists of six independent storage cells of 10 L where ambient air is evenly flushed on top (Fig. 1). Before entering one of the six cells (denoted 2 in Fig. 1), ambient air is humidified with a water trap (1) to avoid unnatural and excessive water evaporation from the sample. Inside the cell, gaseous emissions due to sample degradation are captured in the air stream. When exiting the cell, the air passes through an acid solution ((3) - 50 mL of 2N sulfuric acid) where ammonia compounds are trapped. Then, the air is dried with a silica gel column (4) and its volume recorded with a gas meter (5). Finally, the flow rate is controlled (6) before leaving the system through the pump (7). Periodically, some air is automatically sampled between the cell (2) and the ammonia trap (3) and analyzed for CO2, N2O and CH4 contents with a combined infrared and photoacoustic gas analyzer (INNOVA 1412). Every two days, acid traps were replaced with fresh H2SO4 and their ammonia content was measured.

Fig. 1 Schematic diagram of the experimental device used to reproduce storage conditions. 1-Distilled water; 2-Storage cell; 3-Trapping solution; 4-Silica gel column; 5-Gas meter; 6-Air flow meter; 7-Pump [19]

For this experiment, 3 kg of fresh FW sampled in a collective restaurant were loaded in each of the six cells without any pretreatment. A grid, placed at the bottom of the cell, separated the solid FW from the leaching liquid produced during the two-week storage period. Every two or three days, one cell was stopped to perform the physical-chemical analyses. The atmospheric conditions of the experimental room were recorded throughout the experiment to control for the effect of temperature on FW degradation.
Trial 2
The second trial was a scale-up of the first experiment to mimic the bring point and civic amenity sites collection models, where the amount of stored FW is much higher than in the door-to-door model. For this part, two large-scale reactors (300 L) were filled with about 150 kg of mixed FW from four different producers: 3 canteens (1 school, 2 traditional restaurants) and a central kitchen that provides food for several school restaurants. Compared to the 10 L-cells used during the first trial, the 300 L-cells had, proportionally, a smaller compartment dedicated to the storage of the leaching liquid. Thus, when necessary, the liquid phases of the large-scale trials were unloaded over the course of the experiment to avoid possible mixing of the different fractions. Physical-chemical analyses of the liquid were performed after its discharge. The solid fractions, on the other hand, remained in the two storage cells until the end of the experiment to avoid process disturbance, and their physical-chemical analyses were done only at the final stage of the storage period. In addition to the atmospheric conditions of the experimental room, the temperature inside the 300 L-cells was recorded. The gaseous phase was analyzed with the INNOVA 1412 fitted with an additional filter that also allowed the detection of ethanol, while N2O emissions were not recorded during this trial.
As trials 1 and 2 were not performed at the same period of the year nor with the same FW, three additional 10 L-cells were operated in parallel to trial 2 as a control to compare the influence of FW type on storage behavior. In this case, all reactors were stopped at the end of the experiment and physical-chemical analyses were only done at this time.
Density Measurements
For each storage cell, the height between the grid and the top of the reactor was measured and denoted h_tot. At the beginning and at the end of the storage period, the height between the top surface of the waste and the top of the reactor was measured at four points: one in the middle of the FW surface and three at the border of the FW surface, spaced 120° apart. The average of these heights was calculated and denoted h_i. The density of a sample was then calculated as follows:

d = m / (π r² (h_tot − h_i))

where d is the density of the sample (kg/L); m is the mass of FW (kg) at the beginning or the end of the storage period; r is the cell radius (dm); h_tot is the height between the grid and the top of the cell (dm); and h_i is the average height between the FW surface and the top of the cell (dm).
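The density calculation above can be sketched as follows; the cylindrical-cell geometry matches the formula, while the numbers in the usage example are illustrative only, not measured values.

```python
import math

def fw_density(mass_kg, radius_dm, h_tot_dm, h_avg_dm):
    """Density (kg/L) of the FW layer in a cylindrical storage cell.

    The FW occupies a height of (h_tot - h_avg) above the grid; with all
    lengths in dm, the cylinder volume comes out directly in litres
    (1 dm^3 = 1 L).
    """
    volume_l = math.pi * radius_dm ** 2 * (h_tot_dm - h_avg_dm)
    return mass_kg / volume_l
```

For instance, 3 kg of FW filling 1 dm of height in a cell of 1 dm radius would give a density of about 0.95 kg/L.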
Biomethane Potential Measurements
The biochemical methane potential (BMP) of samples was measured in triplicate using hermetically closed 572 mL-bottles. Each replicate was filled with inoculum, water and 3-5 g of ground sample (2 mm) to reach a total volume of 140 mL and an inoculum-to-substrate ratio of 1:1 on a volatile solid (VS) basis. The inoculum was obtained from a well-established anaerobic pilot digester (87 L) acclimated to degrade pig slurry supplemented with horse feed. Before incubation at 38 °C for 40 days, bottles were flushed with N2 to ensure anaerobic conditions. The internal pressure of the bottles was measured daily to quantify biogas production. This biogas was periodically sampled to determine its methane content by gas chromatography.
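The paper quantifies biogas from daily headspace pressure readings. A common way to do this, sketched below as an assumption rather than the authors' exact calculation, is to convert the measured overpressure into a gas volume at standard conditions with the ideal gas law; the 432 mL headspace is deduced from the 572 mL bottle minus the 140 mL working volume.

```python
R = 8.314          # universal gas constant, J/(mol K)
P_STD = 101325.0   # standard pressure, Pa
T_STD = 273.15     # standard temperature, K

def biogas_volume_stp_ml(overpressure_pa, headspace_ml, temp_k=311.15):
    """Volume (mL at STP) of biogas accumulated in the bottle headspace.

    n = dP * V / (R * T) gives the moles of gas produced since the last
    venting, which are then re-expressed as a volume at STP.
    Default temperature is the 38 °C incubation (311.15 K).
    """
    moles = overpressure_pa * (headspace_ml * 1e-6) / (R * temp_k)
    return moles * R * T_STD / P_STD * 1e6
```

Summing these daily volumes and multiplying by the measured methane fraction would yield the cumulative methane production normalized per g of VS added.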
To avoid possible measurement variation due to inoculum differences, each ground sample was stored at − 20 °C prior to the BMP test and all BMP measurements were performed at the same time. Freezing can affect BMP values; however, because all samples were prepared following the same method, the possible artifact applies equally to all samples, making comparisons between them possible. Moreover, this method was already tested on FW [20] and showed consistent results compared to the average BMP value of 460 L/kg VS calculated worldwide [21].
Analytical Methods
Total solids (TS) and volatile solids (VS) were measured using standard methods [22]. TKN and total ammonia nitrogen concentrations were determined by distillation (TKN after mineralization) on wet, ground samples [23]. Total carbon of solids was measured on finely ground dried samples by elemental analysis according to the manufacturer's instructions (Thermoflash 2000, Thermoscientific®). On liquid samples, total carbon was measured on the raw material using a Shimadzu TOC-L analyzer following the manufacturer's instructions. The volatile fatty acid (VFA) and assimilated compound contents (acetate, propionate, butyrate, isobutyrate, isovalerate, lactate and succinate) of the liquids were analyzed by high performance liquid chromatography (HPLC) [24]. The average values of the initial FW physical-chemical characteristics are given in Table 1.
Calculation of Hydrolyzed Nitrogen Accumulation
The accumulation of hydrolyzed nitrogen gives useful information about protein hydrolysis during storage. It was calculated according to the following equation:

Hydrolyzed N accumulation (%) = 100 × (NH3_t − NH3_t0) / N_i

where NH3_t is the amount of ammonia at a given storage time; NH3_t0 is the amount of ammonia initially present in the fraction considered (equal to the initial ammonia content for the solid fraction and 0 for the liquid fraction); and N_i is the total nitrogen content initially loaded in the reactor cell.
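The calculation above can be sketched in a few lines; the numbers in the usage note are illustrative, chosen only to show the order of magnitude reported later in the results.

```python
def hydrolyzed_n_accumulation(nh3_t, nh3_t0, n_initial):
    """Accumulated hydrolyzed nitrogen, as % of the total N initially loaded.

    nh3_t     : ammonia at the considered storage time (gN)
    nh3_t0    : ammonia initially present in the fraction (gN); 0 for liquid
    n_initial : total nitrogen initially loaded in the cell (gN)
    """
    return 100.0 * (nh3_t - nh3_t0) / n_initial
```

With illustrative values, a solid fraction starting at 0.04 gN of ammonia and reaching 0.13 gN in a cell initially loaded with 1 gN of total nitrogen gives an accumulation of 9%.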
Statistical Method
The analysis of variance (ANOVA) method was used to assess the significance of physical-chemical variations along the experiments. In this test, the null hypothesis H0 is that all sample means are equal and the alternative hypothesis is that at least one mean is different. If the probability value (p-value) under H0 is less than 0.05, the samples are considered significantly different. The p-value depends on the degrees of freedom between and within samples. The degrees of freedom between groups of samples (i.e. the number of samples assessed minus one) is denoted Df1, and the degrees of freedom within groups of samples (i.e. the total number of values measured minus the number of samples) is denoted Df2. The results of ANOVA are presented between brackets in the results section using the following notation: F(Df1, Df2) = F-value, p < 0.05.
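The F(Df1, Df2) notation used in the results can be reproduced with a plain one-way ANOVA. The sketch below uses the standard textbook formulae in pure Python; statistical packages provide the same computation together with the p-value.

```python
def one_way_anova(*groups):
    """One-way ANOVA: returns (df1, df2, F).

    df1 = k - 1 (between groups), df2 = N - k (within groups),
    F = (SS_between / df1) / (SS_within / df2).
    """
    k = len(groups)
    n_tot = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n_tot
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df1, df2 = k - 1, n_tot - k
    return df1, df2, (ss_between / df1) / (ss_within / df2)
```

The returned F-value is then compared with the critical value of the F distribution at the 0.05 level for (df1, df2) to decide whether H0 is rejected.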
Physical Variations
During storage, the physical structure of FW changed from a completely solid product (wet but without free water) to a three-phase one, with the production of liquid and gaseous fractions. Depending on the experiment, between 31 and 39% of the initial solid mass was transformed into liquid and gas, and was potentially lost (Fig. 2A).
The degradation of FW started with a rapid liquefaction of the substrate during the first days of storage (Fig. 2B). In the first trial (10 L-FW1), 10% of the initial FW mass was transformed into liquid within the first 3 days of storage and only 5% more during the rest of the experiment. The increase in liquid proportion at day 10 was probably due to the heterogeneity of the substrate. Indeed, in trial 1, cells were loaded with FW from the same initial pool but, due to the normal heterogeneity of FW, the substrate loaded could slightly differ from one cell to another, leading to different degradation behaviors. In the case of the large-scale storage method (300 L-FW2), the rapid liquefaction occurred within the first two days, where 15% of the initial mass became liquid, while only 5% more was transformed until the end of the storage. The dynamics of liquid production in the control (10 L-FW2) cannot be discussed, as it was not monitored along the experiment. However, it showed a much higher total liquid production than the two other trials, with a total of 27% of the initial FW mass transformed, compared to 15 and 20% for the 10 L-FW1 and 300 L-FW2 experiments, respectively (Table 2). These results show that FW degradation depends on both FW characteristics and the collection system.
The rest of the total mass loss was assumed to be due to gaseous emissions. These assumed losses through gaseous emissions became significant after two days of storage, i.e. after the rapid liquefaction step (Fig. 2C). Between day 0 and 2, 0.6 and 0.7% of the initial substrate mass was lost as gases in the small-scale (10 L-FW1) and the large-scale (300 L-FW2) storage, respectively. These gaseous losses reached 15 and 12% at the end of the experiment. The total gaseous emissions of the control (10 L-FW2) were equivalent to the large-scale ones, with a recorded proportion of 12% at the end of the experiment.
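Because only the final solid and the collected liquid are weighed, the gas-phase loss is obtained by difference. A minimal sketch; the masses in the usage note are illustrative values back-calculated from the large-scale percentages given in the text, not measured data.

```python
def mass_fractions_pct(m_initial, m_solid_final, m_liquid_collected):
    """Split the initial FW mass (%) into final solid, collected liquid and,
    by difference, losses attributed to gaseous emissions (incl. water vapour).
    """
    solid = 100.0 * m_solid_final / m_initial
    liquid = 100.0 * m_liquid_collected / m_initial
    gas = 100.0 - solid - liquid
    return solid, liquid, gas
```

For example, 150 kg of FW ending as 102 kg of solid with 30 kg of liquid collected (20%) leaves 12% attributed to gaseous losses, consistent with the large-scale figures above.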
The temperature of the substrate inside the large-scale reactors progressively increased during the first 3 days of the experiment (Fig. 3), starting at 13 °C and rising up to 32 °C. After having peaked, the temperature slightly decreased and stabilized at 27 ± 1.3 °C after day 5, while the ambient temperature was about 22 ± 3.3 °C. This rise in temperature is typically observed during the first stages of composting and is directly linked to microbial activity and the degradation of rapidly biodegradable compounds [25,26]. The temperature increase also correlates with the rapid liquefaction of FW. The latter is thus probably due to the degradation of the rapidly biodegradable fraction of FW during the first 2-3 days of storage and the growth of aerobic microorganisms. The temperature inside the small-scale reactors was not measured. However, considering the volume occupied by the FW at small scale compared to large scale (about 4.5 L and 177 L, respectively), it is assumed that the temperature of FW inside the 10 L-cells followed the ambient air temperature variations.
The density of the FW did not show the same trend in the two trials and was affected by the size of the reactors. No significant change in density was observed during storage at small scale: the initial densities of trials 1 and 2 were 0.717 and 0.771 kg/L respectively, and the final ones were 0.658 and 0.770 kg/L (Table 4). Conversely, at large scale the density of the substrate almost doubled, starting at 0.771 and ending at 1.340 kg/L. This difference is probably due to the weight of the large-scale FW pile combined with the leaching phenomena that must have weakened the physical structure of FW, leading to the compaction of the substrate.
Gaseous Emissions
Three main carbon-based compounds were detected during the storage: carbon dioxide, methane and ethanol.
In the first trial (10 L-FW1), a peak of methane was observed between day 1 and day 3, with a maximum methane production rate of 4.1 mgC/kgVS/d on day 2 (Fig. 4B). After that, methane production was steady and close to 0.1 mgC/kgVS/d. While methane emission was short-lived, CO2 emissions were observed throughout the experiment. During the first day, the CO2 production rate was about 3.8 mgC/kgVS/d and rapidly increased to reach a maximum value of 25.5 mgC/kgVS/d at day 2. The production rate then progressively decreased until the end of the experiment, reaching a value of 7.6 mgC/kgVS/d (Fig. 4A). This behavior typically indicates strong aerobic microbial activity: the exponential increase of the carbon dioxide production rate during the first days corresponds to the degradation of the easily biodegradable fraction of the substrate and to the aerobic growth of microorganisms. The production rate of carbon dioxide then decreases in proportion to the degradability of the remaining organic matter and to environmental conditions (i.e. temperature and moisture). This emission trend thus confirms that storage occurs under aerobic and anoxic conditions.
For the two scales of trial 2, peaks of methane and ethanol were observed simultaneously at the beginning of storage (Fig. 4B, C). At its maximum (day 1), the methane production rate was about 10.1 and 1.2 mgC/kgVS/d for the small and the large scale, respectively, while the ethanol production rate was about 0.35 and 0.05 mgC/kgVS/d, respectively. After day 2, each production rate decreased down to values lower than 0.1 mgC/kgVS/d. Compared to methane and ethanol, carbon dioxide production was quite low during the first day of storage, with average production rates of 3.7 and 0.6 mgC/kgVS/d for the small and the large scale, respectively (Fig. 4A). During the second day of storage, the CO2 production rate rapidly increased, reaching 13.9 and 1.9 mgC/kgVS/d at day 3. As in the first trial, after this initial rapid increase the carbon dioxide production rate slowly decreased, reaching 3.8 and 0.9 mgC/kgVS/d at day 10. Gaseous emission profiles of trial 2 were not recorded after 10 days because of a power shutdown during the weekend. However, it is assumed that the gas concentrations did not peak during the last 4 days of storage.
The carbon dioxide production rate profiles confirm the first observations made on the physical evolution. Indeed, the rise in internal temperature of the large-scale reactor follows the rise in carbon dioxide emission, which is directly linked to the microbial activity inside the substrate. However, it was also found that the amount of gases emitted per kg of organic matter initially put in the storage cell was lower at large scale than in the small-scale pilot. Such a deviation could be due to differences in the specific exchange surfaces of the two scales. The specific exchange surface is calculated as the ratio between the top surface of FW that is in contact with the air flushed into the storage cell and the volume of FW initially introduced in the cell. The large-scale cell has a much lower specific exchange surface than the small-scale one, with values of 2.1 m2/m3 FW and 29.5 m2/m3 FW, respectively. This means that in the large-scale cell oxygen transfer to the substrate was more limited, restraining aerobic degradation. The presence of methane during the first two days of storage shows that anaerobic conditions settled inside the storage cells for a short interval. Under anaerobic conditions, an alcoholic fermentation also started inside the storage cells and led to the production of ethanol through the degradation of the easily degradable sugars present in FW [27]. Since air was continuously flushed into the reactors, aerobic conditions may have settled back, allowing the growth of aerobic microorganisms and the production of carbon dioxide.
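The specific exchange surface defined above is simply the FW top surface divided by the FW volume. A small sketch; the cell dimensions used below are back-calculated from the reported ratios and FW volumes, so they are illustrative rather than measured values.

```python
def specific_exchange_surface(top_area_m2, fw_volume_m3):
    """Specific exchange surface: m^2 of air-contact surface per m^3 of FW."""
    return top_area_m2 / fw_volume_m3
```

With about 0.177 m3 of FW under a top surface of roughly 0.37 m2, the large-scale cell gives about 2.1 m2/m3, an order of magnitude below the small-scale value of 29.5 m2/m3.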
The overall carbon content lost through gaseous emissions was calculated and is summarized in Table 3. Depending on the experiment, the amount of volatilized carbon ranged from 0.5 to 2.1 gC/kg ww, with the lowest value found for the large-scale pilot. These values are very low compared to those usually measured on manure for the same storage period (42.5 gC/kg ww [28]) and represent about 0.7 and 2.3% of the carbon initially introduced in the storage cell. According to these results, most of the carbon initially present in FW remained in the liquid or solid fraction of the substrate after two weeks of storage, and should be available for a possible anaerobic digestion process. It also means that the mass losses attributed to gaseous emissions (Fig. 2C) are composed of components other than carbonaceous gases, such as water. For the trial 10 L-FW1, for example, 15.5% of the total solid mass was lost as gases and, of these 15.5%, only 0.9% of the total solid mass was emitted as carbon-based compounds.
Nitrogen-based emissions, such as ammonia and nitrous oxide, were also monitored during the storage period in Trial 1. However, nitrogen was only found as traces of ammonia in the acid traps, without any detected N2O emissions, which were therefore not analyzed in Trial 2. The total nitrogen loss was below 0.02 mgN during the entire storage period, regardless of the trial considered, showing that it can be neglected.
The assessment of gaseous emissions during FW storage highlighted the start of organic matter degradation through aerobic and anoxic conditions. In the following part, chemical evolution of the solid and liquid phases will be discussed to better understand the mechanisms of organic matter change during storage.
Solid Phase
Variations of the total solid amounts contained in the solid fraction are presented in Fig. 5A. In the first trial (10 L-FW1), the TS amount drastically decreased during the first week of storage and stabilized during the second week. At the end of storage, about 34 w% (w% = % of the initial wet weight) of the initial TS introduced in the reactor was lost. A similar behavior was observed for VS amount variations (data not shown), with a final VS loss of 33 w%. In the second trial at small scale (10 L-FW2), the proportions of TS and VS losses were equivalent to the first trial, with values of 32 and 30 w%, respectively. As a negligible part of the carbon was transformed into gases (cf. results on gaseous emissions), it was assumed that the lost TS and VS amounts were transferred to the liquid fraction. At large scale (300 L-FW2), about 21 and 23 w% of the initial TS and VS were lost, respectively, which is less than the results observed at small scale. This shows a limited hydrolysis performance, probably due to the limited aerobic activity at large scale (compared to small scale), as previously demonstrated by the gaseous emission results.
Nevertheless, no significant differences in TS contents were observed either over the course of the experiments or at the end of the storage periods at small scale, with average values and standard deviations of 169 ± 10 g/kg ww for FW1 and 178 ± 11 g/kg ww for FW2 (Fig. 5B). This means that for each experiment, the ratio between the total mass loss, which contains both water and total solids (Fig. 2A), and the TS amount loss (Fig. 5A) remained constant over the course of the experiment. At large scale, on the other hand, storage significantly affected TS content (F(1, 7) = 11.41, p < 0.05) and a rise in TS content was observed, from 170 g/kg ww at the beginning of the experiment to 192 g/kg ww after 15 days of storage (Tables 1, 4). In this case, the total mass loss recorded (32 w%) was much higher than the sole TS loss (21 w%), reflecting a high loss of water and leading to an increase in TS content. This drying of the solid phase is in agreement with the higher temperature increase and the higher compaction. In the same way, even though VS amounts decreased over the course of the experiment, the instantaneous VS concentration did not vary significantly during storage and remained constant at small and large scale. The average values and standard deviations were 148 ± 9, 157 ± 11 and 159 ± 12 g/kg ww for trials 10 L-FW1, 10 L-FW2 and 300 L-FW2, respectively. Lü et al. [17] observed different results when studying FW storage. After 12 days, TS and VS contents were significantly higher than at the beginning of the experiment, with respective values of 241.3 and 212.9 g/kg ww initially and 281.5 and 249.0 g/kg ww at the end of the experiment. In their experiment, the combination of the storage room temperature (35 °C) and the shredding of the substrate prior to storage may have eased the degradation of FW and consequently favored water evaporation, thereby concentrating the dry and organic matter.
When using paper bags for FW disposal at 22 °C and after 3 weeks of storage, Nilsson Påledal et al. [16] showed a drastic increase of TS content, while it remained stable when using plastic bags. The paper bags may have facilitated water evaporation, which was not the case with plastic bags.

The ammonia content in the solid fraction of FW was also monitored for each trial, as it gives useful information on the proportion of protein hydrolyzed during storage. The accumulation of hydrolyzed nitrogen was calculated over the course of the experiments and is given in Fig. 5C for each trial. At the beginning of the experiments, FW samples showed similar and low ammonia contents, with values of 0.04 and 0.05 gN/kg ww for FW1 and FW2, respectively. During the first three days of storage (trial 1, 10 L-FW1), the accumulated proportion of hydrolyzed nitrogen remained stable and close to 0%. After that, the proportion of hydrolyzed protein progressively and significantly increased to reach 9%. These results must be interpreted in the light of the carbon dioxide profiles. Indeed, during the first days of storage, microbial activity was increasing exponentially (Fig. 4A) and microorganisms thus needed nitrogen to grow. The soluble nitrogen, i.e. the ammonia produced by protein hydrolysis, was then rapidly consumed by microorganisms, resulting in a steady ammonia content in the solid. At day 3, microbial activity reached its maximum and started to progressively decrease until the end of the experiment. Hydrolyzed nitrogen then started to accumulate in the solid as microorganisms consumed less and less soluble nitrogen, revealing a higher nitrogen hydrolysis proportion. At the end of the second trial, hydrolysis efficiency was about 5 and 2% for the small-scale (10 L-FW2) and large-scale (300 L-FW2) experiments, respectively. The lower proportion of hydrolyzed proteins at the larger scale confirms the hypothesis of limited hydrolysis of organic matter in the 300 L-cells.
Regardless of the scale, the pH of the solid fraction was lower than 5 during the entire storage period (Fig. 5D), which explains the absence of ammonia in the gaseous fraction. Indeed, the pKa of NH4+/NH3 is 9.2 and most of the ammonia was in its soluble, protonated form (NH4+) at pH < 5. However, pH was not stable along the experiments. During the first trial, the pH of FW started at 4.69 and decreased during the first three days to reach a value of 3.88, indicating a production of organic acids due to FW hydrolysis. During the rest of the experiment, pH progressively increased to reach a value of 4.49, which could be due to the consumption of the organic acids previously produced. Because of sample heterogeneity (see section 'Physical Variations'), the sample at day 10 was probably slightly different from the others, as it showed a much higher pH value, and was not taken into account when discussing the pH variation dynamics. The second small-scale trial (10 L-FW2) showed acidic conditions at the end of the storage period similar to the first experiment (10 L-FW1), with a final pH of 4.41. Compared with the latter, the large-scale process had a much lower final pH value (3.81) than the two other experiments, probably reflecting a higher organic acid concentration after 2 weeks of storage due to lower microbial activity.
Liquid Phase
The chemical characteristics of the liquid fractions produced during storage were also monitored all along the experiments. In the first trial (10 L-FW1), the TS amount slightly increased during the first three days of storage (Fig. 6A). At day 1, about 2% of the initial TS amount of the FW had accumulated in the liquid fraction, and 3% at day 3. From day 3 until day 8, the TS amount remained stable and finally dropped to 2% at the end of the experiment. VS amounts followed the same trend along the storage (data not shown). The decrease in TS and VS amounts in the liquid fraction indicates a consumption of organic matter during the second week of storage. Trial 2, on the other hand, showed different trends. At small scale (10 L-FW2), the final TS and VS amounts of the liquid fraction represented 7 and 8% of the initial dry weight loaded in the storage cells, respectively. These values are much higher than those recorded during the first trial, confirming that the degradation of FW during storage strongly depends on the initial composition of FW. A previous work showed that geographical origin, source separation and seasonality significantly influence the physical-chemical characteristics of FW [21]. Consequently, storage may affect the physical-chemical characteristics of FW from different origins differently. These results cannot be compared to the large-scale values, as the experimental sampling was different. Because the leachate was periodically removed from the reactor at large scale, its organic matter was not subjected to possible consumption mechanisms. Indeed, the instantaneous TS amount in the liquid fraction at large scale did not vary significantly, as values ranged from 1.5 to 0.4% of the initial TS amount and reached a final value of 0.9%. This indicates that the mass of TS leaching out of the storage reactor was stable along the experiment. After two weeks of storage, about 6% of the initial dry weight loaded in the storage cells had leached out of the solid.
In terms of concentration (Fig. 6B), the TS content of the liquid fraction decreased progressively from 78 to 22 g/kg ww during the first trial (10 L-FW1). As the total solid mass was stable during the first storage week, this decrease in concentration is mostly due to a dilution phenomenon caused by water leaching. During the second week, the decrease in TS mass highlighted the consumption of the organic matter present in the liquid fraction, which intensified the decrease in TS concentration. In the case of trial 2 at small scale (10 L-FW2), the final TS content was 42 g/kg ww, which was higher than in trial 1, confirming the importance of FW composition on storage behavior.
Part of the organic matter present in the liquid fraction was composed of volatile fatty acids and assimilated compounds, mainly acetic, propionic, malic and lactic acids. In the first trial, the total VFA content progressively increased to reach 16 g/kg of initial VS (i.e. g/kgVSin) at day 6 and then started to decrease, reaching 3 g/kgVSin at the end of the experiment (Fig. 7A). The accumulation of VFA during the first week of storage confirmed the start of FW hydrolysis and acidification. In the light of the VS amount results, VFA were probably consumed by adapted microorganisms during the second week of storage. Depending on the scale, trial 2 showed very different trends, with the highest VFA content recorded at small scale, at a value of 29.4 g/kgVSin at the end of the experiment. In this case, FW hydrolysis seemed more efficient than in the first trial, probably because of the initial FW composition. At large scale (300 L-FW2), FW hydrolysis started similarly to the first trial (10 L-FW1) until day 2, but rapidly stopped, probably because of limited aerobic conditions that slowed down the development of hydrolytic microorganisms.
Special attention was paid to the variation of lactic acid content during storage, as it is proven to be inhibitory for anaerobic digestion when reaching concentrations of 6-8 g/L [29,30]. Overall, lactic acid content variations were similar to total VFA contents. The highest concentrations recorded were 5.9 g/kgVSin at day 6 for the first trial (10 L-FW1), 14.8 g/kgVSin at day 13 for experiment 2 at small scale (10 L-FW2) and 3.0 g/kgVSin at day 2 at large scale (300 L-FW2). The production of lactic acid during storage of FW is due to the initial presence of lactic acid bacteria (LAB) in the substrate [31] and was already reported in several studies [5,30,32]. This suggests that lactic acid and VFA are also present in the solid fraction of stored FW, which is consistent with the low pH of the solid measured at the end of all experiments. Moreover, a comparison between the pH of the solid fraction and the VFA and lactic acid profiles in the liquid fractions indicated that the organic acids of the solid most likely followed the same trend as the organic acids of the liquid fraction. Wang et al. [32] reported trends similar to what was found during the first trial regarding lactic acid production during FW storage, with a maximum concentration of 19 g/L found at day 8 over a 2-week storage period. Zhao et al. [30] reported a maximum concentration of 10 g/L after 1.5 days of storage for a total storage period of 2 days. Comparatively, the raw maximum lactic acid concentrations were 7.4 g/L at day 6, 7.4 g/L at day 14 and 4.6 g/L at day 9 for experiments 10 L-FW1, 10 L-FW2 and 300 L-FW2, respectively. The lower range of concentrations recorded here is probably due to differences in pre-treatment. Indeed, Wang et al. [32] and Zhao et al. [30] worked on 500 g and 100 g of shredded FW respectively, which might have eased the organic matter degradation process compared to FW without shredding pretreatment (i.e. as in our study).
However, the raw lactic acid concentrations reached at small scale can be considered potentially inhibitory for AD if the substrate/inoculum ratio is not adapted [29,30].
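The comparison above between raw liquid-phase concentrations (g/L) and loads normalized by the volatile solids initially stored (g/kgVSin) can be sketched with a simple conversion. The leachate volume and initial VS mass below are hypothetical placeholders, not measurements from this study; they are chosen only so the result falls in the reported range.

```python
def to_g_per_kg_vs_in(conc_g_per_L, liquid_volume_L, initial_vs_kg):
    """Convert a raw liquid-phase concentration (g/L) into a load
    normalized by the VS initially stored (g/kgVSin)."""
    return conc_g_per_L * liquid_volume_L / initial_vs_kg

# Hypothetical example: 7.4 g/L of lactic acid in 2 L of leachate,
# with 2.5 kg of VS initially stored in the cell.
load = to_g_per_kg_vs_in(7.4, 2.0, 2.5)  # 5.92 g/kgVSin
```

This illustrates why raw concentrations and VS-normalized loads can rank the trials differently: the normalization folds in both the leachate volume and the amount of substrate stored.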
At small scale, the variations of VFA content directly impacted the pH of the liquid fractions. For the first trial (10 L-FW1), pH rapidly dropped to 3.80 at day 1, compared to the initial FW pH of 4.69 (Tables 4, 5). After 5 stable days, pH increased at day 6, when the VFA content started to decrease, and reached a final value of 5.44. The small-scale second trial (10 L-FW2) had the lowest final pH (3.83) and also the highest VFA content. On the other hand, the large-scale experiment did not show a clear correlation between pH and VFA content, as pH was at its minimum (3.99) at day 9 while the VFA content was also at its minimum level. This was probably due to the presence of other organic acids not monitored in this study, such as malonic and oxalic acids, which are produced during FW composting, i.e. under aerobic conditions [33]. However, at the end of the storage period, pH increased and reached 4.81 (Table 5), indicating the probable consumption of acidic compounds in the liquid fraction.
In terms of total carbon, the compounds in the liquid fraction corresponded to 2.3, 6.3 and 0.8% of the carbon initially introduced in the storage cells for trials 10 L-FW1, 10 L-FW2 and 300 L-FW2, respectively (Table 5). In total (gas + liquid), the carbon loss represented 4.6, 8.0 and 1.5% of the carbon initially stored. According to these results, storage led to a carbon loss of less than 10% of the initial amount available for valorization, regardless of the scale and type of FW used. However, the carbon mass balance (Table 6) showed a lack of carbon recovery ranging between 18 and 30%, depending on the experiment. One reason is probably related to the composition of the FW at the end of storage. Considering the low pH values recorded, it is assumed that the solid fraction contained a large amount of VFA that might have volatilized during sample drying prior to VS content measurement. As the VFA content of the solid fraction was not measured directly, it was estimated from the VFA content of the liquid fraction. The results showed that VFA volatilization from the solid fraction represents about 2% of the mass balance gap; the remaining gap is partly attributed to measurement uncertainties, which account for about 12% of it.
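The carbon accounting described above (loss to gas and liquid, plus an unaccounted gap in the mass balance) can be sketched as follows. The carbon masses used are hypothetical and merely chosen to mirror the ~4.6% loss and ~20% gap reported; they are not data from the paper.

```python
def carbon_balance(c_initial, c_gas, c_liquid, c_solid_final):
    """Return (total carbon loss %, unaccounted gap %) relative to
    the carbon initially stored, all inputs in the same mass unit."""
    loss_pct = 100.0 * (c_gas + c_liquid) / c_initial
    gap_pct = 100.0 * (c_initial - c_gas - c_liquid - c_solid_final) / c_initial
    return loss_pct, gap_pct

# Hypothetical masses of carbon (g): 1000 g stored initially,
# 23 g lost as gas, 23 g leached, 754 g recovered in the final solid.
loss, gap = carbon_balance(c_initial=1000.0, c_gas=23.0,
                           c_liquid=23.0, c_solid_final=754.0)
# loss = 4.6 %, gap = 20.0 %
```

The gap term makes explicit why the balance does not close: any carbon that volatilized during drying, or was mis-measured, lands there rather than in the loss term.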
Accumulations of hydrolyzed nitrogen were also calculated for the liquid fraction, as defined in Eq. (2). Regardless of the experiment, the ammonia content was lower in the liquid fractions than in the solid fractions (Tables 4, 5). The proportion of accumulated hydrolyzed nitrogen (resulting from protein hydrolysis) that leached out of the solid represented less than 1% (Fig. 6C). The lowest value corresponded to the large-scale experiment (300 L-FW2), with a final accumulation of hydrolyzed nitrogen below 0.1%. Such a low percentage is consistent with the hypothesis of a lower hydrolysis efficiency at large scale compared to small scale. Indeed, at small scale the accumulations of hydrolyzed nitrogen were 0.3 and 0.9% for trial 1 (10 L-FW1) and trial 2 (10 L-FW2), respectively. Overall, the chemical assessment of the solid and liquid fractions showed significant differences correlated with the initial composition of the FW or with the storage scale. The last part of this work consisted in studying the impact of these physical-chemical variations on biomethane potentials.
Biomethane Potentials
According to the previous results, the amount of organic matter in the liquid fraction after two weeks of storage represents less than 10% of the organic matter contained in the FW before storage. For this reason, BMP tests were performed only on the solid fraction of FW.
In the first trial (10 L-FW1), BMP values varied depending on the storage duration (Fig. 8A). Initially, the BMP of FW1 was 62 ± 1 N L/kg ww (Table 1). During the first 3 days, BMP significantly decreased to 50 ± N L/kg ww (F(1, 4) = 82.31, p < 0.05), then increased again until day 8 to 72 ± 6 N L/kg ww (F(1, 4) = 149.77, p < 0.05). During the last week, BMP values slowly but significantly decreased to 59 ± 3 N L/kg ww (F(1, 4) = 129.53, p < 0.05). The relative standard deviation of the BMP results varied from 2 to 26%. The high values recorded are probably due to the heterogeneity of the samples inside a given storage cell: because the mass of substrate used to perform a BMP test is quite low (3-5 g), perfect homogeneity of the sample is not guaranteed even after grinding to 2 mm. The variations of the standard deviations thus reflect how anaerobic digestion can be affected by the intrinsic heterogeneity of the substrate. On the other hand, the variations of the BMP values along storage are strongly correlated with the VFA and lactic acid contents of the liquid fraction. Indeed, between days 3 and 6, the VFA and lactic acid contents were at their highest level, which corresponded to low BMP values. At day 8, VFA and lactic acid contents started to decrease while the BMP value was at its highest. Methane production was probably partially inhibited by the presence of VFA and, especially, lactic acid [5,30,32], which are assumed to be present in the solid. During the second week, the data were no longer correlated, as VFA and lactic acid contents were very low (around 3 g/kgVSin) while BMP did not increase as it did during the first week. According to Fig. 5B, the VS amount of the solid fraction was stable during the second week of the storage period, suggesting that the decrease in BMP is not correlated with a loss of VS, probably because only the slowly biodegradable material of the FW remained at the end of the experiment. In the second trial, regardless of the scale, BMP values significantly increased during storage, starting at 57 ± 4 N L/kg ww and ending at 71 ± 4 N L/kg ww and 77 ± 8 N L/kg ww at small (10 L-FW2) and large scale (300 L-FW2), respectively (F(1, 7) = 26.88, p < 0.05 and F(1, 7) = 15.24, p < 0.05). At small scale, the difference between final BMP values confirmed that the hydrolysis of FW2 was more efficient than that of FW1, leading to the presence of more easily biodegradable compounds.
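The significance statements quoted above (e.g. F(1, 4) = 82.31, p < 0.05) correspond to one-way ANOVAs comparing BMP replicates between two storage times, with degrees of freedom 1 and 4 as for two groups of triplicates. A self-contained sketch, using made-up triplicates rather than the study's data:

```python
def one_way_f(group_a, group_b):
    """F statistic of a two-group one-way ANOVA
    (df_between = 1, df_within = n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    # MS_between / MS_within; df_between is 1 for two groups
    return ss_between / (ss_within / (n_a + n_b - 2))

# Hypothetical BMP triplicates (N L/kg ww) at day 0 and day 3
f = one_way_f([62.0, 61.0, 63.0], [50.0, 49.0, 51.0])
# f = 216.0, far above F_crit(1, 4; 0.05) ≈ 7.71, so the drop is significant
```

In practice one would use a library routine (e.g. a one-way ANOVA from a statistics package) to also obtain the p-value; the manual form is shown only to make the F(1, 4) notation concrete.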
Compared to Nilsson Påledal et al. [16] and Lü et al. [17], the effect of storage on BMP was quite different. In the case of Nilsson Påledal et al. [16], the BMP values of stored FW remained stable along the experiment when storage occurred in plastic bags or at 6 °C. The results observed at 6 °C are related to the low biological activity at low temperature, which reduced organic matter degradation. At 22 °C in plastic bags, and according to the authors, the stability of BMP can be explained by a "pre-hydrolysis" step occurring under anaerobic conditions that led to a decrease in pH and an increase in VFA, and preserved the organic matter content. This phenomenon is close to the silage mechanisms used in agriculture to conserve the organic matter content of cereals [34]. At 22 °C, for FW stored in paper bags, BMP values significantly decreased from 135 to 75 NmL CH4/g of fresh FW. This trend is due to a loss of water and VS content in the samples, because the high water permeability of paper (compared to plastics or glass) eased water evaporation during storage. Lü et al. [17], on the other hand, found that storage increased BMP values from 61 to 159 NmL CH4/g of fresh FW. Because their experiment was performed in small-scale reactors (50 mL centrifuge tubes) with ground FW, the results are hardly comparable to the present study and the differences are attributed to scaling. Storage of FW at an equivalent scale was studied by Fisgativa et al. [20] and showed results consistent with the present study: FW was stored for 4 days at room temperature at four different oxygen concentrations (0, 5, 10 and 20%), and BMP values significantly decreased by 7% after 4 days of storage.
Fig. 8: Variation of (A) BMP during storage and (B) real volume of methane potentially produced after storage.
However, even if storage seemed to improve BMP values, the loss of solid mass during storage induced a loss in the maximum volume of methane that could initially have been produced. Figure 8B represents the real proportion of the methane volume that would have been produced if the solid fraction of stored FW had been digested without mass adjustment (unlike BMP tests, where the same mass is applied in each trial). The results showed that for the first trial (10 L-FW1), the lowest volumes of methane would be obtained with FW stored for 3 days or 2 weeks, as only 69 and 68% of the initial methane volume would have been produced, respectively. After one week of storage, 90% of the maximum volume of methane could still be produced with the corresponding FW. In the light of the standard deviations, the optimum storage period prior to AD is between 6 and 10 days. The small-scale experiment of the second trial (10 L-FW2) showed that about 75% of the initially available methane volume could be produced after two weeks of storage. The large-scale experiment (300 L-FW2) showed a methane recovery yield of 95% after storage, which was not significantly different from the volume of methane potentially available initially (F(1, 7) = 0.30, p > 0.05). This trial also displayed the lowest loss of initial carbon, because of a lower hydrolysis performance compared to the small-scale experiments. These results confirmed that small-scale FW storage leads to higher levels of aerobic degradation.
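The "real" recovery plotted in Fig. 8B amounts to weighting the measured BMP by the wet mass remaining after storage and comparing it to the initial potential. A minimal sketch, with hypothetical BMP and mass values (the 10 kg initial mass and the stored-mass figure are illustrative assumptions, not data from the study):

```python
def methane_recovery(bmp_0, mass_0, bmp_t, mass_t):
    """Fraction of the initially available methane volume that the
    stored solid could still deliver.
    bmp_* in N L CH4 per kg of wet weight, mass_* in kg wet weight."""
    return (bmp_t * mass_t) / (bmp_0 * mass_0)

# Hypothetical: BMP rises from 62 to 72 N L/kg ww while the wet mass
# drops from 10 kg to 7.75 kg over one week of storage.
frac = methane_recovery(62.0, 10.0, 72.0, 7.75)  # 0.90
```

This makes the trade-off explicit: a higher per-kilogram BMP after storage can still mean less total methane if too much mass has been lost, which is why the optimum storage time balances the two terms.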
Conclusions
Storage significantly influenced the physical-chemical characteristics of FW due to the spontaneous hydrolysis of the substrate. During storage, aerobic biological activity was recorded; it triggered the fast liquefaction of the FW and the release of gaseous compounds, and supported the production of organic acids, including lactic acid.
Depending on the substrate and the storage scale, hydrolysis was more or less effective and part of the carbon initially present in the FW was lost. Overall, the loss of initial carbon was less than 7% (Table 5) and reached its lowest value in the case of large-scale storage, with a total loss of 0.7%. The limited loss observed at large scale was caused by a rise in temperature, which dried out the FW and limited microbial activity. Nevertheless, considering the gaps in the mass balances and the measurement uncertainties, an acceptable carbon loss during storage should be around 10%.
The change in physical-chemical characteristics also influenced the BMP, because of the presence of lactic acid, which is inhibitory for methane production. If storage cannot be avoided, one week represents, under the conditions tested in this study, an optimal storage time, as the increase in BMP value compensates for the loss of organic matter. Large-scale storage methods, such as the bring-point model, are recommended as they exhibit a lower loss of subsequent methane volume potential compared to small-scale storage.
Beyond the impact of different storage methods on anaerobic digestion studied in this article, the gaseous emission data collected could be valuable for studying the environmental impact of biowaste collection systems using the life cycle assessment (LCA) method.
Reflections on the Changes of Court Painting and Painting Mounting of Song and Yuan Dynasties*
China's court painting and its decoration can be traced back to the Zhou Dynasty through surviving cultural relics. Owing to improvements in institutions and the economy, especially in the Sui and Tang Dynasties, there were already many mounted works of painting and calligraphy. Through the development of successive dynasties, court painting and decoration reached their peak in the Song and Yuan Dynasties. This article starts from the relationship between the development of court painting and the mounting of calligraphy and painting, teases out how the development of court painting drove corresponding developments in mounting, examines how court painting shaped Song and Yuan mounting, uncovers the deeper reasons connecting the two, and clarifies the context for the development of China's mounting trade.
INTRODUCTION
Court painting is an artistic creation activity in the service of the ruling class, and its emergence coincided with that of the regime. The period from the Pre-Qin era to the Six Dynasties was important for the emergence, establishment and development of court painting in China. Some systems and features formed by court painting during this period had a direct and profound impact on subsequent court painting and mounting design. Mr. Fu Baoshi mentioned in "The Difficulty of Mounting": "As a work of art, in addition to the artistic level of the picture determined by the artist, mounting is the most important point." [1] For a complete work of painting and calligraphy, the technical and aesthetic characteristics of mounting play a very important role; as a master of calligraphy and painting, Mr. Fu attached great importance to mounting. Court painting had long been part of daily court life, and Emperor Huizong of the Song Dynasty developed painting into one of the important national cultural policies of the Northern Song Dynasty. Because of the attention of the ruling class, the matching mounting was also included in the development agenda, and a new pattern emerged.
Regarding the art of mounting, there is a clear record that it started with Yu He's "On the Book Mounting", written during the Taishi period of the Song in the Southern Dynasties, which summarized the decorative styles and methods of painting and calligraphy during the Wei, Jin, and Northern and Southern Dynasties. Later, in the Tang Dynasty, Zhang Yanyuan also discussed in detail the importance and methods of decorating and protecting paintings and calligraphy in his "Famous Paintings of Past Dynasties". During the Song and Yuan Dynasties, the courts took a strong interest in calligraphy and painting, and as a result the mounting forms of calligraphy and painting kept changing. Mi Fu's "Book History" and "Painting History", as well as Zhou Mi's "Unreliable Talk", also mentioned decorative mounting and set certain formal requirements, describing in detail the two Song court styles of "Xuanhe Mounting" and "Shaoxing Imperial Palace Paintings and Calligraphy Style". The "Painting and Moulding Records" of the Yuan Dynasty described the process of configuring painting mountings, and on the basis of "Xuanhe Mounting" and "Shaoxing Imperial Palace Paintings and Calligraphy Style" a unique mounting style was formed. Since then, studies on painting and calligraphy mounting have multiplied through the dynasties.
MOUNTING
Archeological findings reveal the prototype of court painting. The "Picture of a Person Driving the Dragon" of the Warring States Period and the silk paintings buried in the coffin of Mawangdui Tomb No. 1 in Changsha, dating to the Western Han Dynasty, are the earliest court-style paintings on silk found so far. They took the form of the scroll paintings of later generations: the top of the painting is wrapped around a thin bamboo pole, a brown silk rope is suspended between the bamboo poles, and suspenders hang on both sides of the lower end, an arrangement convenient for unfolding and storage. The reason for such a mounting design is also related to the ancients' reverence for heaven and earth, a belief deeply rooted in their daily life. These pieces of evidence show the prominent identity of the owner of the tomb, from which it can be seen that court painting was very popular at that time and that a preliminary mounting frame already existed. The appearance of this frame played a role in the continuation of later painting and mounting forms; the scroll paintings of later generations evolved and developed on this basis.
In terms of its nature, mounting is the protection and re-creation of a painting, allowing its artistic conception to endure at a deeper level. The Wei, Jin, Southern and Northern Dynasties were a period of great cultural integration, and the first peak of Chinese painting was formed then. But at that time, the painting materials were mainly silks, whose biggest problem was very poor preservation: over time, the works would be seriously damaged. Zhang Yanyuan's comments in "Famous Paintings of Past Dynasties" on earlier mountings were not very favorable: "Every painting made is two feet and three inches wide. Its silk cannot be used for anything else. After a long time, it becomes unusable." [2] In order to facilitate long-term preservation, later generations improved painting mounting technology, especially in court painting, and formed their own decorative mounting styles.
The first chapter of the third volume of "Famous Paintings of the Past Dynasties", "On Mounting the Back Scroll", recorded the Tang Dynasty's improvement on the paintings of previous generations: "In the past dynasties, they mostly used miscellaneous treasures as decoration, which were easily damaged. Therefore, in the Zhenguan and Kaiyuan years, the books in the imperial storehouse were all decorated with symplocos paniculata as the main body, red sandalwood as the head, and belted with purple edges as the official mounting." [3] Documents confirm that since the Sui Dynasty there had been court-related paintings and gorgeously decorated mounting institutions, so it can be inferred that the royal family had a person in charge of supervising mounting. In the Tang Dynasty, the requirements for and attitude toward beautifying painting and calligraphy mounting went far beyond those of previous dynasties. According to the "Six Standards of Tang", "At that time, there were 5 decorators in the Chongwen Museum, 9 decorators in the Hongwen Museum, and 10 decorators in the Province of Secretaries ..." [4]. It can be seen that the relevant mounting institutions were clearly recorded in the Tang Dynasty. But unlike in the Song and Yuan Dynasties, these institutions did not specialize in painting and calligraphy mounting, but rather in book binding and beautification, and in the decoration and mounting of palace frescoes. Nevertheless, compared with previous generations, the Tang Dynasty greatly improved its mounting technology in terms of aesthetics and preservation. Court painting had entered the political sphere to record royal daily life, and this perspective also became an indirect cause of changes in mounting style and system. The Northern Song Dynasty was a dynasty that valued literary over military accomplishment.
Royal children had a deep understanding of literature and calligraphy, and especially high attainments in painting; Emperor Huizong of the Song Dynasty had a style of his own. Those below imitate what the upper class favors, which produced the far-reaching "Xuanhe Mounting" and led to the further development of court painting and mounting. The "Xuanhe Painting Book" records the painting conditions of the Northern Song court and the importance of painting and calligraphy mounting technology. It also stipulated a unified style for the painting, material, craftsmanship and proportions of court paintings, on which basis the "Xuanhe Mounting" that influenced later generations in China was produced. The stylistic innovation of "Xuanhe Mounting" was unprecedented: more mature handscroll and vertical-scroll mounting styles appeared, and these innovations had a profound impact on the mounting styles of the Southern Song, Yuan, Ming and Qing Dynasties. "Xuanhe Mounting" achieved the ultimate in style, material, process and visual experience. In "Records on Decoration", Zhou Jiazhou gave it a very high evaluation and explained this decoration style in detail. In long-term practice, the Song people summarized the aesthetic characteristics of court painting, such as neatness, symmetry, balance, proportion and harmony; these forms and characteristics gave people aesthetic pleasure. Decoration is a re-creation in the art of painting and calligraphy, and innovative mounting work is inseparable from a profound understanding of the painter's creative intentions.
Only by understanding the connotation, meaning and artistic conception of the work from the painter's perspective, and then thinking in terms of the aesthetics and imagery of the mounting, can the mounter achieve the perfect unity of calligraphy, painting and decorative mounting in the choice of materials, the handling of colors and the style of mounting, so as to realize the best decorative visual effect.
A detailed introduction to the "Shaoxing Imperial Palace Paintings and Calligraphy Style" is given in the sixth volume of Zhou Mi's "Unreliable Talk" of the Southern Song Dynasty. In the book, the decorating conditions, the forms of mounting and the materials used can be seen: "In the whole picture, the top covered three inches and the bottom covered two inches ... which are four and a half inches wide (the tallest one will be five inches) ..." [5] There were strict rules for the process of mounting and review, but the "Shaoxing Imperial Palace Paintings and Calligraphy Style" mounting method was very harmful to old paintings; in fact, compared with the "Xuanhe Mounting" of the Northern Song Dynasty, it sacrificed substance for appearance.
In the Yuan Dynasty, while still advocating the mounting technology of the Song Dynasty, people continued to strive for excellence. The "decoration style" of Wang Sishan of the Yuan Dynasty, recorded in "Paintings of the Liuru Jushi", was exactly the same as the well-known "Shaoxing Imperial Palace Paintings and Calligraphy Style" of Zhou Mi; Tao Zongyi's "Farming Records", Wen Zhenheng's "Records on Objects", Xia Wenyan's "The Catalogue of Paintings", etc., all recorded the "Shaoxing Imperial Palace Paintings and Calligraphy Style", which became the model for later generations.
Advances in Social Science, Education and Humanities Research, volume 416
A special organization was set up to manage and approve calligraphy and painting collections. Emperor Renzong of the Yuan Dynasty was an important representative collector of calligraphy and painting, under whom these arts became highly developed. He was close to Confucianism and attached great importance to Taoist culture, "sending messengers all over, seeking scriptures, and using jade to carve seals ..." [6]. According to the records, Emperor Renzong's emphasis on culture and art contributed to painting, mounting and artistic beautification. After the reunification of the Yuan Dynasty, although the Han system was inherited and promoted on the one hand, on the other hand there was serious ethnic discrimination, especially the exclusion of the Han people, which also brought certain obstacles to painting and calligraphy mounting. The painting styles of the Four Schools in the early Yuan Dynasty also represented the development trend of painting styles at that time. Since the Yuan Dynasty was a special historical environment of multi-ethnic integration, the various forms of expression and the intersection of various elements made the development of calligraphy and painting in the Yuan Dynasty more diverse to a certain extent. The vast territory of the Yuan Dynasty, its many nationalities, and its large number of genre painters created a variety of painting and calligraphy styles, and under the painters' influence, court paintings and mounting frames were greatly affected as well.
A. The influence of the Song and Yuan court painting mounting mechanism on future generations
At the end of the Tang Dynasty, attention began to be paid to the institutional organization of court painting. These institutional arrangements had a profound impact on the establishment of the court painting mounting system in the Song Dynasties and even the Yuan, Ming and Qing Dynasties. In the Southern Tang Dynasty, the "Hanlin Calligraphy Academy" and "Hanlin Painting Academy" were established; they were the most influential institutions of the time, with specialized officials appointed to manage them. At this stage, their main functions focused on the integration of politics, education and entertainment. The establishment of these institutions directly affected the evolution of Song and Yuan court painting institutions.
Building on the foundations laid since the Sui and Tang Dynasties, the court painting of the Song Dynasty was continuously improved. Among the driving forces, the concepts and preferences of the rulers played an important guiding role, especially the preferences and advocacy of Emperor Huizong and Emperor Gaozong. Dedicated agencies and officials were established for the mounting of court paintings, which provided an important reference for the mounting institutions of later generations, and the relevant functions and systems were basically continued after the Song Dynasties. Although relevant institutions were also set up in the Yuan Dynasty, there were still differences from the Song: the institutions were not fixed, and the treatment of mounting craftsmen varied greatly, lacking the balance of the Song Dynasties. For reasons of ethnic discrimination, the Yuan Dynasty abolished the system of "enlisting talents through the old civil service examination", and favor toward painters and artisans was decided by the mood and preferences of the ruling class. The painting and calligraphy mountings of the Ming and Qing Dynasties were far less influential and smaller in scale than those of the Song and Yuan Dynasties. Although they still imitated the Song and Yuan mounting mechanisms, in terms of scale, the artistry of painting, and the organization and distribution of manpower they were extremely chaotic and could not compare with the Song and Yuan Dynasties. Some painters even held official positions attached to the Imperial Guards. Obviously, the court painters of the Ming Dynasty were left with nowhere to go, and this phenomenon severely affected the development of calligraphy, painting and mounting.
B. The influence of Song and Yuan court painting and mounting style on later generations
"Xuanhe Mounting" and "Shaoxing Imperial Palace Paintings and Calligraphy Style", as the representative court mounting styles of the Song Dynasty, can be said to have reached a peak in terms of accessories, materials, processes and styles, and can be seen as a comprehensive reflection of Chinese mounting skill, artistry, materials, exquisiteness and thought. Zhou Jiazhou specially discussed "Xuanhe Mounting" in "Records on Decoration", which provides a documentary basis for knowing the process and style of the time. The two decoration styles of the Song Dynasty directly affected the mounting of court paintings of the Jin Dynasty: during the reign of Emperor Zhangzong of Jin, many painting styles and mounting techniques originated from "Xuanhe Mounting" and "Shaoxing Imperial Palace Paintings and Calligraphy Style". This continuation also had a profound impact on the painting and calligraphy decoration of the Ming and Qing Dynasties. In addition, during the Dade period of the Yuan Dynasty, famous paintings of past dynasties were specially sent to Suzhou and Hangzhou for re-mounting; in these former Song regions, the mounting techniques and forms were superior to those of the Yuan mounting institutions. The development of painting in the Song and Yuan Dynasties was relatively mature, which in turn demanded higher mounting technology and aesthetics, with special attention paid to the preservation of calligraphy and painting. Whether in silk, paper and other mounting materials, or in decorative and beautifying style, the work was very elegant.
However, because politics, culture and the economy disrupted the original ecology of development in the Yuan Dynasty, Chinese culture and art were restricted by political factors. Development was seriously interrupted, even though the styles of mounting remained rich; painting mounting lagged behind that of its predecessors, with limited room for innovation, while saving labor and materials in production became common. Therefore, after the Yuan Dynasty, the development of painting mounting began to fall behind. Although calligraphy and painting in the Ming and Qing Dynasties were also relatively prosperous, they were far from the peak of the Song and Yuan Dynasties. In "Records on Decoration", Zhou Jiazhou gave a comprehensive affirmation and evaluation of painting and calligraphy decoration since the Song Dynasty: "Every time I see a famous scroll in Song mounting, it is paper-edged and has not come off to this day. The silk edges used nowadays peel off within a few years. I truly dislike this. The ancients sought permanence in all matters, while people today prefer a moment of glory and act short-sightedly ..." [7] At the same time, he expressed his disappointment that Ming Dynasty mounting pursued only the beauty of form while neglecting the preservation of the calligraphy and painting itself. After the Ming Dynasty, both the form and the technology of mounting declined; especially in the Jiaqing period of the Qing Dynasty, there was no new form at all, and this change deserves deep reflection.
IV. CONCLUSION
The development of court painting through the ages played an important political and economic role in supporting the development of painting and calligraphy mounting. The Song and Yuan Dynasties were the peak period of Chinese court painting and mounting, built on the inheritance of court painting and mounting from previous dynasties. The mounting techniques and styles produced by the two Song Dynasties played an irreplaceable role in the decoration of court painting and calligraphy in later dynasties. Because Song coexisted with Liao, Jin, Mongolia, the Western Xia regime and other ethnic groups, the various cultural factors of the Song Dynasty had a profound impact on the surrounding minority regimes, which inherited the Song system to a large extent. The development and change of Song and Yuan painting and calligraphy mounting were shaped by many factors: historical inheritance, economic development, and the promotion of those in power. Sorting out the relationship between court painting and the mounting of painting and calligraphy is essential to the study of the changes in Song and Yuan mounting. These changes offered reference and inspiration to the later Ming and Qing Dynasties and to contemporary painting and calligraphy mounting, and at the same time play a certain reference role in the study of contemporary mounting art.
Experimental Investigation on Water Seepage through Transparent Synthetic Rough-Walled Fractures
One of the impacts of climate change nowadays is the increase in the frequency of high-intensity rainfall events alternating with extreme dry periods, which affect the components of the hydrologic cycle, such as runoff, infiltration, and aquifer recharge. Several experimental investigations and theoretical studies have demonstrated that infiltration flow in fractured media can develop along preferential pathways. However, the prediction of infiltration phenomena in fractured media still remains an open issue. This, together with erratic rainfall patterns due to climate change, affects the prediction of aquifer recharge and contaminant transport in fractured aquifers. The present work contributes to reducing this research gap by means of experimental investigation and forecast analysis, with a focus on the geometrical properties of single fractures and their influence on flow patterns. Several fracture surfaces based on different fractal dimensions, standard deviations, and mismatch lengths were designed using the SynFrac model and were generated by 3D printing technology. The results revealed that the fracture's fractal dimension has a significant impact on the number of flow paths, while the fracture inclination only increases the number of intermediate preferential channels and, hence, modifies the flow rate distribution over the fracture outlet. Additionally, the change in the inclination angle of the dry fracture from 55 to 65 degrees resulted in an 8% reduction in the mean width of the first flow path. A sensitivity analysis using an M5 tree indicates that there is a linear relationship between flow rate and the exponential form of the fractal dimension. The location of flow channels is a function of the fracture fractal dimension, and the influence of mismatch length on their location is negligible. Finally, an accurate prediction algorithm with a Nash value of 0.81 was developed using Wavelet transform in order to estimate the time series of periodic flow rates over the fracture outlet.
Introduction
Climate change has a strong impact on renewable groundwater resources; altered rainfall patterns due to climate change can reduce the ability to predict infiltration phenomena in soil and rock formations, giving rise to an erratic estimation of aquifer recharge and contaminant transport [1]. This can have serious implications on groundwater supplies, food production, and storm water runoff, as well as biodiversity and ecosystems.
Hirmas et al. [2] showed that macropores, in which water infiltrates mainly under the influence of gravity, play an important role in total water infiltration, affecting the regional and global water cycle.
According to Salve et al. [3] in semi-arid climate regimes, where a soil mantle covers the underlying rock, precipitation saturates the overlying soil before infiltration into the bed rock commences. This can take several weeks to months. Moreover, recent observations and predictions of extreme rainfall events associated with climate changes suggest the inevitability of prolonged flooding and, subsequently, infiltration events that can continue for months.
In fractured rocks, a variety of processes may affect infiltration and, thus, aquifer recharge, including gravity, capillarity, surface tension, viscosity, entrapped air, and biological activity [4]. Thus, the prediction of the flow rate and the pathways in rough fractures with different geometries is of great importance [5,6].
The simplest model of flow through a rock fracture is the parallel plate model [7,8]. This theory depicts the system as two parallel surfaces separated by a constant aperture; hence, the flow between the two surfaces is laminar [9]. Nevertheless, fluid flow in a real rock fracture bounded by two irregular surfaces is complex even under a laminar flow regime. The major factor causing deviation of predicted fracture flow behavior from the ideal parallel plate theory is the nature of non-parallel and non-smooth geometry of fracture surfaces [10]. Snow [7] found that neglecting the impact of roughness in parallel plate theory caused a significant overestimation of flow quantity through self-affine fractures.
Although a real fracture surface has a complex aperture and roughness, extensive studies have attempted to simulate the fracture geometry by means of statistical distributions and effective parameters. Generally, these studies used cubic law theory [11] based on the Reynolds equation to simulate single phase flow between two parallel plates.
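The parallel-plate idealization behind the cubic law reduces to Q = w·e³·Δp/(12·μ·L). A minimal numerical sketch of this relation (the function name and input values below are illustrative, not taken from the paper):

```python
# Minimal sketch of the cubic law for laminar flow between parallel plates:
# Q = (w * e**3 / (12 * mu)) * (dp / L). All values below are illustrative.

def cubic_law_flow_rate(aperture_m, width_m, mu_pa_s, dp_pa, length_m):
    """Volumetric flow rate [m^3/s] through a parallel-plate fracture."""
    transmissivity = width_m * aperture_m**3 / (12.0 * mu_pa_s)
    return transmissivity * dp_pa / length_m

# Example: 1 mm aperture, 0.2 m wide and 0.2 m long fracture,
# water (mu ~ 1.0e-3 Pa*s) under a 100 Pa pressure drop.
q = cubic_law_flow_rate(1.0e-3, 0.2, 1.0e-3, 100.0, 0.2)
print(f"Q = {q:.3e} m^3/s")
```

Because Q scales with the cube of the aperture, even a modest roughness-induced aperture reduction changes the predicted flow rate strongly, which is why the parallel-plate model overestimates flow through rough fractures.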
Kishida et al. [12] suggested that the applicability of the Reynolds equation is limited to low Re conditions and smooth fracture walls, so that the flow rate and local velocity components perpendicular to the nominal fracture plane are sufficiently low. In order to solve this problem, some researchers [13][14][15] have investigated the role of the aperture length and its spatial variation on the distribution of flow rates along the fracture. They reported that the characterization of fracture geometry, such as mismatch length and fractal dimension, led to the change in flow rates within the fracture, which can be visualized using a transparent fracture. Moreover, many numerical studies investigated the influence of tortuosity and roughness of rough-walled fractures on preferential flow pathways. Javadi et al. [16] evaluated the effect of roughness on the distribution of the Reynolds number over the fracture. Liu et al. [14] highlighted the main importance of fractal dimensions on equivalent permeability of a fracture, where neglecting it results in a 17.64-19.51% error in the predicted flow rates. Zou et al. [17] simulated velocity vectors in a rock fracture using COMSOL software, and showed that the equivalent permeability of rough-walled fractures for small values of Reynolds numbers depends on the local distribution of apertures. Their results state the significant impact of fractal dimensions on transmissivity as reported by Crandall et al. [13]. Zhang et al. [18] predicted a linear regression model between the permeability of rough walled fractures and fluid velocity, considering a high constant head between fracture inlet and outlet. They found that the heterogeneous behavior of preferential channels under normal distribution of the aperture is a function of aperture heterogeneity. Due to the complexity in experimental tests setup and implementation, general numerical methods were used to analyze the impact of fracture geometry on fluid flow [19,20]. 
They indicated the significant influence of fracture roughness on fracture permeability.
Extensive experimental studies have also attempted to visualize the flow seepage process in rough-walled fractures using a transparent replica. Su et al. [21] conducted the first experimental study of fracture inclinations on preferential flow pathways in transparent plates of natural granite. They highlighted the great influence of fracture angles and fracture geometries on capillary region and preferential flow path locations, respectively. Watanabe et al. [22] performed a numerical model and experimental test of fracture permeability and preferential flow under different confining pressures on granite samples. Their results illustrate the occupancy of preferential flow pathways for different value of fracture pressures. Noiriel et al. [9] numerically estimated a relationship between fracture dimensions and preferential flow channels during dissolution where fracture geometry varies through time. Several studies also experimentally visualized the influence of the aperture network on seepage flow in natural fractures [23][24][25][26]. Their results generally indicated that flow transport in an unaltered granite surface is mostly impacted by the aperture distribution.
Some recent studies implemented reconstructions of fracture surfaces with different fractal dimensions via 3D printing. Philips et al. [27] examined the effect of roughness on fluid flow properties by 3D printing seven self-affine fractures, each with controlled roughness distributions akin to natural fractures. They found that fracture contact area is a better permeability predictor than roughness when the mechanical aperture is below 20 µm.
Suzuki et al. [28] realized a complex fracture network by using a 3D printer. They obtained a tracer response curve from the flow experiment and applied a computational fluid dynamics (CFD) simulation based on the Navier-Stokes equations to model it, which showed consistency with the experimental result. In 2022 [29] they conducted thermal flow experiments using a 3D printed fracture network with known structural and physical properties. They estimated the flow channel surface area with an approximate Bayesian uncertainty quantification method. The estimated uncertainty bounds were in good agreement with the design of the 3D printed sample.
Yang et al. [30] designed and constructed physical models of fracture-vug media through 3D printing technology. By combining the LED (light-emitting diode) backlight visualization method (BVM) and the particle image velocimetry (PIV) technique, they carried out experiments of multiphase flow (i.e., oil-water and gas-oil) through the printed fracture-vug medium.
Yin et al. [31] experimentally investigated non-linear fluid flow through rough fractures. They employed 3D printing techniques and fractal theory to produce fractured specimens with desirable roughness. They found that the hydraulic aperture decreases with the fractal dimension and standard deviation, and that the surface roughness imposes an important impact on the nonlinear characteristics of fluid flow through fractures. Review of the previous literature reveals that most studies attempted to (1) evaluate preferential flow path distribution on a replica of a natural rock fracture with specified aperture geometry or (2) used 3D printing techniques to conduct flow or thermal experiments.
Though the 3D printing method has already been applied recently to prepare rock-like material-based specimens with different geometries, few studies up to now have focused on experiments to investigate and forecast infiltration phenomena in synthetic fractures, both in wet and dry conditions.
In this study, several fractures with different fractal dimensions, standard deviations, and mismatch lengths have been designed and printed using 3D printing technology. The influence of these parameters, together with fracture inclination on infiltration in both wet and dry fractures, has been investigated, specifically the outlet flow rates and preferential flow paths. Finally, the temporal variation in flow rate in five outlets from the beginning until the end of the experiment, as well as the total fracture flow rate, were predicted. The inflow from the tank and outflow in the outlets for every minute of the experiment are used as training data for this machine learning predictive model.
Parameters Affecting Fracture Flow
Natural fractures are extremely complicated systems, and there are many factors which must be correctly assessed upon parameter value selection. According to Brown [32], a simple mathematical model of rough-walled fractures requires the specification of only three main parameters: the fractal dimension, the roughness at a reference length scale, and a length scale describing the degree of mismatch between two fracture surfaces.
The mismatch length is a measure of rock fracture surface correlation. Correlation wavelengths describe the level of interaction/correlation between two fracture surfaces. A small wavelength indicates zero correlation; an increase in wavelength, therefore, relates to increases in correlation until a maximum degree of correlation is reached [33]. This correlation is often referred to as matching, which does not indicate a perfect fit between two fractures, as this is not indicative of a real situation. Brown [32] recognized this and renamed matching to a "mismatch length," above which the fractures were "matched" and below which the fractures behaved independently of each other.
Numerous researchers have reported on the fact that fracture surfaces are self-similar and can be analyzed by fractal geometry [34]. Since Mandelbrot's study [35], fractal geometry has been extensively applied to characterize the roughness of fracture surfaces and to correlate it with mechanical properties.
The surface roughness plays an important role and can lead to a significant departure from the parallel plate model. However, the measurement of the roughness of the fracture surface should include a description of both the topography of the individual surfaces and their degree of mismatch. The fractal dimension describes the scale dependence of fracture roughness.
Surface roughness can be defined as geometrical irregularities on a smooth reference surface. The classic definition of roughness, R, is given by the ratio of the real surface area, A_R, to the projected area, A_0 (the smooth reference surface): R = A_R / A_0. Fractal geometry can be applied to describe roughness because the real area of fracture surfaces has fractal characteristics, such as the fractal dimension, D_f, and the fractional part of the fractal dimension, D*. A theoretical relationship between R and D* has been proposed by Mandelbrot [35], in which η is a non-dimensional parameter smaller than one that is related to the size of the ruler used to measure the length (or area). The fractional part of the fractal dimension, D*, is defined as the fractal dimension, D_f, minus the value two of the Euclidian dimension of a smooth surface [36]. The fractal dimension, as the representative of the geometric variation of fractures, can also be expressed in terms of σ², the variance of fracture increments, and s, the distance from the base point s_0, where D_f represents the surface fractal dimension. The standard deviation is the mean-square value of the fracture surface deflections from the mean plane, and D_f is a measure of roughness deviation with respect to the parallel plate model. The fractal dimension for each fracture surface is a value between two and three, which predetermines the roughness of the fracture surface. Several studies have also attempted to correlate mechanical properties with the fractal dimension, among which was that of Charkaluk et al. [34], who presented experimental data showing positive, negative, and no correlation between fracture roughness and fractal dimension. Nagahama [37] deduced theoretically that a positive or negative correlation is possible, depending on some microstructural parameters.
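The variance-of-increments scaling mentioned above is also how a fractal dimension can be estimated in practice. The sketch below synthesizes a 1-D self-affine profile with a prescribed Hurst exponent H and recovers H from the scaling of increment standard deviations; the mapping D = 2 − H for a profile (D_f = 3 − H for a surface) is an assumption of this sketch, not a formula quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_affine_profile(n, hurst):
    """Spectral synthesis of a 1-D self-affine profile with Hurst exponent H.
    Amplitude spectrum ~ k**(-(2H+1)/2), i.e. power spectrum ~ k**-(2H+1)."""
    k = np.fft.rfftfreq(n)
    k[0] = 1.0                      # avoid division by zero at the DC term
    amp = k ** (-(2 * hurst + 1) / 2.0)
    amp[0] = 0.0                    # zero-mean profile
    phase = rng.uniform(0, 2 * np.pi, len(k))
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def estimate_hurst(z, lags):
    """Slope of log(std of increments) vs log(lag) approximates H."""
    s = [np.std(z[lag:] - z[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(s), 1)
    return slope

z = self_affine_profile(2**14, hurst=0.8)
H = estimate_hurst(z, lags=np.arange(1, 33))
print(f"estimated H = {H:.2f}, profile fractal dimension D = {2 - H:.2f}")
```

A rougher surface (larger D_f) corresponds to a smaller H, so the estimated slope directly quantifies how quickly surface deflections decorrelate with distance.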
Therefore, roughness, fractal dimension, and mismatch of fracture surfaces (walls) are key hydromechanical rock properties that influence (or control) the ways in which fluids permeate the structure.
According to experimental evidence, water infiltrates within the inclined single fracture, generating a flow channel network. The shape and evolution of the channel network are governed by the interplay between capillary, viscous, and gravitational forces. Thus, the distribution of flow paths across the fracture is a function of the relative magnitude of these forces, which can be characterized by the Bond number and the Capillary number [21]. The Bond number (Bo) is the ratio of the gravity force to the capillary force, in which ∆ρ (ML⁻³) is the density difference between the infiltrated water and air, e (L) is the fracture aperture, g (LT⁻²) is the gravity acceleration, β is the fracture inclination, σ (MT⁻²) is the water surface tension, and γ is the contact angle. The Capillary number (Ca) is the ratio between the viscous force and the capillary force, where µ (ML⁻¹T⁻¹) is the fluid viscosity and u (LT⁻¹) is the fluid velocity. As Bo and Ca increase, gravitational and viscous forces may become comparable with the capillary forces; preferential flow paths and, thus, the relative permeability may vary.
The equivalent permeability k (L²) of the preferential pathways along the single fracture can be related to the fractal dimension D_f [38] and the mismatch length λ [39]. Eker and Akin [38] describe the relation between D_f, λ, and k through a power law with constant coefficients a and b. Zambrano et al. [39] found that the permeability is proportional to the minimum mismatch length following a power-law relationship (depending on the fractal dimension), again with constant coefficients a' and b'. In other words, higher values of the fractal dimension imply a higher permeability for similar mismatch values. Therefore, the effective permeability of a fracture is a function of e, D_f, and λ, and can be determined using a piece-wise linear regression corresponding to different ranges of fracture inclinations (β).
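Power-law coefficients such as a and b above are typically recovered by linear regression in log-log space. A short sketch on synthetic data (the "true" coefficients and the noise level are made up for illustration):

```python
import numpy as np

# Sketch: recovering a power law k = a * lam**b from synthetic (lam, k)
# data by linear regression in log-log space. a_true and b_true are
# illustrative constants, not values from the paper.

rng = np.random.default_rng(1)
a_true, b_true = 2.0e-10, 1.5
lam = np.linspace(0.5, 5.0, 20)                 # mismatch lengths (arbitrary units)
k = a_true * lam**b_true * rng.lognormal(0.0, 0.02, lam.size)  # noisy "data"

# log k = log a + b * log lam  ->  degree-1 polynomial fit
b_fit, log_a_fit = np.polyfit(np.log(lam), np.log(k), 1)
a_fit = np.exp(log_a_fit)
print(f"fitted a = {a_fit:.2e}, b = {b_fit:.2f}")
```

The same log-log fit applies whether the abscissa is the mismatch length, as in Zambrano et al. [39], or an exponential function of the fractal dimension.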
Synthetic Fracture Design
SynFrac software enables the numerical synthesis of fracture surfaces and apertures within prescribed parameters [40]. Synthetic fractures with the same basic geometry but with different physical topographies were generated using SynFrac software. The following parameters of rock fractures were varied to create synthetic fractures: (1) mismatch length (λ), (2) standard deviation of surface heights (σ), (3) fractal dimension of fracture surface (D_f). Fractures with different fractal dimensions and the same standard deviations were generated; the statistical indexes of the printed fractures are presented in Table 1. Figure 1 illustrates the geometry and aperture distributions of different synthetic fractures generated by SynFrac. For all fractures, the dimensions of the top and bottom increments are different, and there is no zero-aperture area. Moreover, the aperture distribution follows a Gaussian function, and the mean value of the aperture length along the cross section remains constant. For each synthetic fracture generated, the fractured planes were built with a physical size of 200 mm × 200 mm. A 3D printer has been used to construct these profiles on two plastic sheets in order to represent the two fracture planes (Figure 2). The thickness of the printed fractures was 0.05 mm larger than the largest aperture, which reduced computational cost. The 3D printed fracture planes were then placed into formwork molds, whereby transparent epoxy resin was poured in order to create a transparent epoxy resin block with a single fracture. Upon setting of the resin, the two fracture surfaces were positioned together and sealed, watertight, down both sides.
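A Gaussian aperture field of the kind SynFrac produces can be imitated with spectral synthesis. The sketch below is not the SynFrac algorithm: the spectral-slope mapping (PSD exponent 2H + 2 with H = 3 − D_f) and all numeric values are assumptions for illustration.

```python
import numpy as np

# Sketch of a SynFrac-like aperture field: a correlated Gaussian random
# field on a 200 x 200 grid, rescaled to a target mean and standard
# deviation. The slope/fractal-dimension mapping is an assumed convention.

rng = np.random.default_rng(2)
n, mean_ap, std_ap, d_f = 200, 1.0, 0.3, 2.2    # grid, mm, mm, fractal dim.

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1.0                                   # avoid division by zero
psd_slope = 8.0 - 2.0 * d_f                     # 2H + 2 with H = 3 - D_f
amp = k ** (-psd_slope / 2.0)                   # amplitude = sqrt(PSD)
amp[0, 0] = 0.0                                 # zero-mean field

noise = np.fft.fft2(rng.standard_normal((n, n)))
field = np.real(np.fft.ifft2(noise * amp))
field = (field - field.mean()) / field.std()    # standardize
aperture = mean_ap + std_ap * field             # Gaussian aperture field [mm]

print(f"mean = {aperture.mean():.2f} mm, std = {aperture.std():.2f} mm")
```

Rescaling the standardized field reproduces the Gaussian aperture statistics stated above (prescribed mean and standard deviation) while the spectral slope controls how rough the field looks, i.e. the role played by D_f.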
Experimental Setup
The flow experiment consists of letting water flow through each single fracture by means of a hydraulic system, which is composed of an upstream and a downstream tank. The upstream tank supplying water is a Mariotte-type water tank, which gives a flow rate at a pressure depending on the difference in height (h) between the inlet and the outlet of the syphon, and will allow the head to remain constant as the water level in the bottle drops. Moreover, it allows the inlet pressure to be varied by adjusting the height (h) of the Mariotte bottle. The 20-litre Mariotte-type water tank contains dyed water, which traces the flow paths through the fracture. For the visualization of preferential flow paths through fractures, a normal digital camera was used to record videos of the experiments.
The fracture inclinations (β) in the experimental setup were set as 45°, 55°, and 65° (Figure 3). The flow experiment was performed in both wet and dry fracture conditions. The downstream tank is divided into five different sections by means of graduated rulers, in order to measure the hydraulic head variation. The flow volume in the five outlets was measured using one-minute time steps. The experiment was maintained until one of the outlet sections was filled with water. The upper part of the synthetic fracture was overlaid by a layer of natural soil mixed with gravel of 0.1 m thickness. Water flowed through this upper layer by means of five drippers, and then from the soil layer distributed to the fracture. In order to trace water movement in the dry condition, two molded resin blocks were dried about 50 min before the experiment. Then, the experiment was performed in wet conditions, in which the resin surface was washed in water. First, the surface angles were set at 45° for about 60 min, then the surfaces were dried and the experiment was repeated at 55° and 65°.
In order to trace the temporal location of dyed water, about sixty photos were taken by a digital camera for each inclined fracture, in both dry and wet initial conditions. An image processing technique was performed using the edge function with a "canny" filter built into MATLAB to extract the region of dyed water from the recorded images.
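The dye-extraction step can be illustrated without MATLAB. The sketch below uses a plain intensity threshold plus a gradient magnitude as a stand-in for the Canny edge filter, on a synthetic grayscale frame that imitates a bright dyed channel; all sizes and thresholds are illustrative.

```python
import numpy as np

# Sketch of extracting the dyed-water region from a camera frame.
# A threshold + gradient magnitude stands in for MATLAB's edge(..., 'canny').

rng = np.random.default_rng(3)
frame = np.zeros((120, 120))
frame[30:90, 50:70] = 1.0                        # dyed flow channel (bright)
frame += 0.05 * rng.standard_normal(frame.shape) # camera noise

mask = frame > 0.5                               # binary dye region
gy, gx = np.gradient(mask.astype(float))
edges = np.hypot(gx, gy) > 0                     # boundary pixels of region

rows, cols = np.nonzero(mask)
print(f"dye region spans rows {rows.min()}-{rows.max()}, "
      f"columns {cols.min()}-{cols.max()}, edge pixels = {int(edges.sum())}")
```

On real frames a proper Canny detector (with Gaussian smoothing and hysteresis thresholds) is more robust to uneven lighting, but the output is the same kind of binary map from which channel widths and positions can be measured pixel by pixel.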
Flow Rate Prediction
The relation between many independent inputs and outputs can be described using a machine learning algorithm. Among the different data mining algorithms, the tree algorithms are novel techniques which have been used to predict nonlinear processes [41]. The tree model partitions a complex problem into many sub-spaces and assigns regression relationships to them. In the tree model, nodes and leaves denote a selection and a decision, respectively. Among the tree algorithms, the M5 tree has been an efficient technique for the estimation of experimental results [42]. The M5 tree generates many linear relationships for different ranges of input data by dividing the input space into many sub-spaces.
The aim of the M5 tree is to minimize the cumulative error from the top to the leaf of the tree. The dividing process is terminated if the value of SDR varies only slightly [41]: SDR = sd(P) − Σᵢ (|Pᵢ|/|P|) · sd(Pᵢ), where P is the set of instances of node i, Pᵢ is the new instance set after dividing the node, and sd is the standard deviation. After constructing the tree, a linear relationship is fitted to each leaf. In order to avoid the overfitting of the tree for unseen data, the tree is pruned from bottom to root [43]. In this study, the M5 tree in WEKA [44] software is trained with experimental data to analyze the sensitivity of the flow rate along the fracture outlet to the fracture parameters. The input parameters are time, fracture inclination, fractal dimension, and mismatch length of aperture distribution. The sensitivity of different inputs to the flow rate is determined using the aforementioned analytical model.
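The standard-deviation-reduction criterion that drives M5 splitting can be sketched in a few lines; the toy data below (a step function) is illustrative:

```python
import numpy as np

# Sketch of the SDR criterion used for M5 tree splitting:
# SDR = sd(P) - sum(|P_i|/|P| * sd(P_i)) over subsets of a candidate split.

def sdr(parent, subsets):
    n = len(parent)
    return np.std(parent) - sum(len(s) / n * np.std(s) for s in subsets)

def best_threshold(x, y):
    """Scan candidate thresholds on one input; keep the max-SDR split."""
    best = (None, -np.inf)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        gain = sdr(y, [left, right])
        if gain > best[1]:
            best = (t, gain)
    return best

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9])   # step between x = 3 and x = 4
t, gain = best_threshold(x, y)
print(f"best split at x <= {t}, SDR = {gain:.3f}")
```

The split with the largest SDR (here at the step, x ≤ 3) minimizes the residual spread inside the child nodes, which is exactly why M5 divides the input space there before fitting linear models to the leaves.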
Wavelet analyses have been used to forecast the time series of different natural processes. The Wavelet transform uses a family of scaled and shifted functions ϕ_{x,y}(t) to split a time series into many scales, where t is time and x and y are the scale parameter and position, respectively. The coefficients of the input a(t), determined by using ϕ_{x,y}, can be expressed through a continuous function K_{x,y} [44]. The temporal variation of flow rate over the fracture outlet is non-continuous, and hence, a discrete form of the transform was defined [45,46], where n is a constant value.
In this study, a Wavelet transform model was developed in MATLAB to predict the flow rate time series along the fracture outlet. The Laplacian operator, with ‖x‖² = xᵀx and a constant c determined by trial and error, is selected as a capable function for predicting the flow rate in the fracture [46]. The performance of the developed Wavelet algorithm is judged using the Nash-Sutcliffe index [47], where Q_experimental represents the observed value of flow rate from the experimental results, Q̄_predicted is the average value of flow rate predicted by the Wavelet algorithm, Q_predicted is the predicted value of the Wavelet algorithm, and n is the sample number. Additionally, in order to measure the error between observed and predicted values, the root mean squared error (RMSE) is utilized: RMSE = √((1/n) Σ (Q_experimental − Q_predicted)²).
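Both goodness-of-fit scores can be sketched directly. Note that this sketch uses the common Nash-Sutcliffe form normalized by the observed mean, which may differ slightly from the paper's exact expression; the observed and predicted series below are made up for illustration.

```python
import numpy as np

# Sketch of the two skill measures used to judge the Wavelet model.
# NSE here is the standard form normalized by the observed mean.

def nash_sutcliffe(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1.0 - np.sum((observed - predicted) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((observed - predicted) ** 2))

q_obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # illustrative flow rates
q_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(f"NSE = {nash_sutcliffe(q_obs, q_pred):.3f}, "
      f"RMSE = {rmse(q_obs, q_pred):.3f}")
```

An NSE of 1 is a perfect match and 0 means the model is no better than predicting the observed mean, so the reported Nash value of 0.81 indicates a substantially skillful forecast.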
Flow Rate along the Fracture Outlets
In order to evaluate how fracture geometry and inclination affect flow distribution, the total flow volumes in each of the five sections of the downstream outlet tank were measured. The volume of discharged water from Fracture-1 for different inclinations is illustrated in Figure 4.
As shown in Figure 4a, the second section (Q2) has the maximum discharge rate. The first section (Q1) is empty. A similar trend is observed for inclinations 55° and 65° (see Figure 4b,c).
Although the influence of fracture inclination on Q1 and Q2 is negligible, the values of Q5 and Q3 become closer with increasing inclination angles. Additionally, Q4 does not show a clear trend regarding inclination. This variation in flow rates can be attributed to the diversity of preferential flow paths corresponding to large gravity forces for high inclination angles. However, there is no direct relationship between the inclination angles and the total time to fill section Q5. This can be attributed to the change in discharge rates of all five sections. Many preferential flow paths were created and subsequently disappeared during the experiment; hence, the discharge rate in the outlets is not uniform and has a nonlinear trend with time.
The distribution of flow rates over the outlet sections is more sensitive to the change in fractal dimension than to the fracture inclination angle. This can be justified by the fact that the fractal dimension, as a representative parameter of the geometric distribution of the fracture aperture, changed preferential flow paths significantly, whereas the variation of the inclination angle changed the values of the discharge rates.

The effect of mismatch length on the outlet volume can be analyzed by comparing Fracture-1 (Figure 4a) and Fracture-4 (Figure 5b), which have the same inclination angle (45°) and fractal dimension (D_f = 2.2) but different mismatch length values. According to Figures 4a and 5b, section Q2 presents the maximum flow rates for each of the two different mismatch lengths. In addition, the mean value of Q2 decreased by about 12%. A similar trend could be observed for all outlet sections except Q4. This was confirmed by the fact that a reduction in mismatch length led to a decrease in mean aperture and in the corresponding fracture equivalent permeability, whereas the preferential channel locations remained constant.
The results of the flow rate distribution through the five sections confirm the findings of Li et al. [15], who carried out investigations of flow paths in concrete self-affine fractures. Figure 6 shows a comparison of the maximum discharge rates (Q2 and Q3) between two samples of Fracture-1. Though there is a high correlation (R > 0.9) between the two samples, the expected error is high (RMSE = 38.65), especially at high discharge rates. It is clear from Figure 6 that the epoxy resin samples tend to underestimate maximum discharge rates. The viscosity of epoxy resin is lower than the viscosity of concrete, and hence, the surface is more consistent with plastic mold geometry.
Additionally, according to the difference in properties of resin and concrete, the surface tension and corresponding capillary forces are different.
Preferential Flow Paths in Dry Fractures
In this section, the flow movement of four inclined single fractures was traced and analyzed.
The image of the first finger in the unsaturated Fracture-1 for β = 45° is shown in Figure 7. As seen in the figure, water first starts to distribute near the inlets, and then (after 4 s) a capillary zone forms and the first flow path is created. A comparison between the flow path in the dry fracture and the aperture distribution indicates that the first capillary zones are distributed around the area with apertures less than 1.8 mm. In these areas, capillary forces are enough to overcome the gravitational force, whereas the thin flow channels are observed near the largest apertures. Thus, the width of these capillary zones is influenced by the ratio of gravity force to capillary force, which is a function of fracture inclination. The Bond number (Bo), corresponding to the ratio of gravity force to capillary force, considering the epoxy resin surface tension (0.066 N/m), contact angle (63°), viscosity of dyed water (1.01 × 10⁻³ kg/(m·s)), and fracture inclination angle (β = 45°), is approximately 0.7488.
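The quoted Bond number can be recovered under one plausible formulation of the gravity-to-capillary ratio, taking the ~1.8 mm aperture as the characteristic length. The exact definition used in the study is not restated in the text, so the form below is an assumption; it happens to match the quoted value to within rounding. Note that viscosity does not enter a Bond number, which compares only gravitational and capillary forces:

```python
import math

def bond_number(rho, g, aperture, beta_deg, sigma, theta_deg):
    """Ratio of gravity to capillary force for an inclined fracture.

    Assumed form: Bo = rho * g * e^2 * sin(beta) / (sigma * cos(theta)),
    with the aperture e as the characteristic length. This reproduces the
    value quoted in the text, but the study's exact definition may differ.
    """
    beta = math.radians(beta_deg)
    theta = math.radians(theta_deg)
    return rho * g * aperture ** 2 * math.sin(beta) / (sigma * math.cos(theta))

# Water density ~1000 kg/m^3, aperture 1.8 mm, values quoted in the text.
bo = bond_number(rho=1000.0, g=9.81, aperture=1.8e-3,
                 beta_deg=45.0, sigma=0.066, theta_deg=63.0)
print(round(bo, 3))  # ~0.75, close to the quoted 0.7488
```

Bo below 1 is consistent with the observation that capillary forces can hold water in the small-aperture regions against gravity.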
In order to reach the steady condition in flow paths, an experiment was performed for 1 h with a constant head; the results show that the variation of thin preferential channels remains generally unsteady. Many thin preferential paths were created during the first 10 min of the experiment, and then disappeared in the next 10 min. This creation of preferential channels during the first hour of the experiment is periodic, and has an effect on the volume of the five outlets (Figure 4).
The image of the first flow path for different fracture angles when the water flow has reached the outlet is illustrated in Figure 8. A pixel-by-pixel comparison is performed in order to evaluate the impact of fracture inclination on the shape and travel time of the first flow path.
As highlighted in Figure 8, the variation in inclination angle from 55° to 65° results in an 8% reduction in the mean width of the first flow path, whereas the number of thin preferential channels increases. As shown in Figure 8, the total area of the capillary zone is reduced by about 7.5%, while the reduction in width of the thin channels is insignificant.
The reduction in the capillary area corresponds to a 28% increase in the ratio of gravity to capillary force. Moreover, the flow path is divided into two small channels near the outlet. This variation in flow path confirms the change in outlet volume with increasing inclination angle, as demonstrated in Figure 3. Therefore, the flow rate of outlet-2 is split between outlet-2 and outlet-3.
Effect of Fractal Dimension and Mismatch Length on Preferential Flow Path
In this section, preferential flow pathways in the fractures with different fractal dimensions, different mismatch lengths, and a constant inclination angle (β = 45°) under wet initial conditions are investigated, and the temporal variation of the flow paths is measured.
In Fracture-1, about 20 s after the occurrence of the first flow path, two new flow paths occur at the left and right sides (Figure 9a). For Fracture-2, there are many separated preferential flow paths trapped in the wide aperture region (Figure 9b). In Fracture-3, a capillary zone is formed in the presence of small aperture areas at the top and middle parts of the fracture. Successively, about 5 s after the formation of the first flow path, three additional flow paths start to form (Figure 9c). In Fracture-4, the distribution of flow paths is relatively heterogeneous and the flow paths are divided into many thin channels (Figure 9d).

As is shown, the flow path distribution for large values of fractal dimension is homogenous; nevertheless, the width of the flow paths and preferential islands can be related to aperture dimension. Figure 10 shows the flow path along Fracture-4 for different cross sections highlighted in Figure 9d. The maximum and minimum width of the flow path occur near the smallest and largest apertures, respectively. Nevertheless, the maximum value of the flow rate is observed in the portion of the flow path close to the largest apertures (red arrow). Corresponding to large apertures, flow separation occurs; hence, the width of the flow path is small. The thin preferential channels show generally unsteady characteristics (Table 2). A comparison of flow paths at different times (see Table 3) indicates that the number of intermediate channels increased through time, and that the flow rate distribution changed through the five outlets. The variation in flow rate is observed in Figure 4, where the slope of the straight lines related to the volume of discharged water varies with the inclinations.
Fracture | Type                 | α = 45° | α = 55° | α = 65°
1        | Flow path            | 1       | 1       | 1
1        | Intermediate channel | 3       | 3       | 4
2        | Flow path            | 1       | 2       | 2
2        | Intermediate channel | 2       | 2       | 3
3        | Flow path            | 2       | 2       | 2
3        | Intermediate channel | 3       | 3       | 4
4        | Flow path            | 3       | 4       | 4
4        | Intermediate channel | 4       | 3       | 6

The temporal variation of the intermediate channels (increase or decrease) between 10 and 60 min for different fractures and inclinations is presented in Table 4. It clearly illustrates that, by increasing the fracture inclination and fractal dimension, the number of intermediate flow paths increases, and that the effect of fractal dimension (see Figure 5) is less than that of inclination (Table 4). Thus, the temporal variation of the flow rate over the fracture outlet is nonlinear and is, hence, predicted by a nonlinear regression algorithm.
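The counts tabulated above can be checked programmatically for the trend claimed in the text: for every fracture, the 65° counts never fall below the 45° counts, and the fracture with the largest fractal dimension (Fracture-4) shows the most channels overall. The values below are transcribed from the table:

```python
# Per fracture: (flow paths, intermediate channels) at 45°, 55°, 65°.
counts = {
    1: {"flow": (1, 1, 1), "intermediate": (3, 3, 4)},
    2: {"flow": (1, 2, 2), "intermediate": (2, 2, 3)},
    3: {"flow": (2, 2, 2), "intermediate": (3, 3, 4)},
    4: {"flow": (3, 4, 4), "intermediate": (4, 3, 6)},
}

for fracture, c in counts.items():
    # The 65° count never falls below the 45° count for either type.
    assert c["flow"][2] >= c["flow"][0]
    assert c["intermediate"][2] >= c["intermediate"][0]

# Total channel count per fracture; Fracture-4 has the most.
totals = {f: sum(c["flow"]) + sum(c["intermediate"]) for f, c in counts.items()}
print(max(totals, key=totals.get))  # → 4
```

Note that the trend is not strictly monotonic (Fracture-4 drops from 4 to 3 intermediate channels between 45° and 55°), which is consistent with the unsteady channel behaviour described above.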
Prediction of Total Flow Rate
The input samples used for training the M5 tree are the fractal dimension, standard deviation, and mismatch length of the fracture at different inclinations. These parameters are used to predict the temporal variation of the flow rate in the five outlets as outputs of the M5 tree. In order to remove the scale impact, non-dimensional forms of these inputs were considered for the construction of the M5 tree (see Table 5). As demonstrated in Equation (8), the permeability of the preferential channels and their locations are functions of e^(D_f) and sin β, respectively. The M5 tree was generated using 75% of the experimental dataset and was validated with the remaining 25% of samples. Due to the small number of training samples, the hold-out technique was used to split the testing and training database [48]. The range of parameters used for training the M5 tree is presented in Table 4. Several parameter combinations were tested, and the formula that best matched the experimental results was selected. The linear relations generated for the tree pruned to two nodes and three leaves (see Table 5) indicate that the total flow volume increases with increasing fractal dimension. An RMSE of 3.54 mL/min suggests that the relations generated by the tree model are accurate. The sensitivity analyses of the impact of λ, sin β, and e^(D_f) on the total flow volume of the fracture (V) showed that there is a meaningful relation between V and e^(D_f), and that the influence of the other parameters is negligible. e^(D_f), as a function of fractal dimension, is the most significant characterization of fracture geometry affecting V in all three relations. The minimum variation in V through time corresponds to e^(D_f) ≤ 9.43.
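The core idea of the M5 procedure (split the input space at thresholds, then fit a linear model in each leaf) can be sketched in miniature. Everything below is illustrative: synthetic data, a single split on one feature standing in for e^(D_f), and a regime change placed at the 9.43 threshold mentioned above. It is not the study's trained tree:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b (closed form, one feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def m5_one_split(xs, ys, threshold):
    """Minimal M5-style tree: one split, a linear model in each leaf."""
    left = [(x, y) for x, y in zip(xs, ys) if x <= threshold]
    right = [(x, y) for x, y in zip(xs, ys) if x > threshold]
    left_model = linear_fit(*zip(*left))
    right_model = linear_fit(*zip(*right))

    def predict(x):
        a, b = left_model if x <= threshold else right_model
        return a * x + b

    return predict

# Synthetic data: flow volume grows slowly below e^(D_f) ≈ 9.43 and
# faster above it, mirroring the regime change reported in the text.
xs = [7.0, 8.0, 9.0, 9.4, 10.0, 11.0, 12.0, 13.0]
ys = [1.0, 1.1, 1.2, 1.25, 3.0, 5.0, 7.0, 9.0]

predict = m5_one_split(xs, ys, threshold=9.43)
print(round(predict(8.5), 2), round(predict(12.5), 2))
```

A full M5 tree chooses its split thresholds automatically (by minimizing the standard deviation of the targets in each branch) and prunes afterwards; the hand-set threshold here only illustrates the leaf-wise linear structure behind the "two nodes and three leaves" description.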
Prediction of Flow Rate Time Series
In order to find the best performance of the Wavelet model, different Wavelet functions were analyzed, and finally, the Laplacian operator function was selected. The Wavelet algorithm was trained with the flow rate time series of Fracture-1, Fracture-2, and Fracture-3, and was validated with Fracture-4. The optimal number of neurons was determined by trial and error to be four. The training data of the flow rate, with one-second time steps, were decomposed using four layers, including d1, d2, d3, and d4 (see Figure 11). At the decomposition step, an unscaled noise structure with a heuristic threshold technique was selected.
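The four-level decomposition/reconstruction structure can be sketched with the Haar wavelet, using only the standard library. This is a structural illustration only: the study used a different mother wavelet (a Laplacian-based function) and applied a heuristic threshold to the detail coefficients d1 through d4 before reconstruction, which is omitted here:

```python
import math

def haar_step(signal):
    """One Haar analysis step: approximation and detail coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def decompose(signal, levels=4):
    """Multilevel decomposition: returns [a4, d4, d3, d2, d1]."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return [approx] + details[::-1]

def reconstruct(coeffs):
    """Invert decompose(); exact for untouched Haar coefficients."""
    s = math.sqrt(2.0)
    approx = coeffs[0]
    for d in coeffs[1:]:
        out = []
        for a, dd in zip(approx, d):
            out += [(a + dd) / s, (a - dd) / s]
        approx = out
    return approx

signal = [float(i % 5) for i in range(16)]  # toy flow-rate series (len 16)
coeffs = decompose(signal, levels=4)
restored = reconstruct(coeffs)
print(max(abs(a - b) for a, b in zip(signal, restored)) < 1e-9)  # True
```

Thresholding (zeroing or shrinking) the small detail coefficients in `coeffs[1:]` before calling `reconstruct` would implement the denoising step described in the text.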
The time series of the predicted flow rate by Wavelet analysis and observed experimental data in the validation step (about 25% of samples) is illustrated in Figure 12. As shown in Figure 12, the estimated values of the Wavelet model, with a Nash value of 0.81 and RMSE = 3.21, are concentrated near the best-fitting line for the observed data (R = 1). Although the negative value of the mean error (−3.72) indicates that the predicted values are relatively underestimated (see Figure 12), the error between the predicted peak flow rate and observed peak value is negligible.
This underestimation of the flow rate can be attributed to the impact of fractal dimension and mismatch length, which was neglected in the training of the Wavelet model. As mentioned above, the Wavelet model was trained using the flow rate time series of the three fractures, and was then implemented for the prediction of Fracture-4. Thus, the periodic fluctuations of flow rate time series over the fracture outlet can be estimated with an optimal Wavelet model.
Conclusions
Analyzing infiltration patterns in fractured rock is crucial for the understanding of recharge processes in fractured rock aquifers.
In this study, experiments on preferential flow paths over inclined fractures were conducted under different geometric characteristics of synthetic fractures, such as fractal dimension, standard deviation, and mismatch length. The results indicated that a variation in the inclination angle of the dry fracture from 55° to 65° resulted in an 8% reduction in the mean width of the first flow path. This reduction occurred mostly in the capillary regions, whereas the number of thin preferential channels near the outlet increased. The reduction in the capillary area corresponded to a 28% increase in the ratio of gravity to capillary force.
Moreover, there is a direct relationship between the fracture inclination angle and the number of preferential channels near the outlet. The assessment of flow pathways in the saturated fracture reveals that the number of flow paths and their locations is a function of fractal dimension values, while a change in mismatch length only changes the flow rate. However, no linear relationship has been found between the number of flow paths and the magnitude of fractal dimension. By means of a sensitivity analysis, a linear relation has been detected between the flow rate and the exponential form of the fractal dimension. In addition, the influence of mismatch length on the flow pathways has been found to be negligible.
The results also demonstrate that the maximum width of the preferential channels belongs to the area with the smallest aperture, whereas flow separates near the largest aperture. Moreover, the variation in these thin preferential channels near the fracture outlet is generally unsteady, which results in a nonlinear flow rate in each outlet. Finally, an efficient Wavelet algorithm calibrated across experimental data predicted the time series of the flow rate with a Nash value of 0.81.
Laboratory-scale fracture flow experiments fill critical knowledge gaps by providing direct observations and measurements of fracture geometry and flow under controlled conditions that cannot be obtained in the field. However, the conducted experimental investigations on infiltration dynamics in a single fracture should be viewed as a proof-of-concept analysis, and they are not to be considered exhaustive. Future research on fracture flow can be directed towards the prediction uncertainty of flow rates at the fracture outlet, due to different realizations of fracture apertures. For this purpose, it will be necessary to link the Wavelet algorithm with a numerical model calibrated with the experimental data.
Funding: The work is funded by the Start Up Grant awarded by The University of Queensland (Australia).
Institutional Review Board Statement: Not applicable.
Effectiveness of the Comirnaty (BNT162b2, BioNTech/Pfizer) vaccine in preventing SARS-CoV-2 infection among healthcare workers, Treviso province, Veneto region, Italy, 27 December 2020 to 24 March 2021
Data on effectiveness of the BioNTech/Pfizer COVID-19 vaccine in real-world settings are limited. In a study of 6,423 healthcare workers in Treviso Province, Italy, we estimated that, within the time intervals of 14–21 days from the first and at least 7 days from the second dose, vaccine effectiveness in preventing SARS-CoV-2 infection was 84% (95% confidence interval (CI): 40–96) and 95% (95% CI: 62–99), respectively. These results could support the ongoing vaccination campaigns by providing evidence for targeted communication.
By 24 March 2021, the coronavirus disease (COVID-19) pandemic had caused over 3.4 million cases and 105,000 deaths in Italy [1]. Although non-pharmaceutical interventions implemented in Italy were effective in reducing the impact of the first and second wave [2,3], there is urgency, now with the availability of approved vaccines, to accelerate the COVID-19 vaccination campaigns.
The first stage of the vaccination campaign in Italy started on 27 December 2020, initially targeting healthcare workers (HCW) and residents in long-term care facilities. The Comirnaty (BNT162b2, BioNTech/Pfizer, Mainz, Germany/New York, United States) vaccine was used because it was the only vaccine approved by the Italian Medicines Agency at that date [4]. Recommended administration was two doses 21 days apart.
Although efficacy of the Comirnaty vaccine has been proven in clinical trials [5], there is a need to evaluate its effectiveness in real-world settings. Based on surveillance data, this study aimed to estimate the effectiveness of the Comirnaty vaccine in preventing severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in frontline HCW employed at the local health unit that serves the entire province of Treviso in the Veneto region (LHU-TV).
Vaccination coverage and characteristics of healthcare workers included in the study
We conducted a retrospective cohort study of 9,878 HCW employed at the LHU-TV. From the local COVID-19 surveillance database, we retrieved information on demographic and professional characteristics, recorded dates of vaccine administration (all HCW were vaccinated with the Comirnaty vaccine) and the recorded date of SARS-CoV-2 infection, based on a positive antigenic test (SARS-CoV-2 Ag Test, LumiraDx, Alloa, United Kingdom (UK); sensitivity = 97.6% and specificity = 96.6% according to the manufacturer's indications) confirmed by RT-PCR on the same day.
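The need for RT-PCR confirmation of positive antigenic tests follows from Bayes' rule: even with the quoted sensitivity and specificity, the antigenic test's positive predictive value drops sharply at low prevalence. A quick check, using the manufacturer's figures cited above and illustrative prevalence values (the study does not report prevalence in the screened population):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a test via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Manufacturer's figures for the antigenic test quoted in the text.
sens, spec = 0.976, 0.966

for prev in (0.01, 0.05, 0.20):
    print(f"prevalence {prev:.0%}: PPV = {ppv(sens, spec, prev):.1%}")
```

At 1% prevalence, fewer than a quarter of positive antigenic results would be true positives, which is why same-day RT-PCR confirmation matters for a surveillance case definition.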
A total of 6,423 HCW were included in the analysis, after exclusion of 1,285 (13.0%) HCW infected with SARS-CoV-2 before the vaccination campaign, and 2,170 (22.0%) HCW working outside hospitals and district outpatient centres or who were support and administrative staff. The mean age of the included HCW was 47.1 years (standard deviation (SD): 10.8 years), and most of them were female (n = 4,986 (77.6%)) (Table 1). A total of 3,630 HCW were nurses (56.5%), 1,469 were medical doctors (22.9%), and 1,324 were social HCW (20.6%) (Table 1). All the included HCW were screened approximately every 8 days, and at any other time if presenting symptoms consistent with COVID-19.
The percentage of unvaccinated HCW was higher in women than in men (17.9% vs 13.5%), and in those aged 30-39 years (23.0%) compared with other age groups (Table 1). The percentage of complete vaccination with both doses was highest in medical doctors (85.7%) and in HCW working in hospitals (82.1%).
Effectiveness of the Comirnaty vaccine
We also conducted a time-to-event analysis using the number of days elapsed from vaccine administration to measure the length of follow-up. We estimated the effectiveness of one- and two-dose administration of the Comirnaty vaccine in preventing SARS-CoV-2 infection at different time intervals using a multivariable Cox proportional hazard model, including sex, age group, professional category, work context, and starting week of exposure as covariates. The adjusted hazard ratios (HR) were used to calculate vaccine effectiveness (VE) as (1 − HR) × 100.

a The follow-up analysis was started on day 7 from the start of the vaccination campaign, when the number of vaccinated HCW was sufficiently high to allow robust estimates [10].
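The conversion from adjusted hazard ratio to vaccine effectiveness, VE = (1 − HR) × 100, also flips the confidence bounds: the lower HR bound maps to the upper VE bound and vice versa. A minimal sketch; the HR point estimate and CI below are illustrative values chosen to reproduce one of the reported VE intervals, not the study's fitted model output:

```python
def vaccine_effectiveness(hr, hr_low, hr_high):
    """VE (%) with 95% CI from a hazard ratio and its 95% CI.

    The HR confidence bounds map to VE bounds in reverse order.
    """
    ve = (1.0 - hr) * 100.0
    ve_low = (1.0 - hr_high) * 100.0
    ve_high = (1.0 - hr_low) * 100.0
    return ve, ve_low, ve_high

# HR = 0.05 (95% CI: 0.01-0.38) would yield the 95% (62-99) VE reported
# for >= 7 days after the second dose; these CI inputs are illustrative.
ve, lo, hi = vaccine_effectiveness(hr=0.05, hr_low=0.01, hr_high=0.38)
print(f"VE = {ve:.0f}% (95% CI: {lo:.0f}-{hi:.0f})")
```

In practice the HR and its CI would come from the fitted Cox model (e.g., the hazard-ratio summary of a proportional-hazards fit) rather than being supplied by hand.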
In the time interval of 14-21 days after the administration of the first dose, VE in preventing all (both asymptomatic and symptomatic) and only symptomatic SARS-CoV-2 infections was estimated at 84% (95% CI: 40-96) and 83% (95% CI: 15-97), respectively ( Table 2). In the time interval of at least 7 days after the administration of the second dose, VE increased to 95% (95% CI: 62-99) and 94% (95% CI: 51-99) in the two groups. The analysis showing the Kaplan-Meier failure curve by vaccination status according to time since vaccination or start of exposure for unvaccinated HCW is presented in Supplementary Figure S2.
Trend of COVID-19-associated hospital admissions in the study area, immunisation rate, and number of SARS-CoV-2 infections among healthcare workers
Finally, we analysed the trend of COVID-19-associated hospital admissions in the study area, together with the trend of immunisation rate and of the number of SARS-CoV-2 infections among HCW included in the study. We found that from mid-February 2021, when the potential long-term immunisation rate among HCW was ca 70%, the number of newly diagnosed cases in this group remained stable despite a higher risk of exposure due to the rapid increase of COVID-19 hospital admissions ( Figure 2). a The level of restrictions implemented in the Veneto region according to risk scenarios ranged from high (red band) to medium (orange band) and low (yellow band) [11].
Ethical statement
The dissemination of COVID-19 surveillance data was authorised by the Italian Presidency of the Council of Ministers on 27 February 2020 (Ordinance no. 640).
Discussion
Our analysis suggests that the Comirnaty vaccine had a high effectiveness in preventing SARS-CoV-2 infection in HCW during the time intervals after administration where protection may be expected [6]. Data on trends of COVID-19-associated hospital admissions in the study area, immunisation rate, and number of SARS-CoV-2 infections among HCW support this finding.
Moreover, Italian national data also suggest that the vaccination campaign in HCW was successful. A recent report has shown how, from mid-January 2021, COVID-19 incidence among HCW started to decrease rapidly, while it increased in the general population, where vaccination coverage was still low [7].
To our knowledge, the few studies evaluating the Comirnaty vaccine effectiveness in preventing SARS-CoV-2 infection among HCW that included a control group were conducted in the UK and Israel [8,9]. We found that VE was 85% (95% CI: −35 to 98) after 21 days from the first dose administration compared with 72% (95% CI: 58-86) in the UK [8], although our estimate suffers from lack of precision because of the reduced time of follow-up in this interval (the large majority of subjects received the second dose of vaccine within a few days from the scheduled date on day 21 from the first dose). When comparing VE 7 days after administration of the second dose, we found a higher VE of 95% (95% CI: 62-99) compared with 86% in the UK (95% CI: 76-97). Our estimates of VE in preventing symptomatic infections during the time intervals 1-14 days and 15-28 days from administration of the first dose were 40% (95% CI: 9-60) and 86% (95% CI: 33-97), respectively. This closely reflects the estimated VE of 47% (first dose, 95% CI: 17-66) and 85% (second dose, 95% CI: 71-92) estimated in Israel [9]. However, despite the general agreement, these comparisons should be interpreted with caution as they may be biased by several factors, such as differences in case definition and surveillance procedures.
Overall, 17% of the eligible HCW were not yet vaccinated almost 3 months after the start of the vaccination campaign, probably because of refusal. In accordance with findings from the UK, female individuals and HCW under 40 years of age had a lower tendency to be vaccinated, while medical doctors were the professional category showing the highest coverage [8].
This study has several limitations. It included only HCW and cannot be assumed as representative of the general population. We had no information about possible occurrence of adverse effects after vaccine administration, although there was no evidence of severe complications (no post-vaccination hospital admissions in vaccinated HCW). Unfortunately, the date of testing was not recorded in case of a negative result, and we were therefore unable to assess adherence to routine testing. However, the analyses evaluating VE in preventing all infections or only symptomatic infections did not show great differences. Given the probability that testing was performed in a timely manner in the event of symptoms, we feel confident that asymptomatic cases were also detected early. However, a residual differential bias could remain. For example, vaccinated people may have less rigorously adhered to testing, based on the belief they were protected, thus leading to an overestimate of VE. It is also possible that we missed mild or asymptomatic infections undetectable through the first-line antigenic test. It was not possible to accurately estimate VE in time intervals where the number of person-days of follow-up were much reduced (i.e. after 21 days from the administration of the first dose, before receiving the second dose). Finally, information about the SARS-CoV-2 variants linked to infections was not available and it was therefore not possible to evaluate VE according to this variable.
Conclusions
In a real-world setting in northern Italy, during time intervals after vaccine administration where protection may be expected, we found a high effectiveness of the Comirnaty vaccine. This result could help to promote the ongoing vaccination campaign in the general population and among the still unvaccinated HCW by reinforcing communication based on evidence.
Relationship of nine constants
Through the process of trial and error, four unitless equations made up of nine constants have been found with exact answers. The related constants are the Speed of Light [1], the Planck constant [2], Wien’s displacement constant [3], Avogadro’s number [4], the universal Gravity constant [5], the Ampere constant [6], the Faraday constant [7], the Gas constant [8] and Apery’s constant [9].
INTRODUCTION
At the end of the spring semester 2013, I had found an expression of a few physical constants that gave the correct value of the universal Gravity constant [5]. I shared my findings with my classmates and they all pointed out the units were incorrect.
This started the search for a unitless expression of physical constants similar in form to the Fine Structure constant but with more constants.
In the context of this paper, the term unitless is defined as all the exponents of the units on the left hand side of the equation are equal to zero and the right hand side of the equation is represented by only a numeric expression.
MAIN BODY
The first few equations were found by trial and error.One would literally examine a listing of physical constants and guess which set of constants multiplied together and divided by another set of multiplied constants produces an answer with units raised to the zero power.
I had the limited success of finding the Fine Structure constant over and over again. At this point I changed my strategy by writing a program that would try every combination of a set of constants, within a certain integer range of exponents, with its dimensionality equal to a selected SI unit. This strategy worked in the sense that it produced a large set of equations of the selected constants that had the required SI units of seconds or meters, etc.
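The search described above can be sketched in a few lines: represent each constant by its SI base-unit exponent vector and brute-force integer exponents until the vectors cancel. This is an illustrative reconstruction, not the paper's actual program (which used CUDA and the full nine-constant set); the constant subset and exponent range below are chosen only to keep the sketch small.

```python
from itertools import product

# SI base-unit exponent vectors: (m, s, kg, A, K, mol).
# Illustrative subset of constants (k_B and e added for simple examples).
DIMS = {
    "c":   (1, -1, 0, 0, 0, 0),   # speed of light, m s^-1
    "h":   (2, -1, 1, 0, 0, 0),   # Planck constant, J s
    "G":   (3, -2, -1, 0, 0, 0),  # gravitational constant, m^3 kg^-1 s^-2
    "N_A": (0, 0, 0, 0, 0, -1),   # Avogadro constant, mol^-1
    "F":   (0, 1, 0, 1, 0, -1),   # Faraday constant, A s mol^-1
    "R":   (2, -2, 1, 0, -1, -1), # gas constant, J K^-1 mol^-1
    "k_B": (2, -2, 1, 0, -1, 0),  # Boltzmann constant, J K^-1
    "e":   (0, 1, 0, 1, 0, 0),    # elementary charge, A s
}
NAMES = sorted(DIMS)

def is_unitless(powers):
    """True if the product of constants**powers has every unit exponent zero."""
    total = [0] * 6
    for name, p in powers.items():
        for i, d in enumerate(DIMS[name]):
            total[i] += p * d
    return all(t == 0 for t in total)

def search(lo=-2, hi=2):
    """Brute-force every exponent combination in [lo, hi], keeping the
    nontrivial combinations whose dimension vectors cancel."""
    found = []
    for combo in product(range(lo, hi + 1), repeat=len(NAMES)):
        powers = dict(zip(NAMES, combo))
        if any(combo) and is_unitless(powers):
            found.append(powers)
    return found
```

For example, `is_unitless({"R": 1, "N_A": -1, "k_B": -1})` is true, since R = N_A·k_B.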
Writing and testing the programs took a few weeks, and various sets of physical constants were tried. Overall, the physical constants that produced the most equations were selected for the final set of nine presented in this paper.
A few things happened concurrently that allowed me to find the equations presented in this paper. One was that I started using a unit of an ampere-mole as a range extender in my search programs. The derived unit could be removed from the final answers, yet its presence in the program allowed more equations to be found.
The second was that it occurred to me that the structure of the programs I had written could search for unitless equations too. The third was that I added the Faraday constant to the primary set of search constants. I intended to use the Faraday constant as a more robust replacement for the derived ampere-mole constant, and was hoping for similar results.
A few minutes later, the first of many unitless equations appeared on the screen. Through the process of trial and error I had found a set of eight physical constants that produced unitless equations.
Once a pattern was found in the first few equations, a new program in the CUDA GPU language was written to find unitless combinations expressed as powers of the constants. A program listing is included for completeness as Appendix I.
A set of 200 unitless equations is shown in Table 1, and Eq. 1 through Eq. 4 are the results of the reduced row echelon form of Table 1. The reduced row echelon operation on Table 1 results in two rows.
Eq. 1 represents the first row and Eq. 2 the second row of the reduced row echelon form. Eq. 3 represents the product of Eq. 1 and Eq. 2, and Eq. 4 represents their quotient. One can use dimensional analysis to check that Eq. 1 through Eq. 4 are unitless equations.
DISCUSSION
Once we know that the dimensionality of the left-hand sides of the equations is correct, our focus switches to the right-hand sides. One should note that, by definition, all the physical constants on the left-hand side are measurements and have limited accuracy.
Obviously, equations based on physical measurements cannot be more accurate than the measurements themselves. My method was to give the Maple software program the benefit of the doubt when computing the right-hand sides of the equations. For example, while factoring and processing Eq. 3 with Maple's identify command, Apery's constant [9] appeared in the result. Apery's constant can be expressed as a series, which means we could convert the right-hand side of Eq. 3 into a series just by redistributing some of the factors. For this reason I left Apery's constant in the answer, which propagated to the other equations.
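Apery's constant is the series value ζ(3) = Σ_{n≥1} 1/n³ ≈ 1.2020569. A minimal sketch of the partial sum mentioned here:

```python
def apery(n_terms: int = 100_000) -> float:
    """Partial sum of the series zeta(3) = sum over n >= 1 of 1/n**3.

    The truncation error is about 1/(2*n_terms**2), so 100,000 terms
    already agree with Apery's constant to roughly ten decimal places.
    """
    return sum(1.0 / n**3 for n in range(1, n_terms + 1))
```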
Figure 1 is a plot of Table 1 and is intended to show that the system of equations in Table 1 is not random but very periodic. The green line represents the natural log of the right-hand sides of the equations, and the other lines represent the exponent powers of the physical constants.
I view the form of the right-hand sides of the equations as an idealized guess times an error term, which was supplied by the reduced row echelon operation.
Figure 2 is also a plot of Table 1, where the equations of the table have been re-sorted based on the values of the ninth column of the table instead of the tenth column. A problem is that the right-hand sides of the equations are inherently more accurate than the left-hand sides, which means any exact answer found by my method is merely a good guess.
On the other hand, these guesses appear to have over seven significant digits of accuracy. Practically speaking, the right-hand sides of Eq. 1 through Eq. 4 are close enough to the "right answers" to solve most problems, and if one wishes more accuracy one can always use the left-hand side to directly compute a decimal value.
SUMMARY
In some ways, this paper is mundane. We have a family of similar equations where any single equation can be proven with dimensional analysis to be unitless.
Assuming that a suitable expression can be found for the right hand sides of the equations, then most of these equations could be used like a Swiss army knife to change from one physical constant to another.
On the mundane side we basically have a relationship between nine constants that connects the constants like a key ring.On the other hand, one could argue that the relationships shown in this paper existed before any of the physical constants were measured.
Obviously I cannot address the range of philosophical issues that this paper may cause. To answer the reader's unspoken question, I do not know why these relationships exist; I only know that each time I check them they seem to be correct. I invite other papers to address the deeper issues and physical interpretations of my equations.
A database of over 17,000 equations is available for download; the reader is encouraged to download the database and verify my work. By definition, the terms of these equations tend to be self-canceling, meaning that if you make the wrong substitution, the whole left-hand side can disappear and just leave a number. This has happened to me quite a few times in the last few months, which leads me to my final statement of the paper: "I claim nothing."
Figure 1. Plot of Table 1 sorted by the right-hand-side values.

Figure 2. Plot of Table 1 sorted by the exponent values of the Faraday constant term.

Table 1. A family of unitless equations, sorted by the right-hand-side values.

Figures 1 and 2 should prove that the family of unitless equations contained in Table 1 is not random but instead is a structure made up of periodic waveforms.
|
2018-12-15T10:06:34.026Z
|
2013-08-23T00:00:00.000
|
{
"year": 2013,
"sha1": "fa9a5d2ecb3bc6e460eec211bfac3e0353a84393",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36207",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fa9a5d2ecb3bc6e460eec211bfac3e0353a84393",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
}
|
236989402
|
pes2o/s2orc
|
v3-fos-license
|
Generalized bioelectric impedance‐based equations underestimate body fluids in athletes
The current study aimed: (i) to externally validate total body water (TBW) and extracellular water (ECW) derived from athlete and non-athlete predictive equations, using radioisotope dilution techniques as a reference criterion in male and female athletes; (ii) in a larger sample, to determine the agreement between specific and generalized equations when estimating body fluids in male and female athletes practicing different sports. A total of 1371 athletes (men: n = 921, age 23.9 ± 1.4 y; women: n = 450, age 27.3 ± 6.8 y) participated in this study. All athletes underwent bioelectrical impedance analyses, while TBW and ECW were assessed with dilution techniques in a subgroup of 185 participants (men: n = 132, age 21.7 ± 5.1 y; women: n = 53, age 20.3 ± 4.5 y). Two specific and eight generalized predictive equations were tested. Compared to the criterion methods, no mean bias was observed using the athlete-specific equations for TBW and ECW (−0.32 to 0.05, p > 0.05), and the coefficient of determination ranged from R² = 0.83 to 0.94. The majority of the generalized predictive equations underestimated TBW and ECW (p < 0.05); R² ranged from 0.66 to 0.89. In the larger sample, all the generalized equations showed lower TBW and ECW values (ranging from −6.58 to −0.19, p < 0.05) than the specific predictive equations, except for TBW in female power/velocity (one equation) and team sport (two equations) athletes. The use of generalized BIA-based equations leads to an underestimation of TBW and ECW compared to athlete-specific predictive equations. Additionally, the larger sample indicates that the generalized equations overall provided lower TBW and ECW than the athlete-specific equations.
| INTRODUCTION
The study of body composition in athletes has attracted the interest of researchers and practitioners over the years, given the implications on sports performance and health. 1,2 By monitoring body composition parameters, the effects of a diet or training could be qualitatively investigated. 1,2 Considering the different nature of the body composition elements that make up the total body mass, different parameters can be measured or estimated. 2 However, reference methods for assessing body composition are often not available in practice, so a number of alternative procedures have been implemented. 1,3 Among the possible methods, bioelectrical impedance analysis (BIA) represents a portable, user-friendly, and low-cost tool that makes it possible to estimate a wide range of parameters, including total body water (TBW) and extracellular water (ECW). 1,3 In particular, TBW represents the major component of body mass, and its unrestored loss reflects dehydration, a condition that negatively affects performance and health. [4][5][6] In addition, the distribution of fluids between the intra- and extracellular compartments provides information about the body cell mass (the metabolically active portion of body mass) and about fluid retention and inflammation. 7 Due to the association between body fluids and bioelectrical properties, [8][9][10] BIA represents a valid alternative to the gold standard methods, identified as the dilution techniques. 3,4 The BIA provides raw bioelectrical values that can be inserted into predictive equations for estimating TBW and ECW. 1,3 Most of the predictive equations have been developed and validated in the general population, [11][12][13][14][15][16] but the extent to which athletes' water compartments may have been incorrectly estimated is still to be determined. Indeed, specific predictive equations for assessing TBW and ECW in athletes have recently been provided 17 and used in some studies.
18,19 In this regard, previous publications reported that BIA-based prediction equations yield inaccurate body composition estimates when applied in samples that differ from the original derivation sample. 3 This may be due to the specific body composition features that characterize each population. For example, athletes show a greater phase angle and therefore a higher intracellular water/ECW ratio compared to the general population. 1,3 Therefore, predictive models may not be particularly accurate if applied to samples with characteristics that are far from those of the sample on which they were developed. Similarly, given that several BIA devices may show a lack of agreement in the measured raw bioelectrical values, 3 in order to achieve a greater accuracy each equation should be applied with devices similar to those used in their development.
Notwithstanding, there are still studies being published that use generalized equations, [20][21][22] as well as studies that use manufacturer-provided proprietary predictive formulas. [23][24][25][26][27] Some researchers have warned against the use of generalized equations in athletes, since inaccurate output could be extrapolated. 1,3,28 However, the magnitude of the possible bias compared with the dilution techniques as criterion, as well as its direction, has not been determined thus far. Additionally, a comparison between TBW and ECW estimated with specific versus generalized equations in athletes practicing different sports has not been performed yet. This may help to quantify the agreement between specific and generalized estimations. Therefore, the aims of the present study were as follows: (i) to externally validate total body water and extracellular water derived from specific and generalized predictive equations using dilution techniques as the reference criterion in male and female athletes; (ii) to determine the agreement between specific and generalized equations when estimating body fluids in male and female athletes practicing different sports, in a larger sample. Since athletes may show different body composition features compared to the general population, our hypothesis was that bioelectrical impedance prediction models derived from non-athletes would result in different TBW and ECW values compared with criterion methods and specific equations developed for adult athletes.
The following inclusion criteria were used: (1) 10 or more hours of training per week, (2) negative test outcomes for performance-enhancing drugs, and (3) not taking any medications. All subjects were informed about the possible risks of the investigation before giving written informed consent to participate. All procedures were approved by the bioethics committee of the University of Bologna and were conducted in accordance with the declaration of Helsinki for human studies (Ethical Approval Code: 25027).
| Procedures
Participants came to the laboratory having refrained from alcohol and stimulant beverages and fasted for at least 3 h. Testing began promptly at 08:00, at least 15 h after the last exercise session.
Body weight was measured to the nearest 0.01 kg with a scale, with participants wearing minimal clothes and no shoes; height was measured to the nearest 0.1 cm with a stadiometer (Seca).
The impedance measurements were performed with a BIA analyzer (BIA-101, RJL/Akern Systems) using an electric current at a frequency of 50 kHz. Measurements were made on a cot isolated from electrical conductors; the subjects lay supine with the legs apart at 45° relative to the median line of the body and the upper limbs abducted 30° from the trunk. After the skin was cleansed with alcohol, two electrodes (Biatrodes, Akern Srl) were placed on the back of the right hand and two on the corresponding foot. Prior to each test, the analyzer was calibrated; calibration was deemed successful if R was 383 Ohm and Xc was 46 Ohm. The test-retest CV for R and Xc in 10 participants in our laboratory is 0.3% and 0.9%, respectively. The selected predictive equations for TBW and ECW estimations are shown in Table 1.
The predictive equations of Matias et al., 17 Sun et al., 11 Schoeller et al., 16 Kushner et al., 15 Kotler et al., 12 and Lukaski et al. 14 were validated using deuterium dilution, whereas those of Matias et al., 17 Sergi et al., 13 and Lukaski et al. 14 were validated using bromide dilution. Only the Matias et al. 17 predictive equations were validated in athletes. The equations used were chosen because of their popularity and as being representative of the many equations that have been published. 29 Following the collection of a baseline urine sample, each participant was given an oral dose of 0.1 g of 99.9% 2H2O per kg of body weight (Sigma-Aldrich) for the determination of total body water by deuterium dilution using a Hydra stable isotope ratio mass spectrometer (PDZ, Europa Scientific, UK). Subjects were encouraged to void their bladder prior to the 4-h equilibration period and subsequent sample collection, because pre-existing urine in the bladder mixes inadequately with the tracer. Urine samples were prepared for 1H/2H analyses using the equilibration technique of Prosser and Scrimgeour. 30 Extracellular water was assessed with the sodium bromide (NaBr) dilution method after the subject consumed 0.030 g of 99.0% NaBr (Sigma-Aldrich) per kg of body weight, diluted in 50 ml of distilled-deionized water. Baseline saliva samples were collected before the sodium bromide oral dose was administered, and enriched samples were collected 3 h post-dose.
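The dilution methods above rest on a simple principle: the tracer's distribution volume equals the administered dose divided by its measured equilibrium enrichment. A minimal sketch, ignoring the isotope-exchange and non-aqueous distribution corrections applied in practice, and with illustrative numbers that are not from this study:

```python
def dilution_volume(dose_g: float, enrichment_g_per_kg: float) -> float:
    """Dilution principle: distribution volume (kg of water) = dose / enrichment.

    Simplified illustration only; real deuterium and bromide protocols apply
    correction factors for isotope exchange and non-aqueous distribution.
    """
    return dose_g / enrichment_g_per_kg

# Hypothetical example: a 70 kg athlete given 0.1 g/kg of 2H2O receives a
# 7.0 g dose; if the measured post-equilibration enrichment were
# 0.165 g per kg of body water, the dilution space would be
# 7.0 / 0.165 ≈ 42.4 kg of water (about 60% of body mass).
```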
| Statistical analysis
Data were analyzed with SPSS v. 27.0 (SPSS, IBM Corp.) and MedCalc Statistical Software v. 11.1.1.0, 2009 (Mariakerke, Belgium). The Shapiro-Wilk test was used to check the normal distribution of the data. Sphericity of the data was preliminarily assessed using Mauchly's test. To externally validate the selected equations, the resulting TBW and ECW values were compared against the same parameters assessed using the reference method. A paired-sample t test was employed to compare the mean values obtained from the reference technique and from BIA. Linear regression analysis was performed considering the values obtained from reference methods as dependent variables and the estimated parameters as independent variables. Agreement between specific and generalized predictive equations in the larger sample of athletes, sorted by sports modality, was determined using the Bland-Altman method, Lin's concordance correlation coefficient (CCC), including precision (ρ) and accuracy (Cb) indexes, and McBride's 31 strength-of-concordance criteria (almost perfect >0.99; substantial >0.95 to 0.99; moderate = 0.90-0.95; poor <0.90).
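Lin's CCC decomposes into precision (Pearson's r) and accuracy (the bias-correction factor Cb), with CCC = r · Cb. A minimal sketch of the computation (an illustration, not the MedCalc implementation used in the study):

```python
import statistics as st

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient.

    Returns (CCC, precision, accuracy) where precision is Pearson's r,
    accuracy is the bias-correction factor Cb, and CCC = r * Cb.
    """
    n = len(x)
    mx, my = st.fmean(x), st.fmean(y)
    sx2 = sum((a - mx) ** 2 for a in x) / n          # variance of x
    sy2 = sum((b - my) ** 2 for b in y) / n          # variance of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
    r = sxy / (sx2 ** 0.5 * sy2 ** 0.5)
    return ccc, r, ccc / r
```

A constant offset between methods leaves r at 1 but pulls the CCC (through Cb) below 1, which is exactly why CCC is preferred over correlation alone for method agreement.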
| External-validation study
In men, with the exception of the specific predictive equations by Matias et al., 17 all generalized predictive equations showed a significant difference (p < 0.05) in TBW estimation as compared with deuterium dilution, as shown in Table 2. The extracellular water estimated by the Sergi et al. 13 predictive equation differed from the reference method. For athletic women, the Matias et al. 17 and Kotler et al. 12 predictive equations did not present differences when compared with TBW values obtained using the radioisotope dilution method; however, only the Matias et al. 17 predictive equations did not present differences when compared with ECW values obtained using the radioisotope dilution method. Total body water estimation using specific or generalized equations was highly correlated (R² ranged from 0.86 to 0.94) with the reference values in both sexes, with the highest coefficient of determination observed using the model developed for athletes (Matias et al. 17 ) (Table 2). For ECW, an R² value lower than 0.80 was found for the predictive equations developed by Sergi et al. 13 in men and Lukaski et al. 14 for men and women, while a coefficient of determination of 84% was found using the specific models developed by Matias et al. 17 (Table 2).

Table 1. Predictive bioelectrical impedance-based equations for body composition estimation using a foot-to-hand device at a sampling frequency of 50 kHz in healthy adults.
Concerning the concordance analysis, the best performance was observed for the Matias et al. 17 predictive equation in both men and women, with a concordance correlation coefficient of 0.957 and 0.966 (considered substantial by McBride 31 ), a precision of 0.958 and 0.967, and an accuracy of 0.999 and 0.998, respectively. Similar results were observed in the concordance analysis for ECW, with a concordance correlation coefficient and precision higher than 0.90, and an accuracy higher than 0.99, for both men and women using the Matias et al. 17 predictive equation (Table 2).
For the agreement analysis performed for TBW assessment, no trend was observed for the Matias and Kushner equations, while a trend (p < 0.05) between the mean and the difference of methods was verified for the Sun, Schoeller, Kotler, and Lukaski equations for both men and women, as shown in Table 2. No trend was observed for extracellular water for any predictive equation, in men or women, as shown in Table 2. Additionally, a trend between the mean and the difference of the equations used to determine TBW and ECW was observed in all the agreement analyses, with the exception of the models for predicting TBW developed by Lukaski et al. 14 (endurance sports and velocity/power athletes) and by Sun et al., 11 Schoeller et al., 16 and Kotler et al. 12 in velocity/power athletes, as shown in Table 3. Prediction errors for TBW and ECW using the non-specific models tend to be exacerbated in athletes showing lower levels of body water.
| DISCUSSION
The overall intentions of the present investigation were as follows: (i) to externally validate TBW and ECW obtained using dilution techniques as the criterion against those estimated from specific and generalized BIA-based equations in male and female athletes; (ii) to determine the agreement between specific and generalized equations in a larger athletic sample, when estimating body fluids in male and female athletes engaged in endurance, team, and strength/power sports. As hypothesized, generalized equations resulted in less accurate estimations of TBW and ECW compared with the dilution techniques. Additionally, most of the generalized predictive models showed different results when compared with the specific models for athletes. The present findings showed that only the specific Matias et al. 17 predictive equation agreed with the values obtained using the criterion, while all the generalized equations underestimated TBW in male and female athletes, with the exception of the Kotler et al. 12 predictive equation, which showed no difference when applied to women. Considering extracellular water, the Sergi et al. 13 predictive equation underestimated the values obtained with bromide dilution in both men and women, while the predictive model proposed by Lukaski et al. 14 underestimated extracellular water in women. Furthermore, all the non-specific equations showed lower body fluid values in comparison with those obtained with the Matias et al. 17 predictive equations, irrespective of sex and sport. The current outcomes suggest that previous studies using generalized equations have underestimated body fluids in male and female athletes. When aiming at sport-specific body composition reference values, monitoring with generalized BIA-based equations may thus lead to inaccurate estimations.
Precision and accuracy between the selected equations and the reference methods were analyzed with the concordance correlation coefficient analysis, while Bland-Altman analysis was used to determine agreement between methods. A substantial strength of agreement between the Matias et al. 17 predictive equations and the reference methods was observed in estimating TBW and ECW, while a weaker agreement was found between the other equations and the dilution technique results. Although no significant trend was observed in the Matias et al. 17 predictive equation for both men and women, the 95% confidence intervals were larger for men. In this regard, total body water could be over- or underestimated by ~4.2 kg in men and by ~2.5 kg in women, while extracellular water could be over- or underestimated by ~2.3 kg in men and by ~1.5 kg in women.

Recognizing the better performance of the Matias et al. 17 equations in estimating the reference TBW and ECW in athletes, the second aim of the current study was to examine how generalized equations agree with the predictive models developed by Matias and collaborators. 17 In men, all the generalized equations underestimated total body water and extracellular water in endurance and team sports athletes. Regarding the velocity/power group, although the Kushner et al. 15 predictive equation did not show a significant bias, an underestimation and an overestimation were observed in athletes with lower and higher TBW values, respectively. In women, all the generalized equations underestimated TBW and ECW in endurance and power/velocity athletes. Regarding team sports athletes, although the Kotler et al. 12 predictive equation did not show a significant bias, again a significant trend was found.
Taken together, these observations indicate that, in general, generalized equations underestimated total body water and extracellular water in athletes, regardless of sex and sports category. It should also be noted that athletes may have different body composition features compared with the general population, 1,3 so that discrepancies in predicted TBW and ECW values between athlete- and non-athlete-derived models may occur when using BIA in athletes.
The current study presents limitations that should be addressed. First, our results are not generalizable to adolescent or senior athletes, since their body composition is overall different from the ones used to elaborate the predictive equations examined here. 32 Second, our outcomes derive from the use of a foot-to-hand technology and a 50 kHz sampling frequency. Therefore, the current findings cannot be extended to different technologies (e.g., BIA in standing position) and sampling frequencies. Last, the present study was conceived as a cross-sectional investigation and did not assess the ability of any equation to identify the longitudinal training-induced changes in body fluids.
In conclusion, the specific Matias et al. 17 equations resulted in valid TBW and ECW estimations when compared to dilution techniques, while the generalized equations underestimated body fluids in male and female athletes. Additionally, using a larger sample of athletes engaged in endurance, team, and strength and power sports, most of the generalized equations underestimated body fluids when compared to the specific models proposed by Matias et al., 17 regardless of sex and sport.
| Perspectives
The present findings have interesting perspectives. In the first instance, data derived from BIA are used to assess body composition in athletes, so that specific values may be assured for a given athlete over the training process. As such, referring to generalized equations may result in inaccurate evaluations. This is not trivial, since many studies used generalized equations to estimate body fluids in athletes, or still use generalized equations after the models developed by Matias et al. 17 were published. [20][21][22]33 Furthermore, there is now a wide range of commercial BIA devices, used in research articles, that do not provide information on the equation used for measuring body fluids in athletes. 23,24,27,34 In this regard, it is important to consider that BIA-based equations should be applied using raw bioelectrical parameters obtained with devices and sampling frequencies similar to those with which they were developed. 35 In fact, numerous studies show how different outcomes are obtained using different devices and sampling frequencies. 35,36 These inaccuracies in assessing body fluids at the group and particularly the individual level may compromise an adequate assessment and monitoring of body fluids over the competitive season. Therefore, caution should be applied when interpreting data extracted from generalized equations or technologies.
|
2021-08-13T06:16:46.654Z
|
2021-08-12T00:00:00.000
|
{
"year": 2021,
"sha1": "c90189a0a86f1a372cd326c4961eda47ce5b474d",
"oa_license": "CCBY",
"oa_url": "https://air.unimi.it/bitstream/2434/861667/4/sms.14033.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4f44f3bd809cb039855f13005a0201cf259cdbd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
18887578
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence of metabolic syndrome in patients with psoriasis: A population-based study in the United Kingdom
Increasing epidemiological evidence suggests independent associations between psoriasis and cardiovascular and metabolic disease. Our objective was to test the hypothesis that directly-assessed psoriasis severity relates to the prevalence of metabolic syndrome and its components. Population-based, cross-sectional study using computerized medical records from The Health Improvement Network Study population included individuals aged 45-65 years with psoriasis and practice-matched controls. Psoriasis diagnosis and extent were determined using provider-based questionnaires. Metabolic syndrome was defined using National Cholesterol Education Program (NCEP) Adult Treatment Panel (ATP) III criteria. 44,715 individuals were included: 4,065 with psoriasis and 40,650 controls. 2,044 participants had mild psoriasis (≤2% body surface area (BSA)), 1,377 had moderate (3-10% BSA), and 475 had severe psoriasis (>10% BSA). Psoriasis was associated with metabolic syndrome, adjusted odds ratio (OR) 1.41 (95% CI 1.31-1.51), varying in a “dose-response” manner, from mild (adj. OR 1.22, 95% CI 1.11-1.35) to severe psoriasis (adj. OR 1.98, 95% CI 1.62-2.43). Psoriasis is associated with metabolic syndrome and the association increases with increasing disease severity. Furthermore, associations with obesity, hypertriglyceridemia and hyperglycemia increase with increasing disease severity independent of other metabolic syndrome components. These findings suggest that screening for metabolic disease should be considered for psoriasis, especially when extensive.
INTRODUCTION
The metabolic syndrome is a clustering of cardiovascular risk factors, specifically obesity, hypertension, dyslipidemia, and insulin resistance (Eckel et al., 2005), which has been associated with an increased risk of cardiovascular disease (CVD) beyond traditional risk factors (Mente et al., 2010). The prevalence of metabolic syndrome is increasing in the United States (US; Ford et al., 2002) and partly in Europe, paralleling the rising prevalence of obesity worldwide (Mokdad et al., 2003; Mente et al., 2010). Systemic inflammation is associated with metabolic syndrome, with T helper type 1 proinflammatory cytokines such as tumor necrosis factor-α and nonspecific measures of inflammation such as C-reactive protein levels being elevated in patients with the syndrome compared with those without (Lakka et al., 2002). However, there is a limited understanding of the relationship between chronic inflammatory diseases and the prevalence of metabolic syndrome.
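The NCEP ATP III definition used in this study classifies metabolic syndrome as meeting at least three of five criteria. A simplified sketch with the original ATP III thresholds, for illustration only; the study's exact operationalization of each component (e.g., its use of available BMI rather than waist-circumference data) is described in its methods:

```python
def atp3_metabolic_syndrome(sex, waist_cm, tg_mg_dl, hdl_mg_dl,
                            sbp, dbp, glucose_mg_dl):
    """Simplified NCEP ATP III rule: metabolic syndrome if >= 3 of 5 criteria.

    Thresholds follow the original ATP III definition (fasting glucose
    >= 110 mg/dL); the revised criteria lower the glucose cutoff to 100 mg/dL.
    sex is "M" or "F".
    """
    criteria = [
        waist_cm > (102 if sex == "M" else 88),      # abdominal obesity
        tg_mg_dl >= 150,                             # hypertriglyceridemia
        hdl_mg_dl < (40 if sex == "M" else 50),      # low HDL cholesterol
        sbp >= 130 or dbp >= 85,                     # elevated blood pressure
        glucose_mg_dl >= 110,                        # raised fasting glucose
    ]
    return sum(criteria) >= 3
```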
Psoriasis is the most common T helper type 1 inflammatory disease, affecting more than 125 million people worldwide (National Psoriasis Foundation). The severity of psoriasis in the general population is variable, with most patients having mild disease (Kurd and Gelfand, 2009), defined as involving ≤2% of the body surface area (BSA). Epidemiological evidence suggests that psoriasis is associated with an increased frequency of cardiovascular risk factors and adverse cardiovascular outcomes, including myocardial infarction (Gelfand et al., 2006), stroke, and cardiovascular death (Mehta et al., 2010a). Psoriasis, especially if severe, may be a risk factor for atherosclerotic CVD beyond traditional risk factors (Gelfand et al., 2006; Mehta et al., 2010a). Moreover, patients with severe psoriasis die at an age approximately 5 years younger than patients without psoriasis, with CVD being the most common cause of the excess mortality in these patients (Abuabara et al., 2010). Mechanistic studies of the metabolic syndrome (Shah et al., 2009) and insulin resistance (Mehta et al., 2010b) suggest that the chronic T helper type 1 inflammation that characterizes psoriasis, metabolic syndrome, diabetes, and CVD may partly explain the association between these phenotypically distinct diseases.
A number of small epidemiological studies have reported associations between psoriasis and the metabolic syndrome (Azfar and Gelfand, 2008; Gisondi and Girolomoni, 2009; Al-Mutairi et al., 2010; Mebazaa et al., 2011); however, population-based data in which the severity of psoriasis is objectively determined and individual components of the metabolic syndrome are directly measured are lacking (Augustin et al., 2010). Therefore, our objective was to examine whether there is an association between psoriasis and the metabolic syndrome in a broadly representative population of patients. We also investigate whether the degree of association varies with the extent of skin involvement with psoriasis.

RESULTS

Table 1 describes the demographics of the study population. At the end of the survey collection period, 4,634 of 4,900 provider-based surveys were completed, giving a response rate of 95%. Our cohort included 4,065 people with confirmed psoriasis and 40,650 matched controls. The mean age of psoriasis patients was 1.2 years higher than that of controls (P<0.001), and 51% of psoriasis patients were male compared with 48% of controls (P<0.001). A total of 2,044 (53%) participants had mild psoriasis (≤2% BSA), 1,377 (35%) had moderate psoriasis (3-10% BSA), and 475 (12%) had severe psoriasis (>10% BSA). Information on body mass index (BMI), blood pressure, high-density lipoprotein, glucose, and triglyceride levels was available for 41,249 (92%), 44,019 (98%), 25,234 (56%), 28,743 (64%), and 25,067 (56%) of the patients, respectively (Table 2a). These measurements were available in similar numbers of patients with and without psoriasis.
Metabolic syndrome was identified in 34% of participants with psoriasis compared with 26% of controls (odds ratio (OR) 1.50, 95% confidence interval (CI) 1.40-1.61). This association persisted after adjusting for age, gender, and follow-up (adjusted (adj.) OR 1.41, 95% CI 1.31-1.51). Adjusting for smoking and social class did not change the study findings, and these were not retained in the final model. Psoriasis severity affected the degree of association, with the metabolic syndrome seen in 32% of the patients with mild disease (adj. OR 1.22, 95% CI 1.11-1.35), 36% with moderate disease (adj. OR 1.56, 95% CI 1.38-1.76), and 40% with severe psoriasis (adj. OR 1.98, 95% CI 1.62-2.43). Modest but statistically significant interactions were detected between psoriasis and age, and between psoriasis and sex, whereby the OR of metabolic syndrome and psoriasis was slightly higher in the younger age groups and in women (data not shown).
There was a 20% increased odds of having raised triglyceride levels in individuals with psoriasis overall, independent of obesity (adj. OR 1.20, 95% CI 1.10-1.31). This association also followed a "dose-response" pattern, with the increased odds of raised triglyceride levels rising from 10% in those with mild psoriasis (adj. OR 1.10, 95% CI 0.98-1.25) to 46% in those with severe psoriasis (adj. OR 1.46, 95% CI 1.13-1.88). Raised glucose level was also associated with psoriasis independent of obesity (Table 3). Sensitivity analyses using the revised Adult Treatment Panel (ATP) III or International Diabetes Federation criteria, limiting laboratory values to the first or most recent observation, and excluding individuals on psoriasis treatments that are known to have an impact on components of the metabolic syndrome, e.g., ciclosporin or acitretin, did not significantly change the study conclusions (data not shown).
DISCUSSION
Psoriasis is associated with the metabolic syndrome in a "dose-response" manner, with a 22% increase in the odds of developing the metabolic syndrome in those with mild psoriasis, a 56% increase in those with moderate disease, and a 98% increase in those with severe psoriasis. In a fully adjusted model looking at associations between factors comprising the metabolic syndrome and psoriasis after adjusting for other components, independent associations were seen between psoriasis and obesity (25% increased odds), raised triglyceride levels (20% increased odds), and raised serum glucose levels (16% increased odds in a "dose-response" manner from mild to severe psoriasis).
The strengths of this investigation are that it is a large population-based study with a population broadly representative of the UK population in the age group of 45-65 years, which minimizes selection bias and increases the external validity (i.e., generalizability) of the findings. The "dose-response" association detected provides compelling evidence for an association between psoriasis and the metabolic syndrome. Study findings were based on laboratory values and objectively measured disease extent, which allowed observation of findings that are, to our knowledge, previously unreported. Observational study designs are associated with a number of limitations. These include the cross-sectional nature of this study, which does not allow us to determine which developed first: psoriasis or the metabolic syndrome. Second, we cannot be certain that psoriasis caused the metabolic syndrome; factors including diet, physical inactivity, alcohol, or genetic predisposition, which have not been evaluated in this study, may be functioning as confounding or effect-modifying factors in this relationship (Davidovici et al., 2010), leading to the possibility of residual confounding. In terms of information bias, two aspects of this study make this an unlikely explanation for the findings: (1) laboratory and clinical values were recorded at similar rates in psoriasis patients and controls as part of routine medical care by general practitioners (GPs) unaware of the hypothesis under study; and (2) the study findings persisted in the sensitivity analysis restricted to the first laboratory or clinical value per person. Disease severity was determined by asking the GPs to rate the extent of skin involvement with psoriasis in simple discrete categories.
Although previous studies have suggested that UK GPs are reasonably accurate in terms of diagnosing psoriasis (Basarab et al., 1996), direct data on the accuracy of GP assessment of the extent of skin involvement with psoriasis are not, to our knowledge, available. We have previously demonstrated the "construct validity" of this approach, in that patients rated by GPs as having higher BSA categories are more likely to require frequent visits for psoriasis and to require systemic therapy specific for psoriasis or phototherapy (Seminara et al., 2011). Moreover, we used the same categories used in the epidemiological studies conducted by NHANES and NPF in which patients are asked to rate their degree of skin involvement with psoriasis, suggesting that this approach is acceptable (i.e., "face" validity; Dommasch et al., 2010; Krueger et al., 2001; Seminara et al., 2011). Moreover, these data represent "real-world" data, where the extent of psoriasis has been assessed by hundreds of GPs around the UK and resulted in discrimination of the prevalence of metabolic disorders based on these clinical assessments, demonstrating the usefulness of this approach. Nevertheless, our findings are subject to a form of error (i.e., misclassification of the extent of skin involvement) that would be expected to be non-differential and thus bias toward the null. GPs were asked to assess the BSA of involvement that the patient typically demonstrates; this measure may not be stable over time, although a previous large cohort study demonstrated that, despite various therapeutic interventions, the severity of psoriasis for individuals did not generally change over time (Nijsten et al., 2007).
This study significantly advances the existing literature on psoriasis and the metabolic syndrome, as this is the first population-based study to use objective measures of psoriasis severity, direct measurement of the components of metabolic syndrome, and standard criteria for diagnosis of metabolic syndrome. Of special interest is the clear "dose-response" relationship between psoriasis severity and the metabolic syndrome. No previous study has, to our knowledge, shown a directional increase in the association with raised triglyceride levels and increasing psoriasis severity independent of the effects of obesity. The consistency with other study findings (Gisondi and Girolomoni, 2009; Al-Mutairi et al., 2010; Augustin et al., 2010; Love et al., 2011; Mebazaa et al., 2011), presence of a "dose-response" relationship, strong associations, and biological plausibility support some causality, but further mechanistic and longitudinal studies are required (Rothman and Greenland, 2005).
A possible biological mechanism that may account for this association is that the proinflammatory state associated with psoriasis functions as a central driving force for development of the metabolic syndrome. In psoriasis patients, Th1 inflammatory cytokines, e.g., tumor necrosis factor-α, IL-1, and IL-6, are increased in skin and blood (Azfar and Gelfand, 2008). These inflammatory mediators may have a range of effects on insulin signaling, lipid metabolism, and adipogenesis. In addition, inflammation-induced insulin resistance may lead to the development of a systemic insulin-resistant state (Mehta et al., 2010b). Further mechanistic studies will be needed to test this hypothesis.
Study findings demonstrate a strong association between psoriasis and the metabolic syndrome, with increasing psoriasis severity being associated with increasing odds of metabolic syndrome. Increased odds of raised triglyceride levels and serum glucose were seen in individuals with psoriasis independent of the effects of obesity. The results of this study firmly establish that the metabolic syndrome is an important comorbidity with psoriasis, and that vigilance and enhanced screening may be important in psoriasis patients, particularly those with severe disease. Examining the components of metabolic syndrome associated with psoriasis, weight reduction is clearly a key step to prevent CVD; however, our findings also show the importance of screening for the other components of metabolic syndrome, particularly hypertriglyceridemia and raised glucose levels, as these tests are more likely to be abnormal in patients with psoriasis independent of traditional risk factors (such as obesity). Small increases in the individual components of metabolic syndrome have led to an 8% absolute increase in the prevalence of metabolic syndrome overall and a 14% increase in those with severe psoriasis. Further prospective studies are required to determine the directionality of the association between psoriasis and metabolic syndrome and to study other unexplored confounders, including diet, physical activity, alcohol, and genetic factors, which may be important residual confounders in this relationship.
Study design
We conducted a cross-sectional study using The Health Improvement Network (THIN).
Study population
THIN is a computerized longitudinal general-practice database with demographic data similar to the general United Kingdom (UK) population. THIN has anonymized medical record data on 3.4 million "active" patients followed up for a cumulative 50 million person-years, and is broadly representative of the UK population. The THIN database contains demographic details, diagnoses, laboratory results, and prescriptions recorded by GPs, the gatekeepers for medical care in the UK. The version of THIN we used contained data from 413 general practices that use the "In Practice Vision" software. A number of studies have confirmed that THIN data are highly accurate, thus making it ideal for use in epidemiological research (Lewis et al., 2007; Seminara et al., 2011). The cohort was identified from individuals in the age group of 45-64 years with at least one psoriasis Read code (using a previously validated coding algorithm (Seminara et al., 2011)) in the 2 years before the survey. Patients were required to be registered with a general practice contributing actively to Additional Information Services (AIS). AIS practices have an agreement to respond to questionnaires; 55% (n = 228) of THIN practices were AIS active at the time of sampling. A total of 4,900 eligible patients with psoriasis diagnostic codes were randomly sampled, and questionnaires were sent to their GPs through AIS to verify the presence of psoriasis and the extent of disease. Up to 10 controls in the age group of 45-64 years were randomly matched to each psoriasis patient based on practice; similar to cases, controls needed to be alive and actively registered with at least one GP visit within 2 years at the time of sampling.
Outcomes
Patients were defined as having psoriasis if their diagnosis was confirmed by a questionnaire completed by their GP. The questionnaire also determined the severity of psoriasis, namely mild psoriasis (≤2% BSA), moderate psoriasis (3-10% BSA), and severe psoriasis (>10% BSA). This approach has been previously well accepted (Feldman, 2004). Cardiovascular risk factors, specifically BMI calculated using the standard formula (overweight was defined as BMI ≥25 kg m⁻² and <30 kg m⁻², obese was defined as ≥30 kg m⁻²), hypertension, hyperlipidemia, smoking, and diabetes mellitus, were identified by the presence of diagnostic Read codes and additional recording and laboratory values in the Additional Health Details portion of the database.
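The BMI definitions used above can be written as a minimal sketch; the function names and category labels are illustrative, not taken from the study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index via the standard formula: weight / height^2."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify BMI per the study's cut points:
    overweight = 25 <= BMI < 30 kg/m^2, obese = BMI >= 30 kg/m^2."""
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "normal/underweight"
```

For example, an 80 kg person of height 1.75 m has a BMI of about 26.1 kg/m² and would be classed as overweight under these cut points.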
Subjects were defined as having metabolic syndrome using the National Cholesterol Education Program (NCEP) ATP III diagnostic criteria (Expert Panel on Detection, 2001). Using NCEP criteria, a person with metabolic syndrome fulfills three or more of the following criteria: central obesity (determined by a BMI ≥30 kg m⁻² in THIN), hypertriglyceridemia ≥1.7 mmol l⁻¹, low high-density lipoprotein cholesterol (in men <1.03 mmol l⁻¹ and in women <1.29 mmol l⁻¹), high blood pressure (≥130/85 mm Hg), and high fasting glucose level (≥6.1 mmol l⁻¹). Time-varying variables were dealt with by selecting the maximum laboratory value or clinical measurement and using the most recent value for BMI. Conditions were measured from the patients' start date (defined as the latest of the Vision software or computerization in the practice and registration dates of the patient), whereas the end of the study was defined as the earliest date of transfer out, death, or end of the study period in February 2009.
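As a hedged sketch of this classification rule (the "three or more of five criteria" logic, with BMI as the study's proxy for central obesity), the operationalization could look like the following; parameter names are assumptions for illustration, with lipids and glucose in mmol/l and blood pressure in mm Hg:

```python
def metabolic_syndrome(bmi: float, triglycerides: float, hdl: float,
                       sex: str, sbp: float, dbp: float,
                       glucose: float) -> bool:
    """Return True if >= 3 NCEP ATP III criteria, as operationalized in the
    study, are met. Central obesity is proxied by BMI >= 30 kg/m^2."""
    hdl_cut = 1.03 if sex == "male" else 1.29
    criteria = [
        bmi >= 30,                 # central obesity (BMI proxy in THIN)
        triglycerides >= 1.7,      # hypertriglyceridemia
        hdl < hdl_cut,             # low HDL cholesterol (sex-specific)
        sbp >= 130 or dbp >= 85,   # high blood pressure
        glucose >= 6.1,            # high fasting glucose
    ]
    return sum(criteria) >= 3
```

For instance, an obese man with raised triglycerides and raised fasting glucose but acceptable HDL and blood pressure meets exactly three criteria and is classified as having metabolic syndrome.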
Study size
We calculated that a sample size of 4,900 would yield 4,190 patients, which would be sufficient to detect increased relative risks of 1.14 for a BMI of ≥25 kg m⁻², 1.37 for hypertension, 1.71 for hyperlipidemia, and 2.0 for diabetes mellitus, with 80% power, respectively, assuming a two-sided test and a significance level of 0.05, and we were satisfied that such differences would be clinically meaningful.
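For intuition, the kind of two-proportion power calculation behind such a statement can be sketched with the standard normal-approximation formula. This is only an illustration: the 20% baseline prevalence used below is a hypothetical value, not a figure from the study, and the formula assumes two equal groups rather than the 10:1 matched design actually used.

```python
import math

def n_per_group(p_control: float, relative_risk: float) -> int:
    """Per-group sample size to detect relative_risk vs p_control with a
    two-sided z-test for two proportions (normal approximation).
    z-values are hard-coded for alpha = 0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.8416
    p_exposed = p_control * relative_risk
    variance = p_exposed * (1 - p_exposed) + p_control * (1 - p_control)
    delta = p_exposed - p_control
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical example: baseline prevalence 20%, relative risk 1.37
# (the hypertension effect size from the study's power statement).
n = n_per_group(0.20, 1.37)
```

Under these illustrative assumptions the formula gives roughly 515 subjects per group, far below the study's achieved sample, which is consistent with the authors' claim of adequate power.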
Statistical methods
ORs and 95% CIs for the association between psoriasis overall and by psoriasis extent were calculated using conditional logistic regression. Multiplicative interaction terms were fitted to assess the effect modification by age and sex. Adjusted ORs were determined by adjusting for confounders including age, sex, and duration of follow-up time in THIN. Other possible confounders that were explored included smoking and social class, which were measured using Townsend scores (Phillimore et al., 1994). Further analyses were undertaken of the association between psoriasis and disease extent and the components of the metabolic syndrome to ensure that the findings were not explained by individual components such as obesity. Sensitivity analyses were undertaken using the revised NCEP ATP III definition (glucose cut point >5.6 mmol l⁻¹) and the International Diabetes Federation definitions of metabolic syndrome. Sensitivity analyses were also carried out using only the first and most recent laboratory value for each individual and in patients who did not receive psoriasis treatments that may affect blood pressure and lipid levels (i.e., ciclosporin or acitretin). All analyses were carried out in Stata SE10 (Stata Corporation, College Station, TX).
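For illustration, a crude (unadjusted) odds ratio with a Woolf log-based confidence interval can be computed from the headline prevalences reported in the Results (34% of 4,065 psoriasis patients vs 26% of 40,650 controls). This is not the conditional logistic regression the authors used, and the cell counts are reconstructed approximately from the reported percentages, so it only roughly reproduces the reported crude OR of 1.50:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio for a 2x2 table [[a, b], [c, d]] with a Woolf
    (log-based) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed from the reported prevalences -- approximate by design.
a, b = 1382, 4065 - 1382      # psoriasis: with / without metabolic syndrome
c, d = 10569, 40650 - 10569   # controls: with / without metabolic syndrome
or_, lo, hi = odds_ratio_ci(a, b, c, d)
```

With these reconstructed counts the crude OR comes out near 1.47 with a confidence interval of roughly 1.37-1.57, in the same range as the reported unadjusted estimate (OR 1.50, 95% CI 1.40-1.61).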
Ethics
This study was approved by the University of Pennsylvania institutional review board and the Cambridgeshire Research Ethics Committee, and was funded by the National Heart Lung and Blood Institute of the NIH.
Role of the funding source
The sponsors had no role in the conduct or interpretation of the study. The corresponding and senior author had full access to all data in the study and had final responsibility for the decision to submit for publication.
|
2018-04-03T05:13:04.585Z
|
2011-11-24T00:00:00.000
|
{
"year": 2011,
"sha1": "d6a91652709a6e20dbfa5d1c22fd5cdbec395f1d",
"oa_license": "publisher-specific-oa",
"oa_url": "http://www.jidonline.org/article/S0022202X15356712/pdf",
"oa_status": "BRONZE",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c14ab677b383af4a5d2606cd49f57a46014bacf7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15840563
|
pes2o/s2orc
|
v3-fos-license
|
Giant Nasolabial Cyst Treated Using Neumann Incision: Case Report
Introduction A nasolabial cyst is an ectodermal development cyst. It presents as a fullness of canine fossa, nasal ala, or vestibule of the nose. It is rare and usually small. Treatment consists of complete surgical excision or transnasal endoscopic marsupialization. Objective To describe a giant nasolabial cyst case treated using Neumann incision. Case Report A 37-year-old man was referred to the otolaryngology department with nasal obstruction and nasal deformity. Computed tomography showed a nasal cystic lesion 4 × 4.5 × 5 cm wide. Surgical excision using Neumann incision was performed. Discussion Neumann incision provides wide access to the nasal cavity and may be useful in nasolabial cyst treatment.
Introduction
Nasolabial cysts are also known as nasoalveolar cysts or Klestadt cysts and were described in 1882 by Zuckerkandl. 1,2 They are rare, 3 affecting 1.6 per 100,000 persons per year. 2 They occur more frequently in females (4:1), especially among African Americans, in the fourth and fifth decades of life. 2,4,5 In 90% of the cases, they are unilateral. 5 Their growth is slow and painless, so they are often underdiagnosed. 1-3,6 Patients typically complain of deformity and nasal obstruction. Signs and symptoms include nasal obstruction, local pain, swelling, and facial deformity. 1,2,4,7 Usually, there is a fullness of the canine fossa, the nasal ala, and the nasal vestibule. The bulging reaches the nasal cavity beneath the anterior third of the inferior turbinate, resulting in obliteration of the nasolabial fold and elevation of the alae of the nose. 2 Because of its close anatomical relation to the nasal cavity and teeth, a nasolabial cyst may become infected easily. When it is infected, it grows quickly and may be painful. 1 A nasolabial cyst is diagnosed by clinical examination and is confirmed by histopathologic study. 1,2 Usually, the nasolabial fold is obliterated. The cysts must be palpated bimanually, with one finger in the floor of the nasal vestibule and another in the labial sulcus. Imaging tests, like computed tomography (CT) of the paranasal sinuses and nuclear magnetic resonance (NMR), may be useful. 1,8 CT shows a cystic lesion located anterior to the piriform aperture; its contents may be homogeneous. Cysts are hyperintense on T1 and isointense with cerebrospinal fluid on T2-weighted images at NMR, without changes after fat suppression. 1,8 Differential diagnosis includes cysts of the nasopalatine duct, periapical inflammatory lesions (granuloma, cyst, or abscess), and epidermoid or epidermal inclusion cysts. 1 Complete surgical excision of the nasolabial cyst is the best treatment. The most used incision is the sublabial. 2-4,7

Our aim is to describe the Neumann incision to treat a giant nasolabial cyst.
Case Report
A 37-year-old black man had bilateral nasal obstruction, which improved with saline nasal lavage and nasal corticosteroids. Two years later, he presented with bulging of the nasal vestibule, nasal deformity and asymmetry, and worsened nasal obstruction. Hyposmia, dysgeusia, tenderness, and headache were also reported.
Keywords: cysts, nasal obstruction, nasal cavity
Puncture of the right nasal vestibule was performed, removing 60 mL of serous liquid, relieving the symptoms. However, the lesion recurred with worsening of symptoms.
CT was performed and a cystic lesion anterior to the right pyriform aperture, 4 × 4.5 × 5.5 cm wide, was found, pushing the nasal septum to the left and bulging the palate (►Fig. 1).
Complete surgical excision of the cyst was performed using the Neumann incision (►Fig. 1). Histopathologic study revealed squamous and respiratory epithelium with chronic inflammatory process associated with histiocytic reaction. Culture of the secretion isolated Streptococcus viridans.
The patient currently has no evidence of recurrent disease at 2 years postoperatively (►Fig. 2).
Discussion
The origin of nasolabial cysts is controversial. There are two main theories about their development. The first suggests that the cysts derive from inclusion cysts, secondary to mesenchymal cells after the fusion of the medial and lateral nasal prominences to the maxillary prominence during facial skeleton formation. The other theory suggests the cyst is an epithelial remnant of the nasolacrimal duct, running between the lateral nasal and maxillary prominences. 1,2 The most common histologic type is pseudostratified columnar epithelium, followed by squamous stratified epithelium and simple cuboidal epithelium. 7 Su et al assessed 10 cysts by electron microscopy and noticed that the cysts had a highly plicated mucosa and were made of nonciliated stratified columnar epithelium, including basal and goblet cells, structurally different from the ciliated columnar epithelium of the paranasal and nasal sinuses and airways. 9 The treatment of nasolabial cysts consists of complete removal, aiming to prevent infections, define histologic type, and improve esthetics. Fine needle aspiration and cauterization are other treatment options available, but these techniques carry a high recurrence rate. 1 As reported in the case, cyst puncture for pain relief should always be considered.
Endodontists currently use the Neumann incision in performing alveoloplasties. It became popular as an alternative to approaching the maxillary sinus in 1970, replacing the Caldwell-Luc technique. 3 It consists of an incision in the free edge of the gingiva in the region of the interdental papillae, from the medial portion of the lateral incisor to the lateral portion of the second premolar and first molar, with two vertical extensions, one medial and the other lateral to the gingiva and labia, with subsequent elevation of the flap created, allowing full access to the pyriform aperture (►Fig. 2). After the intervention, the mucosal flap is put back into its original position, and the papillae are sutured with absorbable sutures and an atraumatic needle, as are the vertical extensions. The incision takes account of the vessels and nerves in the region; therefore, local sensory disturbances and bleeding are minimal. The potential complications are facial swelling, insensitive gingiva, teeth numbness, and surgical site infection. 3 The patient must be counseled regarding nose blowing and toothbrush use at the surgical site. Diet should be reintroduced slowly during the first week. Dental prostheses can be used immediately after the surgery. 3
Fig. 1 Computed tomography (1 and 4) and surgical aspects (2 and 3).
Choi et al described a series of cases in which the cysts varied in size from 1 × 1 cm to 3 × 5 cm. 7 In the case reported, the cyst measured 4 × 4.5 × 5 cm, and during follow-up, no signs of recurrence or complications were noticed.
Conclusion
Despite its rarity, nasolabial cysts should be considered in differential diagnosis when there is swelling of the floor of the nasal cavity or vestibule. The Neumann incision permits good access to the pyriform aperture and the complete cyst excision.
|
2017-06-21T15:52:00.233Z
|
2013-09-01T00:00:00.000
|
{
"year": 2013,
"sha1": "56695cb145859d220b713a13c30c6996d3c83ff2",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0033-1351674.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56695cb145859d220b713a13c30c6996d3c83ff2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|